D64 OpenAIRE Maintenance Report » History » Version 45

Jochen Schirrwagen, 15/12/2016 02:57 PM
h1. D6.4 OpenAIRE Maintenance Report (v1, 9th of December 2016)

{{toc}}

h2. Overview

This document contains information about the deployment and status of the OpenAIRE2020 services and content, and the history of major modifications of the system hosted at ICM, Poland. The Zenodo repository is hosted at CERN, Switzerland.

The official maintenance of the OpenAIRE2020 services began on January 1st, 2015, when the project started.

*TODO: elaborate the following*

The deliverable consists of a high-level report on the status of:

* OpenAIRE workflows (CNR),
* services (ICM),
* the Information Space (UNIBI).

h2. Information Space

Brief description of the data model and status of content, i.e. numbers about data providers, their typology, publications, datasets, links, etc.

@UNIBI: your contribution is needed here

The OpenAIRE [[Core Data Model]] comprises the following interlinked entities: **results** (in the form of publications, datasets and patents), **persons**, **organisations**, **funders**, **funding streams**, **projects**, and **data sources** (in the form of institutional, thematic, and data repositories, Current Research Information Systems (CRIS), thematic and national aggregators, publication catalogues, and entity registries).

h3. Data Provider

h3. Content Status

|*data type*|*count*|
|publication metadata|17460368|
|dataset metadata|3226586|
|projects|653268|
|organizations|64591|
|authors|16188328|
|EuropePMC XML fulltext|1574358|
|PDF fulltext|2227458|

|*inference type*|*count*|
|datasets matched|88610|
|projects matched|351302|
|software URL references|21481|
|protein DB references|196462|
|research initiative references|7294|
|documents classified|2405869|
|similar documents found|164602477|
|citations matched by reference text|11390293|
|citations matched by ID|3929053|

Tables are based on the IIS report generated on November 20, 2016 for the OpenAIRE production infrastructure.
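
The full-text figures can be related to the publication count with simple arithmetic on the table above (a rough estimate only, since a record may have both an XML and a PDF fulltext):

```python
# Share of publication metadata records with a harvested fulltext, using the
# counts from the table above. Records with both formats are counted twice,
# so this is an upper bound on actual coverage.
publications = 17460368
europepmc_xml = 1574358
pdf = 2227458

fulltext_share = (europepmc_xml + pdf) / publications
print(f"{fulltext_share:.1%}")  # roughly 21.8%
```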

h3. Zenodo Content Status

|*data type*|*count or size*|
|records|96771|
|managed files|181778|
|files total size|8TB|

h2. [[OpenAIRE workflows]]

The OpenAIRE aggregation system is based on the "D-NET software toolkit":http://www.d-net.research-infrastructures.eu/. D-NET is a service-oriented framework specifically designed to support developers in constructing custom aggregative infrastructures in a cost-effective way. D-NET offers data management services capable of providing access to different kinds of external data sources, storing and processing information objects of any data model, converting them into common formats, and exposing information objects to third-party applications through a number of standard access APIs. Most importantly, D-NET offers infrastructure-enabling services that facilitate the construction of domain-specific aggregative infrastructures by selecting and configuring the needed services and easily combining them to form autonomic data processing workflows.
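
The conversion of heterogeneous harvested records into a common format can be illustrated with a minimal sketch. All field names and the target layout below are hypothetical, for illustration only; D-NET implements this step with its own transformation services and schemas:

```python
# Sketch of the kind of record normalisation an aggregator performs.
# Field names and the target format are illustrative, not D-NET's actual schema.

def to_common_format(record: dict, source_id: str) -> dict:
    """Map a harvested record (e.g. oai_dc fields) onto a common internal format."""
    return {
        "objIdentifier": f"{source_id}::{record['identifier']}",
        "title": record.get("title", "").strip(),
        "creators": [c.strip() for c in record.get("creators", [])],
        "date": record.get("date"),
        "source": source_id,
    }

# Example harvested record (hypothetical repository and identifier)
harvested = {
    "identifier": "oai:repo.example.org:1234",
    "title": "  An Open Access Publication ",
    "creators": ["Doe, Jane "],
    "date": "2016-11-20",
}
common = to_common_format(harvested, "repo.example.org")
```

In a real aggregator the same normalised shape would then feed storage, indexing, and the inference services downstream.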

The Enabling Layer contains the Services supporting the application framework. These provide functionalities such as Service registration, discovery, subscription and notification, and data transfer mechanisms through ResultSet Services. Most importantly, these Services are configured to orchestrate Services of other layers to fulfil OpenAIRE-specific requirements and implement the *[[OpenAIRE workflows]]*.

h2. Services

This section describes how the system is maintained.

h3. Software life-cycle

h4. D-NET services

The D-NET services are shipped as web applications and deployed on the Tomcat application server (v7.0.52) on three distinct systems: dev, beta, and production. To support the deployment process, all software artifacts are automatically built on a continuous integration system ("Jenkins":https://ci.research-infrastructures.eu) and hosted on a dedicated Maven repository ("Nexus":http://maven.research-infrastructures.eu/nexus), while webapp builds are made available via an "HTTP server":http://ppa.research-infrastructures.eu/ci_upload. The tools supporting the software lifecycle are maintained by CNR.
75
76
The D-NET services deployment is performed in subsequent stages:
77
* The development infrastructure plays the role of test bench for the integration of the software developed by different institutions. It is maintained by CNR and runs mostly un-released code and contains mock or subsets of the data available on the production system.
78
* The beta infrastructure runs only released code. It is maintained by ICM and consists of the final integration stage where all the system workflows are tested on the real data (not necessarily the same data as the production system) before making them available to the production system. Although the software running on the beta system is not yet production ready, its portal is publicly accessible in order to showcase new features and data.
79
* The production infrastructure is maintained by ICM and runs only code that was tested on the beta system.
80
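
The promotion rules of the three stages can be modelled as a simple gate. This is a conceptual sketch with illustrative names; in practice the gating is enforced through the Jenkins/Nexus release process, not code like this:

```python
# Sketch of the dev -> beta -> production promotion gates described above.
# The Release class and its flags are illustrative, not part of D-NET.

from dataclasses import dataclass

@dataclass
class Release:
    version: str
    is_released: bool      # built from a tagged release, not a snapshot
    tested_on_beta: bool   # workflows verified on the beta system

def can_deploy(release: Release, stage: str) -> bool:
    if stage == "dev":
        return True                      # dev may run unreleased code
    if stage == "beta":
        return release.is_released       # beta runs only released code
    if stage == "production":
        # production runs only code already tested on beta
        return release.is_released and release.tested_on_beta
    raise ValueError(f"unknown stage: {stage}")

snapshot = Release("1.2.0-SNAPSHOT", is_released=False, tested_on_beta=False)
candidate = Release("1.2.0", is_released=True, tested_on_beta=True)
```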

D-NET backend services are packed in four different web applications, each of them running on a dedicated Tomcat instance.

h4. Information Inference Service

"Information Inference Service":https://github.com/openaire/iis versioning and deployment is described on the "IIS versioning and deployment":https://issue.openaire.research-infrastructures.eu/projects/openaire/wiki/IIS_versioning_and_deployment wiki page.

Formerly, IIS was deployed on a CDH4 cluster. On October 1st, 2015, a dedicated "cdh5 branch":https://github.com/openaire/iis/commits/cdh5 was created, where new Spark modules were introduced and existing modules were optimized. On November 20, 2016, for the first time, all inferences in the production infrastructure were generated by IIS deployed on the new CDH5 OCEAN cluster. Both improved stability and a major performance increase were observed: inference generation time decreased from over 2 days to 12 hours.
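
The figures above imply at least a fourfold speedup (taking "over 2 days" as a 48-hour lower bound):

```python
# Back-of-the-envelope speedup implied by the migration figures above.
cdh4_hours = 48   # "over 2 days", taken as a lower bound
cdh5_hours = 12
speedup = cdh4_hours / cdh5_hours
print(speedup)  # at least 4x faster
```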

h4. Portal

The OpenAIRE portal is hosted at ICM. It uses Joomla! 3.6.2, a free dynamic portal engine and content management system (CMS).

Joomla! depends on the following upstream applications:

* Apache 2.4.7
* PHP 5.5.9
* MySQL 5.5.53
* OpenLDAP 2.4.31

h4. Zenodo

The Zenodo repository runs an instance of the Invenio software, developed by CERN.

The repository is deployed in a production system (https://zenodo.org) and a QA system (https://sandbox.zenodo.org). In total, the two systems run on 30 VMs hosted in CERN's OpenStack infrastructure. All machines are configured using Puppet and run on top of CERN CentOS 7.2.

Zenodo/Invenio depends on the following applications:

* HAProxy for load balancing
* Nginx for serving static content and proxying requests to the application server
* uWSGI as the application server for the Zenodo/Invenio application
* Redis for memory caching
* RabbitMQ as the message broker
* Celery as the distributed task queue
* PostgreSQL as the database
* Elasticsearch as the search engine

The deployment process is described at http://zenodo.readthedocs.io/projectlifecycle.html#release-process

See https://github.com/zenodo/zenodo/commits/production for changes to the Zenodo production system (this does not include changes to Invenio modules).

The Zenodo repository was relaunched on Invenio v3 alpha on September 12, 2016.

h3. Infrastructure services

The OpenAIRE2020 services are a continuation and incremental extension of the services that resulted from the OpenAIRE+ project, so they are still hosted on the same machines. More details are available in the "OpenAIRE+ WP5 Maintenance Report":http://wiki.openaire.eu/xwiki/bin/view/OpenAIREplus%20Specific/WP5%20Maintenance%20Report.

h4. Hadoop clusters

h5. DM hadoop cluster

CDH version: @cdh4.3.1@

h5. IIS hadoop cluster

Two IIS clusters have been deployed:

* the old CDH4 IIS cluster, version @cdh4.3.1@, in operation until December 9, 2016
* the new CDH5 IIS cluster, deployed on March 22, 2016 in the OCEAN infrastructure, which supports MRv2 on YARN and Spark

CDH5 cluster version history:

* @5.5.2@ deployment on March 22, 2016
* @5.5.2 -> 5.7.5@ upgrade on November 30, 2016
* @5.7.5 -> 5.9.0@ upgrade on December 8, 2016

h4. Databases

|_. database type |_. usage |_. version |
| PostgreSQL | statistics | @9.1.23@ |
| PostgreSQL | D-NET services | @9.3.14@ |
| MongoDB | D-NET services | @3.2.6@ |
| Virtuoso | LOD | @7.2@ |

h4. Piwik analytics platform

The currently deployed Piwik version is @2.17.1@, since December 6, 2016.

h4. ownCloud filesync platform

Deployed at https://box.openaire.eu. The current version is @8.2.7@, since October 19, 2016.

h3. Architectural changes

[[D64_Servers_Administration_Operations_Changelog|Change Log for servers administration operations]]

h4. Introducing CDH5 IIS cluster hosted in OCEAN infrastructure

Slave node specification:

* Huawei RH1288 V3
* CPU: 2x Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz (24 cores, 48 threads)
* RAM: 128GB
* HDD: 4x SATA 6TB 7.2K RPM (HDFS)

Cluster summary (16 slaves):

* CPU: 384 cores, 768 threads
* RAM: 2048GB
* HDD: 384TB (HDFS)

YARN available resources:

* vcores: 640
* memory: 1.44TB
* HDFS: 344TB
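
Assuming resources are spread evenly over the 16 slaves, the headroom YARN leaves per node for the OS and Hadoop daemons can be checked with simple arithmetic (a back-of-the-envelope sketch based on the figures above, not a configuration dump):

```python
# Per-node YARN allocation implied by the cluster figures above (16 slaves).
slaves = 16
yarn_vcores, yarn_memory_tb = 640, 1.44   # YARN-allocatable totals
hw_threads_per_node, hw_ram_gb_per_node = 48, 128  # per-node hardware

vcores_per_node = yarn_vcores / slaves               # 40 of 48 hardware threads
memory_gb_per_node = yarn_memory_tb * 1024 / slaves  # ~92 GB of 128 GB RAM
```

The remaining 8 threads and ~36 GB per node are left for the operating system, HDFS DataNode, and other daemons.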

h4. Incorporating resources from the old CDH4 IIS cluster into the existing DM CDH4 cluster

This became possible after the old IIS CDH4 cluster was shut down on December 9, 2016.

h4. Deploying D-NET PostgreSQL and MongoDB databases on separate machines

The @openaire-services@ database instances were separated into dedicated ones (since June 27, 2016):

* @openaire-services-postgresql@
* @openaire-services-mongodb@

h4. Updating Zenodo repository infrastructure at CERN

Several architectural changes were introduced in CERN's infrastructure:

* changed storage backend from OpenAFS to CERN EOS (18 PB disk cluster) for better scalability
* changed from a self-managed MySQL database to a PostgreSQL database managed by the CERN Database Team
* deployment of Elasticsearch clusters (6 VMs)
* migration from SLC to CentOS 7 on all 30 VMs

h3. System downtimes

h4. ICM

* [planned] November 14, 2016, 2 hours. #2423 dealing with the Linux Dirty COW vulnerability: kernel upgrade, OpenAIRE services restart.

h4. CERN

* [unplanned] September 16, 2016, 3 hours. Preparation of a new load balancer caused an automatic update of the CERN outer-perimeter firewall, which automatically closed access to the operational load balancers.
* [planned] September 12, 2016, 8 hours. Complete migration from the old infrastructure to the new infrastructure.
* minor incidents until September 12, 2016, due to overload of the legacy system.