The OpenAIRE Research Graph

The OpenAIRE Research Graph is one of the largest open scholarly record collections worldwide, key in fostering Open Science and establishing its practices in daily research activities.
Conceived as a public and transparent good, populated out of data sources trusted by scientists, the Graph aims at bringing discovery, monitoring, and assessment of science back into the hands of the scientific community.

Imagine a vast collection of research products, all linked together, contextualised and openly available. For the past ten years OpenAIRE has been working to gather this valuable record: a massive collection of metadata and links between scientific products, such as articles, datasets, software and other research products, and entities like organisations, funders, funding streams, projects, communities, and data sources.

As of today, the OpenAIRE Research Graph aggregates around 450 million metadata records with links, collected from about 10,000 data sources trusted by scientists, including:
  • Repositories registered in OpenDOAR or re3data.org
  • Open Access journals registered in DOAJ
  • Crossref
  • Unpaywall
  • ORCID
  • Microsoft Academic Graph
  • Datacite

After cleaning, deduplication, enrichment and full-text mining, the graph is analysed to produce statistics for OpenAIRE MONITOR (https://monitor.openaire.eu) and the Open Science Observatory (https://osobservatory.openaire.eu), made discoverable via OpenAIRE EXPLORE (https://explore.openaire.eu), and programmatically accessible as described at https://develop.openaire.eu.
JSON dumps are also published on Zenodo.

TODO: image of high-level data model (entities and semantic relationships, we can draw here: https://docs.google.com/drawings/d/1c4s7Pk2r9NgV_KXkmX6mwCKBIQ-yK3_m6xsxB-3Km1s/edit)

Graph Data Dumps

To accommodate different needs, several dumps are available, all published under the Zenodo community called OpenAIRE Research Graph.
Here we provide detailed documentation about the full dump:

  • JSON schema
  • FAQ

Graph provision processes

Deduplication business logic

Deduplication business logic for research results

Metadata records about the same scholarly work can be collected from different providers. Each metadata record may carry different information because, for example, some providers are not aware of links to projects, keywords or other details. Another common case is when OpenAIRE collects one metadata record from a repository about a pre-print and another record from a journal about the published article. For the provision of statistics, OpenAIRE must identify those cases and “merge” the two metadata records, so that the scholarly work is counted only once in the statistics OpenAIRE produces.

Duplicates among research results are identified only among results of the same type (publications, datasets, software, other research products). If, for example, two duplicate records are aggregated one as a dataset and one as software, they will never be compared and never identified as duplicates.
OpenAIRE supports different deduplication strategies depending on the type of results.

Methodology overview

The deduplication process can be divided into two different phases:
  • Candidate identification (clustering)
  • Candidate matching (blocking)

The implementation of each phase differs depending on the type of results being processed.

Strategy for publications

TODO: UPDATE

Candidate identification (clustering)
Due to the high number of metadata records collected by OpenAIRE, it would not be feasible to compute all possible comparisons between all metadata records.
The goal of this phase is to limit the number of comparisons by creating groups (or clusters) of records that are likely “similar”. Every record can be added to more than one group. The idea is that we do not need to compare two publications whose titles are completely different.
The inclusion of a record in a group is decided by three clustering functions (see Section “Clustering functions” for details about each of them) that work on titles and DOIs to create clusters:
  • whose publications have similar titles (clustering functions “suffixprefix” and “ngrampairs”);
  • whose publications have the same DOI, even if written in lower/upper/mixed case (clustering function “lowercase”).
A sketch of how such cluster keys could be derived is given below.
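The following Python sketch illustrates the mechanism. It is an illustration only, not the production implementation: the function names, the stopword list and the parameter values are assumptions of this sketch (the helpers defined here are reused by the later sketches).

  import re

  # Hypothetical stopword list; the real functions have their own.
  STOPWORDS = {"for", "the", "of", "a", "an", "and", "in", "on"}

  def normalize_title(title):
      """Lowercase and keep only letters, digits and spaces."""
      return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

  def clustering_keys(title, doi=None, n=3):
      """Set of cluster keys a publication record is assigned to."""
      words = [w for w in normalize_title(title).split() if w not in STOPWORDS]
      ngrams = [w[:n] for w in words]
      keys = set()
      # "ngrampairs": concatenated ngrams of consecutive words
      keys.update(a + b for a, b in zip(ngrams, ngrams[1:]))
      # "suffixprefix": suffix of a word + prefix of the next one
      keys.update(w1[-n:] + w2[:n] for w1, w2 in zip(words, words[1:]))
      # "lowercase": the DOI, case-normalised
      if doi:
          keys.add(doi.lower())
      return keys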

Candidate matching (blocking)
Once the clusters have been composed, the algorithm proceeds with the comparisons.
Still, the number of records in one cluster may be too high to compute all possible comparisons within it, hence we introduced the concept of a “sliding window”.
With this mechanism, a window of a certain size (currently set to 200) is slid over the cluster and only records within the window are compared with each other. To maximise the probability that duplicated records fall within the window bounds, records in the cluster are ordered by the values of some of their attributes: specifically, publication metadata records in each cluster are ordered lexicographically by a normalised version of their titles, as in the sketch below.
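A minimal sketch of the sliding-window pass over one cluster, reusing normalize_title from the previous sketch; the match predicate (the decision tree) is sketched below.

  def compare_cluster(records, match, window=200):
      """Order records lexicographically by normalised title, then compare
      each record only with the records following it within the window."""
      ordered = sorted(records, key=lambda r: normalize_title(r["title"]))
      pairs = []
      for i, rec in enumerate(ordered):
          for other in ordered[i + 1 : i + window]:
              if match(rec, other):  # decision tree, sketched below
                  pairs.append((rec, other))
      return pairs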
Each record in the window is compared with all the other records in the window. Comparisons are driven by a decision tree that can be depicted as in figure 1:
  • First, a sufficient condition (in orange in figure 1) is applied: if the PIDs of the two records match (condition function “pidMatch”), then the two records are duplicates. Otherwise, the algorithm proceeds with the next conditions (in yellow in figure 1).
  • If the titles of the two records contain numbers and these numbers are not the same, then the records are not duplicates (condition function “titleVersionMatch”).
  • If the two records contain different numbers of authors (condition function “sizeMatch” on the metadata field “author”), then the records are not duplicates. If both yellow conditions are satisfied, the algorithm proceeds with the last comparison (in blue in figure 1).
  • The titles of the two records are normalised and compared for similarity with a normalised Levenshtein measure (condition function “LevenshteinTitle”), which returns a number in the range [0,1], where 0 means “very different” and 1 means “equal”. If the value is greater than or equal to 0.99, the two records are identified as duplicates.
A sketch of this decision tree follows.
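In this sketch, SequenceMatcher is only a stand-in for a proper normalised Levenshtein measure, and the PID check is simplified (the actual “pidMatch” requires the majority of PIDs to be shared, see “Conditional functions” below); normalize_title comes from the clustering sketch.

  import re
  from difflib import SequenceMatcher  # stand-in, not true Levenshtein

  def numbers_in(title):
      """Arabic numerals in a title (Roman numerals omitted here)."""
      return set(re.findall(r"\d+", title))

  def publications_match(a, b):
      """Records are dicts with 'pids' (a set of (type, value) pairs),
      'title' and 'authors'; threshold taken from the text above."""
      # Orange, sufficient condition: matching PIDs => duplicates.
      # (Simplified: "pidMatch" actually requires a majority of shared PIDs.)
      if a["pids"] & b["pids"]:
          return True
      # Yellow: conflicting numbers in the titles => not duplicates.
      if numbers_in(a["title"]) != numbers_in(b["title"]):
          return False
      # Yellow: different author counts => not duplicates.
      if len(a["authors"]) != len(b["authors"]):
          return False
      # Blue: normalised title similarity must reach 0.99.
      ta, tb = normalize_title(a["title"]), normalize_title(b["title"])
      return SequenceMatcher(None, ta, tb).ratio() >= 0.99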

Strategy for datasets

Strategy for software

Strategy for other types of research products

Clustering functions

NgramPairs
It produces a list of concatenations of pairs of ngrams generated from consecutive words.
Example:
Input string: “Search for the Standard Model Higgs Boson”
Parameters: ngram length = 3
List of ngrams: “sea”, “sta”, “mod”, “hig”
Ngram pairs: “seasta”, “stamod”, “modhig”
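A possible implementation consistent with the example above; the stopword handling (shared with the clustering sketch) and the cap on the number of ngrams are inferred from the example and may differ from the production parameters.

  def ngram_pairs(title, n=3, max_ngrams=4):
      """Sketch of "ngrampairs"; STOPWORDS from the clustering sketch."""
      words = [w for w in title.lower().split() if w not in STOPWORDS]
      ngrams = [w[:n] for w in words][:max_ngrams]
      return [a + b for a, b in zip(ngrams, ngrams[1:])]

  # ngram_pairs("Search for the Standard Model Higgs Boson")
  # -> ['seasta', 'stamod', 'modhig']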
SuffixPrefix
It produces ngram pairs in a particular way: it concatenates the suffix of a word with the prefix of the next word in the input string.
Example:
Input string: “Search for the Standard Model Higgs Boson”
Parameters: suffix and prefix length = 3
Output list: “ardmod” (suffix of the word “Standard” + prefix of the word “Model”), “rchsta” (suffix of the word “Search” + prefix of the word “Standard”)
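A possible implementation consistent with the example above; the cap of two pairs is inferred from the example output and may differ from the production parameters.

  def suffix_prefix(title, n=3, max_pairs=2):
      """Sketch of "suffixprefix"; words shorter than the affix length
      are skipped, STOPWORDS from the clustering sketch."""
      words = [w for w in title.lower().split()
               if w not in STOPWORDS and len(w) > n]
      pairs = [w1[-n:] + w2[:n] for w1, w2 in zip(words, words[1:])]
      return pairs[:max_pairs]

  # suffix_prefix("Search for the Standard Model Higgs Boson")
  # -> ['rchsta', 'ardmod']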

Conditional functions
PidMatch
Compares two sets of persistent identifiers [type, value]. The condition is satisfied when the majority of the PIDs are in common.
SizeMatch
Compares the number of occurrences of two repeatable fields. The condition is satisfied when the numbers match.
TitleVersionMatch
Compares two titles. The condition is satisfied when the numbers (Arabic or Roman) contained in the title fields are the same.
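The following sketches make the three predicates concrete; the reading of “majority” in pidMatch and the naive Roman-numeral detection are assumptions of this sketch, not guaranteed details of the production code.

  import re

  def pid_match(pids_a, pids_b):
      """'pidMatch': PIDs are (type, value) pairs; reading "majority" as
      more than half of the smaller set is an assumption."""
      common = set(pids_a) & set(pids_b)
      smaller = min(len(pids_a), len(pids_b))
      return smaller > 0 and len(common) > smaller / 2

  def size_match(field_a, field_b):
      """'sizeMatch': same number of occurrences of a repeatable field."""
      return len(field_a) == len(field_b)

  def title_version_match(title_a, title_b):
      """'titleVersionMatch': the Arabic and Roman numerals in the two
      titles coincide (naive Roman-numeral detection)."""
      def numbers(t):
          arabic = set(re.findall(r"\d+", t))
          roman = set(re.findall(r"\b[ivxlcdm]+\b", t.lower()))
          return arabic | roman
      return numbers(title_a) == numbers(title_b)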

TODOs

  • OpenAIRE entity identifier & PID mapping policy
  • Aggregation business logic by major sources:
    • Unpaywall integration
    • Crossref integration
    • ORCID integration
    • Cross cleaning actions: hostedBy patch
    • Scholexplorer business logic (relationship resolution)
    • DataCite
    • EuropePMC
    • more….
  • Deduplication business logic
    • For research outputs
    • For research organizations
  • Enrichment
    • Mining business logic
    • Deduction-based inference
    • Propagation business logic
  • Post-cleaning business logic
  • FAQ
