OpenAIRE Research Graph » History » Revision 22

Revision 21 (Alessia Bardi, 05/11/2021 03:57 PM) → Revision 22/36 (Paolo Manghi, 05/11/2021 06:39 PM)

h1. The OpenAIRE Research Graph 

 The OpenAIRE Research Graph is one of the largest open scholarly record collections worldwide, and a key asset in fostering Open Science and establishing its practices in daily research activities. 
 Conceived as a public and transparent good, populated from data sources trusted by scientists, the Graph aims to bring discovery, monitoring, and assessment of science back into the hands of the scientific community. 

 Imagine a vast collection of research products, all linked together, contextualised, and openly available. For the past ten years OpenAIRE has been working to gather this valuable record: a massive collection of metadata and links between scientific products such as articles, datasets, software, and other research products, and between entities such as organisations, funders, funding streams, projects, communities, and data sources. 

 As of today, the OpenAIRE Research Graph aggregates around 450Mi metadata records with links, collected from about 10K data sources trusted by scientists, including: 
 * Repositories registered in OpenDOAR or re3data.org 
 * Open Access journals registered in DOAJ 
 * Crossref 
 * Unpaywall 
 * ORCID 
 * Microsoft Academic Graph 
 * Datacite 

 After cleaning, deduplication, enrichment, and full-text mining processes, the graph is analysed to produce statistics for OpenAIRE MONITOR (https://monitor.openaire.eu) and the Open Science Observatory (https://osobservatory.openaire.eu); it is made discoverable via OpenAIRE EXPLORE (https://explore.openaire.eu) and programmatically accessible as described at https://develop.openaire.eu. 
 JSON dumps are also published on Zenodo. 

 TODO: image of high-level data model (entities and semantic relationships, we can draw here: https://docs.google.com/drawings/d/1c4s7Pk2r9NgV_KXkmX6mwCKBIQ-yK3_m6xsxB-3Km1s/edit) 

 h2. Graph Data Dumps 

 To facilitate different usage needs, several dumps are available. All of them are published in the "Zenodo community OpenAIRE Research Graph":https://zenodo.org/communities/openaire-research-graph. 
 Here we provide detailed documentation about the full dump: 

 * JSON dump: https://doi.org/10.5281/zenodo.3516917 
 * JSON schema: https://doi.org/10.5281/zenodo.4238938  
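
 The full dump is large, so it is typically processed in a streaming fashion. Below is a minimal sketch for iterating over records, assuming the dump's part files are gzip-compressed, newline-delimited JSON (the file name in the usage comment is hypothetical; check the dump's README for the actual layout):

```python
import gzip
import json

def read_records(path):
    # Yield one metadata record per line from a gzip-compressed,
    # newline-delimited JSON part file of the dump.
    with gzip.open(path, "rt", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield json.loads(line)

# Hypothetical part file name:
# for record in read_records("publication/part-00000.json.gz"):
#     print(record["id"])
```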

 [[Json schema]] 
 [[FAQ]] 

 

 h2. Graph provision processes 

 

 [[OpenAIRE entity identifier and PID mapping policy]] 

 

 h3. Aggregation business logic by major sources 

 DOIBoost is the intersection of Crossref, Unpaywall, Microsoft Academic Graph, and ORCID. 
 [[DOIBoost]] 

 [[Datacite]] 

 [[EuropePMC]] 

 The strategy for the resolution of links between publications and datasets is defined by Scholexplorer 
 [[Scholexplorer]] 

 

 h3. Deduplication business logic 

 h4. Deduplication business logic for research results  

 Metadata records about the same scholarly work can be collected from different providers. Each metadata record can possibly carry different information because, for example, some providers are not aware of links to projects, keywords or other details. Another common case is when OpenAIRE collects one metadata record from a repository about a pre-print and another record from a journal about the published article. For the provision of statistics, OpenAIRE must identify those cases and “merge” the two metadata records, so that the scholarly work is counted only once in the statistics OpenAIRE produces.  

 Duplicates among research results are identified only among results of the same type (publications, datasets, software, other research products). If two duplicate results are aggregated with different types, for example one as a dataset and one as software, they will never be compared and thus never identified as duplicates. 
 OpenAIRE supports different deduplication strategies based on the type of the results. 

 *Methodology overview* 

 The deduplication process can be divided into three phases: 
 * Candidate identification (clustering) 
 * Duplicates identification (decision tree) 
 * Creation of representative record 

 The implementation of each phase is different based on the type of results that are being processed. 


 *Strategy for publications* 

 _Candidate identification (clustering)_ 


 Clustering is a common heuristic used to overcome the N x N complexity that comparing all pairs of records would require: given the high number of metadata records collected by OpenAIRE, it would not be feasible to compute all possible comparisons between all metadata records. 
 The goal of this phase is to limit the comparisons to records that may lead to a match, by creating groups (or clusters) of records that are likely “similar”. Every record can be added to more than one group. 
 Since the equivalence function is to some extent tolerant to minimal errors (e.g. switched characters in the title, or minimal differences in letters), the clustering function must be neither too precise (e.g. a hash of the title) nor too flexible (e.g. random ngrams of the title). On the other hand, in some cases the equality of two records can only be determined by their PIDs (e.g. DOI), as the metadata properties are very different across versions and no clustering function would ever bring them into the same cluster. To match these requirements, OpenAIRE clustering for products works with two functions: 
 * Lowercase: the function generates the lowercased doi, when this is provided as part of the record properties (in the pid list and in the alternate identifiers list); 
 * WordsStatsSuffixPrefixChain: a title-based function that generates keys combining (i) statistics on the title, i.e. the number of significant words in the full title (normalized, stemming, etc.) and the module 10 of the number of characters of such words (number_of_words & number_of_letters%10), and (ii) a string obtained as an alternation of the prefix(3) and suffix(3) (and vice versa) of the first words of the title. 
 Example: 
 If the title is “Search for the Standard Model Higgs Boson”, the clustering function produces 2 keys (i.e. it adds the publication to two clusters): [5-3-seaardmod, 5-3-rchstadel] 
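
 As a sketch of how such clustering functions drive candidate identification, records sharing at least one key end up in the same cluster. The record fields below are hypothetical, and the title key is a deliberately simplified stand-in for the real WordsStatsSuffixPrefixChain:

```python
from collections import defaultdict

def clustering_keys(record):
    # Hypothetical record layout: `pids` is a list of PID strings,
    # `title` a plain string. The title-based key is illustrative only.
    keys = [pid.lower() for pid in record.get("pids", [])]
    words = [w for w in record["title"].lower().split() if len(w) > 3]
    if len(words) >= 2:
        keys.append(f"{len(words)}-{words[0][:3]}{words[1][:3]}")
    return keys

def build_clusters(records):
    # A record may join more than one cluster, one per generated key.
    clusters = defaultdict(list)
    for record in records:
        for key in clustering_keys(record):
            clusters[key].append(record)
    return clusters
```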


 _Decision tree_ 

 For each pair of publications in a cluster the following strategy (depicted in the figure below) is applied. 
 First, the pid lists (in the @pid@ and @alternateid@ elements) are cross-compared: if at least 50% of the pids are in common, the Levenshtein similarity of the titles is computed against a lower threshold (0.9). 
 Otherwise, the comparison checks whether the number of authors and the title version are equal; if so, the Levenshtein similarity of the titles is computed against a higher threshold (0.99). 
 The publications are matched as duplicates if the similarity is higher than the threshold; in every other case they are considered distinct publications. 

 !dedup-results.png! 
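
 The pairwise strategy can be sketched as follows. The record fields, the interpretation of “50% common pids”, and the normalization of the Levenshtein distance into a similarity are assumptions made for illustration:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance, row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def title_similarity(t1, t2):
    # Normalized similarity in [0, 1]; 1.0 means identical titles.
    if not t1 and not t2:
        return 1.0
    return 1 - levenshtein(t1, t2) / max(len(t1), len(t2))

def is_duplicate(rec1, rec2):
    # Hypothetical record layout: `pids` (list), `authors` (list),
    # `title` (str), optional `title_version`.
    pids1, pids2 = set(rec1["pids"]), set(rec2["pids"])
    if pids1 and pids2 and len(pids1 & pids2) >= 0.5 * min(len(pids1), len(pids2)):
        threshold = 0.9   # shared pids: lower title threshold
    elif (len(rec1["authors"]) == len(rec2["authors"])
          and rec1.get("title_version") == rec2.get("title_version")):
        threshold = 0.99  # same author count and title version: stricter
    else:
        return False
    return title_similarity(rec1["title"], rec2["title"]) > threshold
```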

 _Creation of representative record_ 

 TODO 


 *Strategy for datasets* 

 *Strategy for software* 

 *Strategy for other types of research products* 

 *Clustering functions* 

 _NgramPairs_ 
 It produces a list of concatenations of pairs of ngrams generated from different words. 
 Example: 
 Input string: “Search for the Standard Model Higgs Boson” 
 Parameters: ngram length = 3 
 List of ngrams: “sea”, “sta”, “mod”, “hig” 
 Ngram pairs: “seasta”, “stamod”, “modhig” 
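
 A minimal sketch of NgramPairs that reproduces the example above; the stopword list and the cap on the number of ngrams are assumptions inferred from the example, not documented parameters:

```python
def ngram_pairs(title, ngram_len=3, max_ngrams=4,
                stopwords=frozenset({"for", "the", "of", "a", "an", "and", "in"})):
    # Take the first `ngram_len` characters of each significant word,
    # keep at most `max_ngrams` of them, and concatenate consecutive pairs.
    words = [w for w in title.lower().split() if w not in stopwords]
    ngrams = [w[:ngram_len] for w in words][:max_ngrams]
    return [ngrams[i] + ngrams[i + 1] for i in range(len(ngrams) - 1)]

# ngram_pairs("Search for the Standard Model Higgs Boson")
# → ["seasta", "stamod", "modhig"]
```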

 _SuffixPrefix_ 
 It produces ngram pairs in a particular way: it concatenates the suffix of a word with the prefix of the next word in the input string. 
 Example: 
 Input string: “Search for the Standard Model Higgs Boson” 
 Parameters: suffix and prefix length = 3 
 Output list: “ardmod” (suffix of the word “Standard” + prefix of the word “Model”), “rchsta” (suffix of the word “Search” + prefix of the word “Standard”) 
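
 A minimal sketch of SuffixPrefix that reproduces the example above; limiting the input to the first three significant words, and the stopword list, are assumptions inferred from the example (the output is a set, since the example does not fix an ordering):

```python
def suffix_prefix(title, length=3, max_words=3,
                  stopwords=frozenset({"for", "the", "of", "a", "an", "and", "in"})):
    # Concatenate the suffix of each significant word with the prefix
    # of the next one, over the first `max_words` significant words.
    words = [w for w in title.lower().split() if w not in stopwords][:max_words]
    return {words[i][-length:] + words[i + 1][:length]
            for i in range(len(words) - 1)}

# suffix_prefix("Search for the Standard Model Higgs Boson")
# → {"rchsta", "ardmod"}
```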

 

 h3. TODOs 

 * OpenAIRE entity identifier & PID mapping policy (started, to be completed by Claudio and/or Michele DB) 
 * Aggregation business logic by major sources: 
 ** -Unpaywall integration- 
 ** -Crossref integration-  
 ** -ORCID integration- 
 ** -Cross cleaning actions: hostedBy patch- 
 ** Scholexplorer business logic (relationship resolution) 
 ** DataCite 
 ** EuropePMC 
 ** more…. 
 * Deduplication business logic (started, to be completed by Michele DB) 
 ** For research outputs ( -publications- , datasets, software, orp) 
 ** For research organizations  
 * Enrichment 
 ** Mining business logic 
 ** Deduction-based inference  
 ** Propagation business logic 
 * Post-cleaning business logic 
 * FAQ