Author ORCID Identifier

0000-0002-1241-9015

DOI

10.22191/nejcs/vol5/iss1/7/

Abstract

Assessing the expertise of researchers has garnered increased interest recently. This heightened focus arises from the growing emphasis on interdisciplinary science and the resulting need to form expert teams. When forming these teams, coordinators must assess expertise in fields that are often very different from their own. The conventional reliance on signals of success, prestige, and academic impact can unintentionally perpetuate biases within the assessment process. This traditional approach favors senior researchers and those affiliated with prestigious institutions, potentially overlooking talented individuals from underrepresented backgrounds or institutions. This paper addresses the challenge of determining expertise by proposing a methodology that leverages the relevance of a researcher's recent publication track to the proposed research as a "sensemaking" signal. We introduce a novel α-relevance metric between the trained embedding over the titles and abstracts of a researcher's recent publications and the embedding of a call, and we show that high values of α-relevance indicate expertise in the field of the call. By evaluating the α-relevance threshold, we establish a robust framework for the assessment process. For the evaluation, we use (1) NIH grant-award records and the corresponding researchers' publications obtained from Scopus and (2) a grant-submission dataset from a research university together with the corresponding researchers' publications. Additionally, we investigate the optimal time window for capturing a researcher's expertise from their publication timeline. Considering the temporal relationship between grant awards and publications, we identify the most informative time window reflecting the researcher's relevant contributions.

This data-driven methodology transcends traditional signals of success, promoting a fair evaluation of a researcher's relevance to the proposed research. By leveraging objective indicators, we aim to facilitate the formation of expert teams across disciplines while mitigating biases in the assessment of expertise.
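
The abstract describes the α-relevance metric only at a high level. The following is a minimal, illustrative sketch of one plausible reading: a publication track and a call are embedded with an off-the-shelf sentence-embedding model (sentence-transformers here, not named by the authors), cosine similarity serves as the relevance score, and the score is compared against a threshold α. The model name, threshold value, and function names are assumptions for illustration, not the paper's actual implementation.

# Illustrative sketch only; the paper's embedding model and the exact
# definition of alpha-relevance are not specified in this abstract.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed encoder family

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice

def alpha_relevance(recent_pubs: list[str], call_text: str) -> float:
    """Cosine similarity between the mean embedding of a researcher's
    recent titles+abstracts and the embedding of a call (one plausible
    reading of the metric described above)."""
    pub_vecs = model.encode(recent_pubs)        # shape: (n_pubs, dim)
    researcher_vec = pub_vecs.mean(axis=0)      # aggregate the publication track
    call_vec = model.encode([call_text])[0]
    return float(
        np.dot(researcher_vec, call_vec)
        / (np.linalg.norm(researcher_vec) * np.linalg.norm(call_vec))
    )

# A researcher is flagged as relevant when the score clears a threshold;
# the paper evaluates how to set this threshold, so 0.5 is illustrative.
ALPHA = 0.5
pubs = ["Title and abstract of a recent publication ...", "..."]
call = "Text of the funding call ..."
if alpha_relevance(pubs, call) >= ALPHA:
    print("publication track is relevant to the call")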
