Authors
Listed:
- Ilia Karpov
- Alexander Kirillovich
- Elisaveta Goncharova
- Andrey Parinov
- Alexander Chernyavskiy
- Dmitry Ilvovsky
- Natalia Semenova
- Artyom Sosedka
- Ekaterina Lisitsyna
- Mikhail Belkin
Abstract
Large language models (LLMs) offer significant potential for constructing commonsense knowledge graphs from text, demonstrating adaptability across diverse domains. However, their effectiveness varies considerably with domain-specific language, highlighting a critical need for specialized benchmarks to assess and optimize knowledge graph construction sub-tasks such as named entity recognition, relation extraction, and entity linking. Such domain-specific benchmarks are currently scarce. To address this gap, we introduce SynEL, a novel benchmark for evaluating text-based knowledge extraction methods, validated on customer support dialogues. We present a comprehensive methodology for benchmark construction, propose two distinct approaches for generating synthetic datasets, and evaluate accumulated hallucinations. Our experiments reveal that existing LLMs suffer a marked performance drop, with micro-F1 scores decreasing by up to 25 absolute points, when extracting low-resource entities compared with high-resource entities from sources such as Wikipedia. Furthermore, by incorporating synthetic datasets into the training process, we improved micro-F1 scores by up to 10 absolute points. We publicly release our benchmark and generation code to demonstrate its utility for fine-tuning and evaluating LLMs.
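The abstract reports results as micro-F1. For readers unfamiliar with the metric, the sketch below shows how micro-F1 is typically computed for entity-linking predictions; the triple format and identifiers are illustrative assumptions and are not taken from the SynEL benchmark or its released code.

```python
# Minimal sketch of micro-averaged F1 for entity linking, assuming both gold
# annotations and system predictions are sets of
# (document_id, mention_span, entity_id) triples.
# This format is an assumption for illustration, not the SynEL data schema.

def micro_f1(gold: set, predicted: set) -> float:
    """Micro-F1: pool true positives, false positives and false negatives
    over all documents before computing precision and recall."""
    tp = len(gold & predicted)      # correctly linked mentions
    fp = len(predicted - gold)      # predicted links not in the gold set
    fn = len(gold - predicted)      # gold links the system missed
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: two of three gold links recovered, plus one spurious prediction.
gold = {("doc1", (0, 5), "Q42"), ("doc1", (10, 15), "Q1"), ("doc2", (3, 9), "Q7")}
pred = {("doc1", (0, 5), "Q42"), ("doc2", (3, 9), "Q7"), ("doc2", (20, 25), "Q99")}
print(f"micro-F1 = {micro_f1(gold, pred):.3f}")  # 0.667
```

Micro-averaging pools counts across all documents, so frequent entities dominate the score; this is why the drop on low-resource entities reported in the abstract is measured against the same metric computed on high-resource entities.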
Suggested Citation
Ilia Karpov & Alexander Kirillovich & Elisaveta Goncharova & Andrey Parinov & Alexander Chernyavskiy & Dmitry Ilvovsky & Natalia Semenova & Artyom Sosedka & Ekaterina Lisitsyna & Mikhail Belkin, 2026.
"SynEL: A synthetic benchmark for entity linking,"
PLOS ONE, Public Library of Science, vol. 21(1), pages 1-18, January.
Handle: RePEc:plo:pone00:0339468
DOI: 10.1371/journal.pone.0339468