Abstract
While large language models (LLMs) have shown promising capabilities in biomedical applications, measuring their reliability in knowledge extraction remains a challenge. We developed a benchmark to compare LLMs on 11 literature knowledge extraction tasks that are foundational to automatic knowledgebase development, with and without task-specific examples supplied. We found large variation in performance across the LLMs, depending on the level of technical specialization, the difficulty of the tasks, how scattered the source information was, and the requirements for format and terminology standardization. We also found that asking the LLMs to provide the source text behind their answers helps overcome some key challenges, but that specifying this requirement effectively in the prompt is difficult.