Xpertbench: Expert Level Tasks with Rubrics-Based Evaluation
arXiv:2604.02368v1 Announce Type: new Abstract: As Large Language Models (LLMs) exhibit plateauing performance on conventional benchmarks, a pivotal challenge persists: evaluating their proficiency in the complex, open-ended tasks that characterize genuine expert-level cognition.
This content is a summary of the original ArXiv AI article. Please see the original site for the full text.
View the original article →