McEval: Massively Multilingual Code Evaluation
Today's paper introduces McEval, a massively multilingual code evaluation benchmark covering 40 programming languages with 16K test samples. It aims to advance research on code LLMs in multilingual scenarios by providing challenging code completion, understanding, and generation tasks, backed by a carefully curated, massively multilingual instruction corpus.