
ELM-Bench: A Multidimensional Methodological Framework for Large Language Model Evaluation in Electricity Markets

Author

Listed:
  • Hang Fan

    (School of Economics and Management, North China Electric Power University, Beijing 100000, China)

  • Shijie Ji

    (Beijing Power Exchange Center Co., Ltd., Beijing 100000, China)

  • Peng Yuan

    (State Grid LiaoNing Electric Power Supply Co., Ltd., Electric Power Research Institute, Shenyang 110000, China)

  • Qingsong Zhao

    (State Grid LiaoNing Electric Power Supply Co., Ltd., Electric Power Research Institute, Shenyang 110000, China)

  • Shuaikang Wang

    (School of Economics and Management, North China Electric Power University, Beijing 100000, China)

  • Xiaowei Tan

    (School of Economics and Management, North China Electric Power University, Beijing 100000, China)

  • Yunjie Duan

    (School of Economics and Management, North China Electric Power University, Beijing 100000, China)

Abstract

Large language models (LLMs) have significant potential for application in electricity markets, but existing domain-specific evaluation methods for LLMs suffer from three shortcomings: a narrow focus on single tasks, limited dataset coverage, and a lack of depth. To address this, this article proposes ELM-Bench, a framework for evaluating LLMs in the Chinese electricity market that assesses models along three dimensions (understanding, generation, and safety) through seven tasks (such as common-sense Q&A and terminology explanation) comprising 2841 samples. In parallel, a specialized domain model, QwenGOLD, was fine-tuned from a general-purpose LLM. The evaluation results show that top-tier general models perform well on general tasks thanks to high-quality pre-training, while QwenGOLD performs better on domain-specific tasks such as prediction and decision-making, verifying the effectiveness of domain fine-tuning. The study also finds that fine-tuning brings limited improvement to an LLM's basic abilities, yet QwenGOLD's score on professional prediction tasks is second only to DeepSeek-V3, indicating that some general LLMs can handle domain data well without specialized training. These findings can inform model selection across scenarios, balancing performance against training cost.
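The abstract describes scoring models across seven tasks grouped into three evaluation dimensions. A minimal sketch of how such per-dimension aggregation could work is shown below; the task names, dimension assignments, and scores are illustrative assumptions, not taken from the paper, and the actual ELM-Bench scoring protocol may differ.

```python
# Hypothetical sketch of ELM-Bench-style aggregation: task-level scores are
# grouped into the three dimensions named in the abstract (understanding,
# generation, safety) and averaged per dimension. All task names and score
# values here are illustrative placeholders.
from collections import defaultdict

# (task, dimension, score) triples for one evaluated model
TASK_RESULTS = [
    ("common_sense_qa",       "understanding", 0.82),
    ("terminology_explain",   "understanding", 0.76),
    ("market_report_gen",     "generation",    0.71),
    ("price_prediction",      "generation",    0.64),
    ("decision_support",      "generation",    0.68),
    ("harmful_query_refusal", "safety",        0.93),
    ("factuality_check",      "safety",        0.88),
]

def dimension_scores(results):
    """Average the task-level scores within each evaluation dimension."""
    buckets = defaultdict(list)
    for _task, dim, score in results:
        buckets[dim].append(score)
    return {dim: sum(scores) / len(scores) for dim, scores in buckets.items()}

scores = dimension_scores(TASK_RESULTS)
```

Comparing such per-dimension averages across models would surface the pattern the abstract reports: a fine-tuned domain model leading on domain tasks while general models lead on general ones.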

Suggested Citation

  • Hang Fan & Shijie Ji & Peng Yuan & Qingsong Zhao & Shuaikang Wang & Xiaowei Tan & Yunjie Duan, 2025. "ELM-Bench: A Multidimensional Methodological Framework for Large Language Model Evaluation in Electricity Markets," Energies, MDPI, vol. 18(15), pages 1-23, July.
  • Handle: RePEc:gam:jeners:v:18:y:2025:i:15:p:3982-:d:1710128

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/18/15/3982/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/18/15/3982/
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Mohammed-Khalil Ghali & Cecil Pang & Oscar Molina & Carlos Gershenson-Garcia & Daehan Won, 2025. "Forecasting Commodity Price Shocks Using Temporal and Semantic Fusion of Prices Signals and Agentic Generative AI Extracted Economic News," Papers 2508.06497, arXiv.org.
    2. Ching-Nam Hang & Pei-Duo Yu & Roberto Morabito & Chee-Wei Tan, 2024. "Large Language Models Meet Next-Generation Networking Technologies: A Review," Future Internet, MDPI, vol. 16(10), pages 1-29, October.
    3. Xia Li & Hanghang Zheng & Xiao Chen & Hong Liu & Mao Mao, 2025. "Class-Imbalanced-Aware Adaptive Dataset Distillation for Scalable Pretrained Model on Credit Scoring," Papers 2501.10677, arXiv.org, revised Jan 2025.
    4. Lezhi Li & Ting-Yu Chang & Hai Wang, 2023. "Multimodal Gen-AI for Fundamental Investment Research," Papers 2401.06164, arXiv.org.
    5. Thanos Konstantinidis & Giorgos Iacovides & Mingxue Xu & Tony G. Constantinides & Danilo Mandic, 2024. "FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications," Papers 2403.12285, arXiv.org.
    6. Yijia Xiao & Edward Sun & Tong Chen & Fang Wu & Di Luo & Wei Wang, 2025. "Trading-R1: Financial Trading with LLM Reasoning via Reinforcement Learning," Papers 2509.11420, arXiv.org.
    7. Meyer, Julian Anton, 2025. "Success factors and development areas for the implementation of Generative AI in companies," Junior Management Science (JUMS), Junior Management Science e. V., vol. 10(1), pages 1-23.
    8. Frank Xing, 2024. "Designing Heterogeneous LLM Agents for Financial Sentiment Analysis," Papers 2401.05799, arXiv.org.
    9. Ankur Sinha & Chaitanya Agarwal & Pekka Malo, 2025. "FinBloom: Knowledge Grounding Large Language Model with Real-time Financial Data," Papers 2502.18471, arXiv.org.
    10. Hoyoung Lee & Youngsoo Choi & Yuhee Kwon, 2024. "Quantifying Qualitative Insights: Leveraging LLMs to Market Predict," Papers 2411.08404, arXiv.org.
    11. Seppälä, Timo & Mucha, Tomasz & Mattila, Juri, 2023. "Beyond AI, Blockchain Systems, and Digital Platforms: Digitalization Unlocks Mass Hyper-Personalization and Mass Servitization," ETLA Working Papers 106, The Research Institute of the Finnish Economy.
12. Zhaofeng Zhang & Banghao Chen & Shengxin Zhu & Nicolas Langrené, 2024. "Quantformer: from attention to profit with a quantitative transformer trading strategy," Papers 2404.00424, arXiv.org, revised Aug 2025.
    13. Shengkun Wang & Taoran Ji & Linhan Wang & Yanshen Sun & Shang-Ching Liu & Amit Kumar & Chang-Tien Lu, 2024. "StockTime: A Time Series Specialized Large Language Model Architecture for Stock Price Prediction," Papers 2409.08281, arXiv.org.
    14. Zhang, Liang & Chen, Zhelun, 2025. "Opportunities of applying Large Language Models in building energy sector," Renewable and Sustainable Energy Reviews, Elsevier, vol. 214(C).
    15. Sharique Hasan & Alexander Oettl & Sampsa Samila, 2025. "From Model Design to Organizational Design: Complexity Redistribution and Trade-Offs in Generative AI," Papers 2506.22440, arXiv.org.
    16. Issa Sugiura & Takashi Ishida & Taro Makino & Chieko Tazuke & Takanori Nakagawa & Kosuke Nakago & David Ha, 2025. "EDINET-Bench: Evaluating LLMs on Complex Financial Tasks using Japanese Financial Statements," Papers 2506.08762, arXiv.org.
    17. Tianyu Zhou & Pinqiao Wang & Yilin Wu & Hongyang Yang, 2024. "FinRobot: AI Agent for Equity Research and Valuation with Large Language Models," Papers 2411.08804, arXiv.org.
    18. Wentao Zhang & Lingxuan Zhao & Haochong Xia & Shuo Sun & Jiaze Sun & Molei Qin & Xinyi Li & Yuqing Zhao & Yilei Zhao & Xinyu Cai & Longtao Zheng & Xinrun Wang & Bo An, 2024. "A Multimodal Foundation Agent for Financial Trading: Tool-Augmented, Diversified, and Generalist," Papers 2402.18485, arXiv.org, revised Jun 2024.
    19. Yinheng Li & Shaofei Wang & Han Ding & Hang Chen, 2023. "Large Language Models in Finance: A Survey," Papers 2311.10723, arXiv.org, revised Jul 2024.
    20. Lars Hornuf & David J. Streich & Niklas Töllich, 2025. "Making GenAI Smarter: Evidence from a Portfolio Allocation Experiment," CESifo Working Paper Series 11862, CESifo.



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.