
In-context learning enhanced large language model for robust distribution system state estimation

Authors
  • Li, Yue
  • Cheng, Gang
  • Zhao, Junbo
  • Liu, Yitong

Abstract

Existing data-driven distribution system state estimation (DSSE) methods face significant challenges in capturing useful information from massive zero injections that are prevalent in practical large-scale distribution systems. If zero-injection nodes are not explicitly utilized, limited real-time measurements can lead to a low observability problem. These methods are also vulnerable to bad data under heterogeneous data sources from different measurement units, such as advanced metering infrastructure (AMI), supervisory control and data acquisition (SCADA), and pseudo-measurements. This paper presents a proof-of-concept study of a large language model (LLM)-based DSSE method to address these challenges. The zero injections are transformed into textual content and are extracted through the self-attention mechanism of LLMs. The quantized low-rank adapter (QLoRA) and in-context learning (ICL) are utilized for efficient fine-tuning and quick adaptation, minimizing extensive weight adjustments across varied operational conditions. These strategies not only enhance the model’s scalability but also improve its adaptability and robustness to various situations. In particular, the self-attention mechanism allows the proposed method to deal with bad data effectively. The developed LLM-based method is evaluated against various data-driven approaches and the conventional weighted least squares (WLS) method on a realistic 2135-node Dominion Energy distribution feeder, which contains 60.98% zero-injection nodes. Specifically, incorporating zero-injection information reduces the voltage-magnitude mean absolute error (MAE) by 41.67% (from 0.0012 p.u. to 0.0007 p.u.), and under 10% bad data, the proposed method maintains a low MAE of 0.0049 p.u., compared with 0.0677 p.u. for the WLS method. These simulation results demonstrate the effectiveness and advantages of the proposed method under diverse measurement conditions and topology changes.
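The abstract states that zero-injection nodes are transformed into textual content for the LLM, and reports a 41.67% reduction in voltage-magnitude MAE. The sketch below illustrates one plausible textual encoding and verifies the reported arithmetic; all function and field names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: a plausible way to serialize per-node measurements,
# including zero-injection constraints, into textual content for an LLM
# prompt. Names ('id', 'zero_injection', 'v_mag') are assumptions.

def encode_measurements(nodes):
    """Turn per-node measurement records into one prompt line each.

    A zero-injection node carries an exact P = Q = 0 constraint even
    when no meter is installed, which is the extra information the
    paper feeds to the model.
    """
    lines = []
    for n in nodes:
        if n["zero_injection"]:
            lines.append(f"node {n['id']}: zero injection (P=0, Q=0)")
        elif "v_mag" in n:
            lines.append(f"node {n['id']}: |V| = {n['v_mag']:.4f} p.u.")
        else:
            lines.append(f"node {n['id']}: unmeasured")
    return "\n".join(lines)

# Sanity check of the abstract's headline figure: adding zero-injection
# information cuts the voltage-magnitude MAE from 0.0012 to 0.0007 p.u.
mae_without, mae_with = 0.0012, 0.0007
reduction = (mae_without - mae_with) / mae_without
print(f"MAE reduction: {reduction:.2%}")  # MAE reduction: 41.67%
```

The printed reduction matches the 41.67% figure quoted in the abstract; the encoding itself is only a sketch of the general idea of casting network constraints as text.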

Suggested Citation

  • Li, Yue & Cheng, Gang & Zhao, Junbo & Liu, Yitong, 2026. "In-context learning enhanced large language model for robust distribution system state estimation," Applied Energy, Elsevier, vol. 407(C).
  • Handle: RePEc:eee:appene:v:407:y:2026:i:c:s0306261925020744
    DOI: 10.1016/j.apenergy.2025.127344

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261925020744
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2025.127344?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    More about this item



    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:407:y:2026:i:c:s0306261925020744. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.