Author
Listed:
- Wooseok Kim
- Gyunyeop Kim
- Sangwoo Kang
(All authors: School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si 13120, Republic of Korea)
Abstract
Fusion-in-Decoder (FiD), a prominent retrieval-augmented generation model, has demonstrated outstanding performance in open-domain question answering by effectively leveraging multiple passages. However, processing multiple passages significantly increases computational cost in both the encoder and the decoder. In Long-Form Question Answering (LFQA) in particular, the decoder’s cross-attention computation scales with the length of the generated answer, severely limiting overall inference speed. In this paper, we propose a novel dynamic token pruning mechanism that alleviates the computational bottleneck of the FiD decoder. Our method selectively identifies and removes tokens predicted to contribute little to answer generation by jointly considering their contextual information and attention scores within the FiD encoder. The pruned representations are then passed to the decoder, substantially reducing cross-attention computation and thereby accelerating inference. Experiments on two LFQA benchmarks, ASQA and CLAPNQ, show that the proposed method achieves up to a 1.74-fold speed-up with minimal degradation in answer quality, markedly improving computational efficiency over the original FiD model.
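The abstract describes the mechanism at a high level; the sketch below illustrates the general shape of such attention-based token pruning in PyTorch. It is a minimal illustration under stated assumptions, not the authors’ implementation: the function name, the use of a single per-token importance score, and the fixed keep_ratio are all hypothetical. In the paper’s setting, hidden_states would correspond to the concatenated FiD encoder output over all retrieved passages, and token_scores to the importance estimate combining contextual information and encoder attention.

```python
import torch

def prune_encoder_tokens(hidden_states, token_scores, keep_ratio=0.5):
    """Keep only the highest-scoring encoder tokens before decoding.

    hidden_states: (batch, n_tokens, d_model) concatenated encoder output
    token_scores:  (batch, n_tokens) per-token importance estimate
    keep_ratio:    fraction of tokens to retain (hypothetical fixed value)
    """
    n_keep = max(1, int(hidden_states.size(1) * keep_ratio))
    # Indices of the top-scoring tokens, re-sorted to preserve token order.
    top_idx = token_scores.topk(n_keep, dim=1).indices.sort(dim=1).values
    # Gather the surviving token representations for the decoder.
    gather_idx = top_idx.unsqueeze(-1).expand(-1, -1, hidden_states.size(-1))
    return hidden_states.gather(1, gather_idx), top_idx

# Example: retain a quarter of 8 tokens, leaving 2 per batch element.
h = torch.randn(2, 8, 16)
scores = torch.rand(2, 8)
pruned, kept = prune_encoder_tokens(h, scores, keep_ratio=0.25)
print(pruned.shape)  # torch.Size([2, 2, 16])
```

Because the cross-attention cost at each decoding step is proportional to the number of retained encoder tokens, and that cost is paid once per generated token, shrinking the encoder output directly reduces the per-step work throughout long-form answer generation.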
Suggested Citation
Wooseok Kim & Gyunyeop Kim & Sangwoo Kang, 2025.
"Accelerating Inference in Retrieval-Augmented Generation Models for Long-Form Question Answering via Dynamic Token Pruning,"
Mathematics, MDPI, vol. 13(14), pages 1-18, July.
Handle:
RePEc:gam:jmathe:v:13:y:2025:i:14:p:2231-:d:1698089