Printed from https://ideas.repec.org/a/gam/jftint/v16y2024i10p365-d1493535.html

Large Language Models Meet Next-Generation Networking Technologies: A Review

Authors
  • Ching-Nam Hang

    (Yam Pak Charitable Foundation School of Computing and Information Sciences, Saint Francis University, Hong Kong, China)

  • Pei-Duo Yu

    (Department of Applied Mathematics, Chung Yuan Christian University, Taoyuan City 320314, Taiwan)

  • Roberto Morabito

    (Communication Systems Department, EURECOM, 06140 Biot, France)

  • Chee-Wei Tan

    (College of Computing and Data Science, Nanyang Technological University, Singapore 639798, Singapore)

Abstract

The evolution of network technologies has significantly transformed global communication, information sharing, and connectivity. Traditional networks, which rely on static configurations and manual intervention, face substantial challenges, including complex management, inefficiency, and susceptibility to human error. The rise of artificial intelligence (AI) has begun to address these issues by automating tasks such as network configuration, traffic optimization, and security enhancement. Despite their potential, integrating AI models into network engineering encounters practical obstacles, including complex configurations, heterogeneous infrastructure, unstructured data, and dynamic environments. Generative AI, particularly large language models (LLMs), represents a promising advancement in AI, with capabilities extending to natural language processing tasks such as translation, summarization, and sentiment analysis. This paper provides a comprehensive review of the transformative role of LLMs in modern network engineering. In particular, it addresses gaps in the existing literature by focusing on LLM applications in network design and planning, implementation, analytics, and management. It also discusses current research efforts, challenges, and future opportunities, serving as a guide for networking professionals and researchers. The main goal is to facilitate the adoption and advancement of AI and LLMs in networking, promoting more efficient, resilient, and intelligent network systems.

Suggested Citation

  • Ching-Nam Hang & Pei-Duo Yu & Roberto Morabito & Chee-Wei Tan, 2024. "Large Language Models Meet Next-Generation Networking Technologies: A Review," Future Internet, MDPI, vol. 16(10), pages 1-29, October.
  • Handle: RePEc:gam:jftint:v:16:y:2024:i:10:p:365-:d:1493535

    Download full text from publisher

    File URL: https://www.mdpi.com/1999-5903/16/10/365/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1999-5903/16/10/365/
    Download Restriction: no

    References listed on IDEAS

    1. Karan Singhal & Shekoofeh Azizi & Tao Tu & S. Sara Mahdavi & Jason Wei & Hyung Won Chung & Nathan Scales & Ajay Tanwani & Heather Cole-Lewis & Stephen Pfohl & Perry Payne & Martin Seneviratne & Paul G, 2023. "Publisher Correction: Large language models encode clinical knowledge," Nature, Nature, vol. 620(7973), pages 19-19, August.
    2. Shijie Wu & Ozan Irsoy & Steven Lu & Vadim Dabravolski & Mark Dredze & Sebastian Gehrmann & Prabhanjan Kambadur & David Rosenberg & Gideon Mann, 2023. "BloombergGPT: A Large Language Model for Finance," Papers 2303.17564, arXiv.org, revised Dec 2023.
    3. Zoltán Szabó & Vilmos Bilicki, 2023. "A New Approach to Web Application Security: Utilizing GPT Language Models for Source Code Inspection," Future Internet, MDPI, vol. 15(10), pages 1-27, September.
    4. Karan Singhal & Shekoofeh Azizi & Tao Tu & S. Sara Mahdavi & Jason Wei & Hyung Won Chung & Nathan Scales & Ajay Tanwani & Heather Cole-Lewis & Stephen Pfohl & Perry Payne & Martin Seneviratne & Paul G, 2023. "Large language models encode clinical knowledge," Nature, Nature, vol. 620(7972), pages 172-180, August.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Zhenjia Chen & Zhenyuan Lin & Ji Yang & Cong Chen & Di Liu & Liuting Shan & Yuanyuan Hu & Tailiang Guo & Huipeng Chen, 2024. "Cross-layer transmission realized by light-emitting memristor for constructing ultra-deep neural network with transfer learning ability," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    2. Yujin Oh & Sangjoon Park & Hwa Kyung Byun & Yeona Cho & Ik Jae Lee & Jin Sung Kim & Jong Chul Ye, 2024. "LLM-driven multimodal target volume contouring in radiation oncology," Nature Communications, Nature, vol. 15(1), pages 1-14, December.
    3. Chen Gao & Xiaochong Lan & Nian Li & Yuan Yuan & Jingtao Ding & Zhilun Zhou & Fengli Xu & Yong Li, 2024. "Large language models empowered agent-based modeling and simulation: a survey and perspectives," Palgrave Communications, Palgrave Macmillan, vol. 11(1), pages 1-24, December.
    4. Juexiao Zhou & Xiaonan He & Liyuan Sun & Jiannan Xu & Xiuying Chen & Yuetan Chu & Longxi Zhou & Xingyu Liao & Bin Zhang & Shawn Afvari & Xin Gao, 2024. "Pre-trained multimodal large language model enhances dermatological diagnosis using SkinGPT-4," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    5. Soroosh Tayebi Arasteh & Tianyu Han & Mahshad Lotfinia & Christiane Kuhl & Jakob Nikolas Kather & Daniel Truhn & Sven Nebelung, 2024. "Large language models streamline automated machine learning for clinical studies," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    6. Lezhi Li & Ting-Yu Chang & Hai Wang, 2023. "Multimodal Gen-AI for Fundamental Investment Research," Papers 2401.06164, arXiv.org.
    7. Hoyoung Lee & Youngsoo Choi & Yuhee Kwon, 2024. "Quantifying Qualitative Insights: Leveraging LLMs to Market Predict," Papers 2411.08404, arXiv.org.
8. Zhaofeng Zhang & Banghao Chen & Shengxin Zhu & Nicolas Langrené, 2024. "Quantformer: from attention to profit with a quantitative transformer trading strategy," Papers 2404.00424, arXiv.org, revised Oct 2024.
    9. Wentao Zhang & Lingxuan Zhao & Haochong Xia & Shuo Sun & Jiaze Sun & Molei Qin & Xinyi Li & Yuqing Zhao & Yilei Zhao & Xinyu Cai & Longtao Zheng & Xinrun Wang & Bo An, 2024. "A Multimodal Foundation Agent for Financial Trading: Tool-Augmented, Diversified, and Generalist," Papers 2402.18485, arXiv.org, revised Jun 2024.
    10. Yinheng Li & Shaofei Wang & Han Ding & Hang Chen, 2023. "Large Language Models in Finance: A Survey," Papers 2311.10723, arXiv.org, revised Jul 2024.
    11. Adria Pop & Jan Sporer & Siegfried Handschuh, 2024. "The Structure of Financial Equity Research Reports -- Identification of the Most Frequently Asked Questions in Financial Analyst Reports to Automate Equity Research Using Llama 3 and GPT-4," Papers 2407.18327, arXiv.org.
    12. Yuqi Nie & Yaxuan Kong & Xiaowen Dong & John M. Mulvey & H. Vincent Poor & Qingsong Wen & Stefan Zohren, 2024. "A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges," Papers 2406.11903, arXiv.org.
    13. Masanori Hirano & Kentaro Imajo, 2024. "Construction of Domain-specified Japanese Large Language Model for Finance through Continual Pre-training," Papers 2404.10555, arXiv.org.
    14. Baptiste Lefort & Eric Benhamou & Jean-Jacques Ohana & David Saltiel & Beatrice Guez, 2024. "Optimizing Performance: How Compact Models Match or Exceed GPT's Classification Capabilities through Fine-Tuning," Papers 2409.11408, arXiv.org.
    15. Zhiyu Cao & Zachary Feinstein, 2024. "Large Language Model in Financial Regulatory Interpretation," Papers 2405.06808, arXiv.org, revised Jul 2024.
    16. Christopher J. Lynch & Erik J. Jensen & Virginia Zamponi & Kevin O’Brien & Erika Frydenlund & Ross Gore, 2023. "A Structured Narrative Prompt for Prompting Narratives from Large Language Models: Sentiment Assessment of ChatGPT-Generated Narratives and Real Tweets," Future Internet, MDPI, vol. 15(12), pages 1-36, November.
    17. Alejandro Lopez-Lira & Yuehua Tang, 2023. "Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models," Papers 2304.07619, arXiv.org, revised Sep 2024.
    18. Hongyang Yang & Xiao-Yang Liu & Christina Dan Wang, 2023. "FinGPT: Open-Source Financial Large Language Models," Papers 2306.06031, arXiv.org.
    19. Thanos Konstantinidis & Giorgos Iacovides & Mingxue Xu & Tony G. Constantinides & Danilo Mandic, 2024. "FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications," Papers 2403.12285, arXiv.org.
    20. Frank Xing, 2024. "Designing Heterogeneous LLM Agents for Financial Sentiment Analysis," Papers 2401.05799, arXiv.org.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jftint:v:16:y:2024:i:10:p:365-:d:1493535. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.