
Token-Mol 1.0: tokenized drug design with large language models

Authors

Listed:
  • Jike Wang (Zhejiang University)
  • Rui Qin (Zhejiang University)
  • Mingyang Wang (Zhejiang University)
  • Meijing Fang (Zhejiang University)
  • Yangyang Zhang (Zhejiang University)
  • Yuchen Zhu (Zhejiang University)
  • Qun Su (Zhejiang University)
  • Qiaolin Gou (Zhejiang University)
  • Chao Shen (Zhejiang University)
  • Odin Zhang (University of Washington)
  • Zhenxing Wu (Zhejiang University)
  • Dejun Jiang (Zhejiang University)
  • Xujun Zhang (Zhejiang University)
  • Huifeng Zhao (Zhejiang University)
  • Jingxuan Ge (Zhejiang University)
  • Zhourui Wu (Tongji University)
  • Yu Kang (Zhejiang University)
  • Chang-Yu Hsieh (Zhejiang University)
  • Tingjun Hou (Zhejiang University)

Abstract

The integration of large language models (LLMs) into drug design is gaining momentum; however, existing approaches often struggle to effectively incorporate three-dimensional molecular structures. Here, we present Token-Mol, a token-only 3D drug design model that encodes both 2D and 3D structural information, along with molecular properties, into discrete tokens. Built on a transformer decoder and trained with causal masking, Token-Mol introduces a Gaussian cross-entropy loss function tailored for regression tasks, enabling superior performance across multiple downstream applications. The model surpasses existing methods, improving molecular conformation generation by over 10% and 20% across two datasets, while outperforming token-only models by 30% in property prediction. In pocket-based molecular generation, it enhances drug-likeness and synthetic accessibility by approximately 11% and 14%, respectively. Notably, Token-Mol operates 35 times faster than expert diffusion models. In real-world validation, it improves success rates and, when combined with reinforcement learning, further optimizes affinity and drug-likeness, advancing AI-driven drug discovery.

Suggested Citation

  • Jike Wang & Rui Qin & Mingyang Wang & Meijing Fang & Yangyang Zhang & Yuchen Zhu & Qun Su & Qiaolin Gou & Chao Shen & Odin Zhang & Zhenxing Wu & Dejun Jiang & Xujun Zhang & Huifeng Zhao & Jingxuan Ge & Zhourui Wu & Yu Kang & Chang-Yu Hsieh & Tingjun Hou, 2025. "Token-Mol 1.0: tokenized drug design with large language models," Nature Communications, Nature, vol. 16(1), pages 1-19, December.
  • Handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-59628-y
    DOI: 10.1038/s41467-025-59628-y

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41467-025-59628-y
    File Function: Abstract
    Download Restriction: no

    File URL: https://libkey.io/10.1038/s41467-025-59628-y?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a page where you can use your library subscription to access this item

    References listed on IDEAS

    1. Noelia Ferruz & Steffen Schmidt & Birte Höcker, 2022. "ProtGPT2 is a deep unsupervised language model for protein design," Nature Communications, Nature, vol. 13(1), pages 1-10, December.
    2. Han Li & Ruotian Zhang & Yaosen Min & Dacheng Ma & Dan Zhao & Jianyang Zeng, 2023. "A knowledge-guided pre-training framework for improving molecular representation learning," Nature Communications, Nature, vol. 14(1), pages 1-13, December.
    3. Kehan Wu & Yingce Xia & Pan Deng & Renhe Liu & Yuan Zhang & Han Guo & Yumeng Cui & Qizhi Pei & Lijun Wu & Shufang Xie & Si Chen & Xi Lu & Song Hu & Jinzhi Wu & Chi-Kin Chan & Shawn Chen & Liangliang Z, 2024. "TamGen: drug design with target-aware molecule generation through a chemical language model," Nature Communications, Nature, vol. 15(1), pages 1-12, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Palistha Shrestha & Jeevan Kandel & Hilal Tayara & Kil To Chong, 2024. "Post-translational modification prediction via prompt-based fine-tuning of a GPT-2 model," Nature Communications, Nature, vol. 15(1), pages 1-13, December.
    2. Veda Sheersh Boorla & Costas D. Maranas, 2025. "CatPred: a comprehensive framework for deep learning in vitro enzyme kinetic parameters," Nature Communications, Nature, vol. 16(1), pages 1-17, December.
    3. Kevin E. Wu & Kevin K. Yang & Rianne Berg & Sarah Alamdari & James Y. Zou & Alex X. Lu & Ava P. Amini, 2024. "Protein structure generation via folding diffusion," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    4. Gustavo Arango-Argoty & Elly Kipkogei & Ross Stewart & Gerald J. Sun & Arijit Patra & Ioannis Kagiampakis & Etai Jacob, 2025. "Pretrained transformers applied to clinical studies improve predictions of treatment efficacy and associated biomarkers," Nature Communications, Nature, vol. 16(1), pages 1-18, December.
    5. Amir Pandi & David Adam & Amir Zare & Van Tuan Trinh & Stefan L. Schaefer & Marie Burt & Björn Klabunde & Elizaveta Bobkova & Manish Kushwaha & Yeganeh Foroughijabbari & Peter Braun & Christoph Spahn, 2023. "Cell-free biosynthesis combined with deep learning accelerates de novo-development of antimicrobial peptides," Nature Communications, Nature, vol. 14(1), pages 1-14, December.
    6. Yue Wan & Jialu Wu & Tingjun Hou & Chang-Yu Hsieh & Xiaowei Jia, 2025. "Multi-channel learning for integrating structural hierarchies into context-dependent molecular representation," Nature Communications, Nature, vol. 16(1), pages 1-13, December.
    7. Timothy Atkinson & Thomas D. Barrett & Scott Cameron & Bora Guloglu & Matthew Greenig & Charlie B. Tan & Louis Robinson & Alex Graves & Liviu Copoiu & Alexandre Laterre, 2025. "Protein sequence modelling with Bayesian flow networks," Nature Communications, Nature, vol. 16(1), pages 1-14, December.
    8. Yang, Ying & Zhang, Wei & Lin, Hongyi & Liu, Yang & Qu, Xiaobo, 2024. "Applying masked language model for transport mode choice behavior prediction," Transportation Research Part A: Policy and Practice, Elsevier, vol. 184(C).
    9. Wenwu Zeng & Yutao Dou & Liangrui Pan & Liwen Xu & Shaoliang Peng, 2024. "Improving prediction performance of general protein language model by domain-adaptive pretraining on DNA-binding protein," Nature Communications, Nature, vol. 15(1), pages 1-18, December.
    10. Sijie Chen & Tong Lin & Ruchira Basu & Jeremy Ritchey & Shen Wang & Yichuan Luo & Xingcan Li & Dehua Pei & Levent Burak Kara & Xiaolin Cheng, 2024. "Design of target specific peptide inhibitors using generative deep learning and molecular dynamics simulations," Nature Communications, Nature, vol. 15(1), pages 1-20, December.
    11. Sophia Vincoff & Shrey Goel & Kseniia Kholina & Rishab Pulugurta & Pranay Vure & Pranam Chatterjee, 2025. "FusOn-pLM: a fusion oncoprotein-specific language model via adjusted rate masking," Nature Communications, Nature, vol. 16(1), pages 1-11, December.
    12. Xiaochu Tong & Ning Qu & Xiangtai Kong & Shengkun Ni & Jingyi Zhou & Kun Wang & Lehan Zhang & Yiming Wen & Jiangshan Shi & Sulin Zhang & Xutong Li & Mingyue Zheng, 2024. "Deep representation learning of chemical-induced transcriptional profile for phenotype-based drug discovery," Nature Communications, Nature, vol. 15(1), pages 1-14, December.
    13. Adibvafa Fallahpour & Vincent Gureghian & Guillaume J. Filion & Ariel B. Lindner & Amir Pandi, 2025. "CodonTransformer: a multispecies codon optimizer using context-aware neural networks," Nature Communications, Nature, vol. 16(1), pages 1-12, December.
    14. David Ding & Ada Y. Shaw & Sam Sinai & Nathan Rollins & Noam Prywes & David F. Savage & Michael T. Laub & Debora S. Marks, 2024. "Protein design using structure-based residue preferences," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    15. Jack Gallifant & Amelia Fiske & Yulia A Levites Strekalova & Juan S Osorio-Valencia & Rachael Parke & Rogers Mwavu & Nicole Martinez & Judy Wawira Gichoya & Marzyeh Ghassemi & Dina Demner-Fushman & Li, 2024. "Peer review of GPT-4 technical report and systems card," PLOS Digital Health, Public Library of Science, vol. 3(1), pages 1-15, January.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-59628-y. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.