
Optical generative models

Authors
  • Shiqi Chen

    (University of California Los Angeles)

  • Yuhang Li

    (University of California Los Angeles)

  • Yuntian Wang

    (University of California Los Angeles)

  • Hanlong Chen

    (University of California Los Angeles)

  • Aydogan Ozcan

    (University of California Los Angeles)

Abstract

Generative models cover various application areas, including image and video synthesis, natural language processing and molecular design, among many others [1–11]. As digital generative models become larger, scalable inference in a fast and energy-efficient manner becomes a challenge [12–14]. Here we present optical generative models inspired by diffusion models [4], where a shallow and fast digital encoder first maps random noise into phase patterns that serve as optical generative seeds for a desired data distribution; a jointly trained free-space-based reconfigurable decoder all-optically processes these generative seeds to create images, never seen before, that follow the target data distribution. Except for the illumination power and the random seed generation through a shallow encoder, these optical generative models do not consume computing power during the synthesis of the images. We report the optical generation of monochrome and multicolour images of handwritten digits, fashion products, butterflies, human faces and artworks, following the data distributions of the MNIST [15], Fashion-MNIST [16], Butterflies-100 [17] and Celeb-A [18] datasets, and of Van Gogh’s paintings and drawings [19], respectively, achieving an overall performance comparable to digital neural-network-based generative models. To experimentally demonstrate optical generative models, we used visible light to generate images of handwritten digits and fashion products. In addition, we generated Van Gogh-style artworks using both monochrome and multiwavelength illumination. These optical generative models might pave the way for energy-efficient and scalable inference tasks, further exploiting the potential of optics and photonics for artificial-intelligence-generated content.
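
To make the pipeline described in the abstract concrete, the sketch below simulates one pass through such a system in Python/NumPy: a shallow digital encoder turns a noise vector into a phase-only "generative seed", and free-space propagation plus a single reconfigurable phase layer stand in for the all-optical decoder. This is a minimal sketch, not the paper's method: the two-layer encoder, the single decoder layer, the random (untrained) weights, and all wavelengths, pixel sizes and distances are illustrative assumptions; in the reported system the encoder and the diffractive decoder are trained jointly.

import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Free-space propagation of a complex field over distance z (angular-spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                     # spatial frequencies (cycles/m)
    FX, FY = np.meshgrid(fx, fx, indexing="xy")
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)              # transfer function; evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

def shallow_encoder(noise, w1, w2):
    """Hypothetical two-layer encoder: noise vector -> phase-only generative seed."""
    h = np.tanh(noise @ w1)
    phase = (np.tanh(h @ w2) + 1.0) * np.pi          # squash outputs into [0, 2*pi)
    n = int(np.sqrt(phase.size))
    return phase.reshape(n, n)

# Toy forward pass; every size, wavelength and distance below is an illustrative assumption.
rng = np.random.default_rng(0)
n_pix, wavelength, dx, z = 64, 520e-9, 8e-6, 5e-2    # 520 nm light, 8 um pixels, 5 cm gaps
w1 = rng.normal(scale=0.1, size=(128, 256))          # in practice, trained jointly with decoder
w2 = rng.normal(scale=0.1, size=(256, n_pix * n_pix))
decoder_phase = rng.uniform(0, 2 * np.pi, (n_pix, n_pix))   # stand-in for trained decoder weights

seed_phase = shallow_encoder(rng.normal(size=128), w1, w2)  # digital step: noise -> seed
field = np.exp(1j * seed_phase)                      # seed displayed as phase under uniform light
field = angular_spectrum_propagate(field, wavelength, dx, z)
field = field * np.exp(1j * decoder_phase)           # reconfigurable phase-only decoder layer
field = angular_spectrum_propagate(field, wavelength, dx, z)
image = np.abs(field) ** 2                           # detector intensity = generated image

Note that once trained, only the encoder step runs digitally; the two propagation steps and the decoder phase layer correspond to light physically travelling through the system, which is why image synthesis itself consumes essentially no computing power beyond illumination and seed generation.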

Suggested Citation

  • Shiqi Chen & Yuhang Li & Yuntian Wang & Hanlong Chen & Aydogan Ozcan, 2025. "Optical generative models," Nature, Nature, vol. 644(8078), pages 903-911, August.
  • Handle: RePEc:nat:nature:v:644:y:2025:i:8078:d:10.1038_s41586-025-09446-5
    DOI: 10.1038/s41586-025-09446-5

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41586-025-09446-5
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1038/s41586-025-09446-5?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription.

    As access to this document is restricted, you may want to search for a different version of it.
