Author
Listed:
- Rongjian Yang
(College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China)
- Lixin Liu
(College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China)
- Bin Han
(School of Communications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China)
- Feng Hu
(College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China)
Abstract
In this article, we present a novel reinforcement learning-based framework for image steganography in the discrete cosine transform (DCT) domain. First, the input image is divided into several blocks, from which semantic and structural features are extracted to evaluate each block's suitability for data embedding. Second, the Proximal Policy Optimization (PPO) algorithm is introduced into the block selection process to learn adaptive embedding policies, effectively balancing image fidelity and steganographic security. Moreover, a Deep Q-Network (DQN) adaptively adjusts the weights of the peak signal-to-noise ratio, the structural similarity index, and the detection accuracy in the reward formulation. Experimental results on the BOSSBase dataset confirm the superiority of our framework, which achieves both lower detection rates and higher visual quality across a range of embedding payloads, particularly under low-bpp conditions.
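The reward described in the abstract combines PSNR, SSIM, and steganalysis detection accuracy under DQN-adjusted weights. The sketch below is not the authors' implementation; it only illustrates, under assumed names and an assumed normalization, how such a weighted reward could be formed, with the weight vector `w` standing in for the DQN's output.

```python
import numpy as np

def psnr(cover, stego, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between cover and stego images."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def reward(psnr_val, ssim_val, det_acc, w):
    """Weighted reward: high fidelity (PSNR, SSIM) is rewarded, high
    steganalyzer detection accuracy is penalized.
    w = (w_psnr, w_ssim, w_det) is assumed to come from the DQN."""
    psnr_norm = min(psnr_val / 50.0, 1.0)  # assumed normalization to [0, 1]
    return w[0] * psnr_norm + w[1] * ssim_val - w[2] * det_acc

# Toy 8x8 block: embedding flips a single value, as a stand-in for a
# one-coefficient DCT modification.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8))
stego = cover.copy()
stego[0, 0] = (stego[0, 0] + 1) % 256

w = np.array([0.4, 0.4, 0.2])  # example weights a DQN might output
r = reward(psnr(cover, stego), ssim_val=0.99, det_acc=0.5, w=w)
```

A PPO block-selection policy would then be trained to maximize this reward, while the DQN periodically re-balances `w` between fidelity and security terms.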
Suggested Citation
Rongjian Yang & Lixin Liu & Bin Han & Feng Hu, 2025.
"Deep Reinforcement Learning-Based DCT Image Steganography,"
Mathematics, MDPI, vol. 13(19), pages 1-19, October.
Handle:
RePEc:gam:jmathe:v:13:y:2025:i:19:p:3150-:d:1763634