Abstract
Objective. To synthesise current editorial policies governing the use of generative artificial intelligence (AI) models in scholarly publishing and to identify unresolved issues requiring further guidance and evidence.

Methods. A descriptive review was conducted of peer-reviewed publications (2023–2025) and openly accessible policies issued by publishers and journals.

Results. In the absence of a unified international standard, major global and Russian publishers have issued role-specific guidance for authors, reviewers, and editors on interacting with generative AI (e.g., GPT-class models). Areas of emerging consensus include: AI systems are not recognised as authors; accountability for content resides exclusively with human contributors; and the use and role of AI must be transparently disclosed. Notable heterogeneity persists in the boundaries of permitted practice, ranging from non-binding "fair-use" recommendations to formal checklists and mandatory disclosure fields embedded in editorial management systems. Guidance is most developed for authors and editors, whereas rules for reviewers are comparatively sparse. Disciplinary variation is evident in both the permissiveness and the specificity of recommended practices.

Research gaps. There is no industry-wide consensus on acceptable uses of generative AI in research reporting or editorial workflows. Empirical evidence remains limited regarding the impact of generative AI on manuscript quality, on the integrity and efficiency of peer review, and on reader perception. Standards for provenance tracking and durable recording of AI-generated content are under-specified, and documented retractions explicitly involving AI-generated manuscripts are rare.

Conclusions. While norms around authorship, responsibility, and disclosure are converging, their operationalisation across journals and disciplines remains inconsistent. Coordinated standard-setting and rigorous empirical studies are needed to evaluate risks and benefits and to support evidence-based policy.
Suggested Citation
V. A. Vasileva, 2026.
"Between the Scylla of Prohibition and the Charybdis of Permissiveness: Journal Editorial Strategies in the Age of Generative AI Models,"
Administrative Consulting, Russian Presidential Academy of National Economy and Public Administration, North-West Institute of Management, issue 6.
Handle:
RePEc:acf:journl:y:2026:id:2876