Authors
Listed:
- Alexander Amigud
(Faculty of Management, International Business University, Toronto, ON M5S 2V1, Canada)
- David J. Pell
(Faculty of Arts and Social Sciences (FASS), The Open University, Milton Keynes MK7 6AA, UK)
Abstract
The emergence of generative AI has created a major dilemma: as higher education institutions prepare students for the workforce, the development of digital skills must become a normative aim, while academic integrity and credibility must simultaneously be preserved. The challenge they face is not simply a matter of using AI responsibly but of reconciling two opposing duties: (A) preparing students for the future of work, and (B) maintaining the traditional role of developing personal academic skills, such as critical thinking, the ability to acquire knowledge, and the capacity to produce original work. Higher education institutions must balance these objectives while addressing financial considerations, creating value for students and employers, and meeting accreditation requirements. Against this background, this multiple-case study of fifty universities across eight countries examined institutional responses to generative AI. The content analysis revealed apparent confusion and a lack of established best practices, as proposed actions varied widely, from complete bans on generated content to the development of custom AI assistants for students and faculty. Often, the onus fell on individual faculty to exercise discretion over the use of AI, suggesting inconsistent application of academic policy. We conclude that time and innovation will be required to resolve the apparent confusion of higher education institutions in responding to this challenge, and we suggest some possible approaches. Our results, however, suggest that their top concern at present is the potential for irresponsible use of AI by students to cheat on assessments.
We therefore recommend that, in the short term, and likely in the long term, the credibility of awards be urgently safeguarded, and we argue that this could be achieved by ensuring that at least some human-proctored assessments are integrated into courses, e.g., in the form of real-location examinations and viva voces.
Suggested Citation
Alexander Amigud & David J. Pell, 2025.
"Responsible and Ethical Use of AI in Education: Are We Forcing a Square Peg into a Round Hole?,"
World, MDPI, vol. 6(2), pages 1-18, June.
Handle:
RePEc:gam:jworld:v:6:y:2025:i:2:p:81-:d:1671331
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jworld:v:6:y:2025:i:2:p:81-:d:1671331. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.