AI policies
Edelweiss Applied Science and Technology (EAST) acknowledges the growing role of generative AI tools in academic research, writing, and publishing. To ensure ethical conduct, transparency, and scholarly integrity, EAST has established the following policy on the appropriate use of AI-assisted technologies. This policy is informed by global best practices, including those adopted by Elsevier and the Committee on Publication Ethics (COPE).
- AI Tools Cannot Be Listed as Authors
Generative AI tools, such as ChatGPT, Gemini, Claude, or similar technologies, cannot be credited as authors under any circumstance. Authorship requires the ability to make meaningful intellectual contributions, accept accountability for the work, and uphold ethical and academic standards. Since AI tools lack human agency, they are not capable of fulfilling these responsibilities. Therefore, only individuals who meet established authorship criteria may be listed as authors on EAST publications.
- Disclosure of AI Use Is Mandatory
Authors are required to clearly disclose any use of generative AI or AI-assisted technologies during the research and manuscript preparation process. This includes, but is not limited to, the use of AI for writing or editing text, translating content, generating images or figures, analyzing data, or suggesting references. All such uses must be transparently documented in a dedicated section of the manuscript, such as the Acknowledgements or Methods section.
An example of a suitable disclosure is:
“The authors used OpenAI’s ChatGPT to edit and refine the wording of the Introduction. All outputs were reviewed and verified by the authors.”
- Human Oversight and Accountability Are Essential
Although AI tools may assist in certain aspects of manuscript development, final responsibility for the content lies entirely with the human authors. Authors must carefully review, edit, and validate any AI-generated or AI-assisted material to ensure it is factually correct, original, and aligned with ethical research practices. EAST holds authors fully accountable for any errors, misrepresentations, or ethical breaches resulting from the use of AI technologies.
- Misuse of AI Is Strictly Prohibited
EAST maintains a zero-tolerance policy regarding the misuse of AI. The following practices are explicitly prohibited:
- Fabricating or falsifying research data or results using AI tools.
- Generating incorrect, fake, or non-verifiable citations.
- Manipulating images, graphs, or visual data in misleading or unethical ways.
- Using AI to impersonate reviewers, generate fraudulent peer reviews, or compromise the editorial process.
If any form of misuse is discovered—whether before or after publication—the manuscript will be rejected or retracted, and appropriate actions will be taken, including notifying relevant institutions or authorities.
- Reviewer Use of AI Requires Editorial Permission
Peer reviewers are not permitted to use generative AI tools to draft or edit manuscript reviews unless explicit permission is granted by the editor. Where such permission is given, reviewers must ensure that the confidentiality of the manuscript is fully protected. Reviewers are also expected to take full personal responsibility for the content of the review, regardless of any AI assistance. The use of AI must be disclosed to the editor as part of the review submission.
- Additional Guidance and Evolving Best Practices
EAST encourages authors, reviewers, and editors to remain informed about the ethical implications of AI in scholarly publishing. For further guidance, individuals are invited to consult resources such as those published by the Committee on Publication Ethics (COPE) and Elsevier on the use of AI in research and publishing.
This policy will be reviewed and updated regularly to reflect emerging standards, technologies, and ethical considerations in academic publishing.