2025.02.11

Announcement: World Conference on Research Integrity (WCRI 2026)

Great news for everyone interested in research integrity! The World Conference on Research Integrity (WCRI 2026) will take place on May 3–6, 2026, at The Westin Bayshore in Vancouver, Canada. We are pleased to announce that the official conference website is now available. 

The main conference themes are: 

  • Indigenous perspectives – the importance of cultural diversity and ethical principles in research. 
  • Artificial intelligence in research – ensuring ethical AI applications in scientific studies. 
  • Research security – safeguarding the reliability of data, information, and research integrity. 

Organizations interested in becoming conference sponsors or exhibition participants are invited to contribute to one of the most significant research events and enhance their visibility in the international arena. More details can be found on the official conference website: https://wcri2026.org/. 

Mark this date in your calendar and join the global community shaping the future of research integrity. 

Challenges in Research Publication Transparency: Another Wave of Mass Retractions

Retraction Watch reports that the academic publisher Sage has announced the retraction of 416 articles from the Journal of Intelligent and Fuzzy Systems (JIFS). This marks the second large-scale retraction from this journal, following the removal of over 450 publications in August 2023. The situation once again highlights the challenges in scholarly publishing, particularly opaque peer review practices and unethical publication processes. 

According to the publisher, the retracted articles exhibited citation and referencing inconsistencies, incoherent or unrelated content, and unverified authors and reviewers. Sage acknowledged that these issues were not detected in time during the editorial and peer review process. Such cases underscore the need to enhance transparency in academic publishing, strengthen independent peer review systems, and combat so-called “paper mills” – entities that mass-produce low-quality or fabricated research papers. 

Automated detection tools, such as the Problematic Paper Screener (PPS), play a key role in identifying questionable publications. Drawing on the Retraction Watch database, the screener flags papers that cite previously retracted research. PPS creator Guillaume Cabanac has noted that this new influx of data will further improve “Feet of Clay”, a tool designed to detect academic misconduct. To date, “Feet of Clay” has identified 716 problematic JIFS publications. 
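The core idea behind such screening – cross-checking a paper’s reference list against a database of known retractions – can be sketched in a few lines of Python. This is a simplified illustration only, not the actual PPS or “Feet of Clay” implementation; the function name and DOIs below are hypothetical:

```python
def find_retracted_citations(references, retracted_dois):
    """Return the cited DOIs that appear in a set of known-retracted DOIs.

    references: iterable of DOI strings cited by the paper under review
    retracted_dois: set of DOI strings from a retraction database
    """
    # Normalize case and whitespace so formatting differences don't hide matches.
    normalized = {doi.strip().lower() for doi in retracted_dois}
    return [doi for doi in references if doi.strip().lower() in normalized]


# Hypothetical example data (not real DOIs):
retracted = {"10.1000/retracted.001", "10.1000/retracted.002"}
paper_refs = ["10.1000/ok.123", "10.1000/RETRACTED.001"]

flags = find_retracted_citations(paper_refs, retracted)
# A paper whose reference list yields any flags would be queued for manual review.
```

A production screener would of course work at scale over full-text and metadata corpora; the point here is only the matching principle.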

Sage acknowledged that concerns about the journal were known even before it acquired the publishing rights from IOS Press, the former publisher of JIFS. However, Sage now emphasizes its ongoing efforts to ensure that all its journals meet high academic publishing standards and comply with the Committee on Publication Ethics (COPE) guidelines. 

This case once again reinforces the urgent need for stricter oversight of research publications, the promotion of responsible academic practices, and the implementation of ethical research standards. Technological solutions, such as AI-driven publication screening tools, are becoming increasingly important in ensuring the transparency, credibility, and ethical integrity of scientific research. 

The Office of the Ombudsperson for Academic Ethics and Procedures of the Republic of Lithuania highlights that transparent and responsible academic publishing is a fundamental factor in maintaining public trust in science and its impact on society. 

Source: Sage journal retracts another 400 papers – Retraction Watch 

Translated by Gabrielė Dambrauskaitė

2025.01.31

Academic Ethics and Artificial Intelligence: Discussions at the Office’s Seminars

In January 2025, the Office’s staff conducted five seminars for the communities of the Institute of the Lithuanian Language, the Lithuanian University of Health Sciences, and Vytautas Magnus University. In total, more than 300 participants attended the seminars. The sessions covered topics such as plagiarism, the use of artificial intelligence in research and studies, authorship in publications, and the objective assessment of group work.

Seminar participants raised a wide range of questions:

  • When assessing students, what matters more – supervision or appealing to their personal responsibility for learning?
  • How should a lecturer act if they suspect that a student has written a paper using artificial intelligence tools but lack evidence to prove it?
  • What learning and assessment methods are appropriate, and do current assignments develop the skills they should, given that artificial intelligence is now used for almost everything?
  • What is self-plagiarism, and how should one correctly cite their own work?
  • Is it necessary to cite common phrases, and why?
  • What percentage of similarity is acceptable in a doctoral dissertation?
  • Why is dividing a research study into multiple parts considered a violation when journals limit publication length and the study itself involves a large amount of data?
  • How should an author respond upon discovering their work used without proper citation in another person’s publication?
  • What determines the order of authors in a publication, given that some arrange authors alphabetically while others do so based on contribution?

Issues of academic ethics are becoming increasingly relevant, especially in the era of artificial intelligence. The Office will continue striving to strengthen the academic community’s understanding of ethical practices in studies and research by organizing seminars and discussions.

2025.01.28

New Report on Fostering Trust in the Digital Age

A new report, “Fostering Trust in the Digital Age”, was published on January 10, 2025, and is available on the Zenodo platform.

The report features 27 scholarly papers prepared during the TrustOn2024 Workshop and a session at the Science Summit at the 79th United Nations General Assembly. The publication examines the challenges of disinformation and explores various strategies for strengthening trust in science and digital ecosystems.

This report provides valuable insights into contemporary issues and offers diverse solutions to address relevant topics. The collected findings are intended to inspire concrete actions and promote international collaboration.

More information can be found in the related blog post.

We encourage you to share this report with your communities and contribute to fostering trust in the digital age!

2025.01.17

Trust in Science: Findings from the 2024 “Edelman Trust Barometer”

The Office presents the latest findings of the 2024 “Edelman Trust Barometer”, which reveal trends in public trust in science and scientists.

According to the survey data, scientists maintain the highest level of public trust, with 74% of respondents expressing trust in them. Notably, the same level of trust is attributed to the category “people like me,” highlighting that people particularly trust those with whom they feel a connection or similarity – a factor that plays an important role in shaping opinions about scientific achievements.

Trust in science forms the foundation of an informed society, where decisions are based on facts and scientific research. However, the “Edelman Trust Barometer” also reveals growing challenges, such as the spread of disinformation and skepticism toward new technologies, including artificial intelligence (AI).

Another important aspect related to trust in science is the impact of disinformation on the health sector. During a discussion organized by Dr. Yogi Hale Hendlin, titled “Disinformation and Trust in the Health Sector: What Are the Impacts and Ethical Challenges?”, three key areas undermining trust in science were emphasized:

  • The anti-vax movement, which diminishes the importance of vaccines and fosters skepticism;
  • Misinformation about the causes and treatments of diseases;
  • The rise of populism, which weakens trust in scientists and health sector experts.

The discussion also highlighted that social media and AI technologies significantly contribute to the speed of information dissemination and the magnitude of disinformation spread.

Based on the 2024 “Edelman Trust Barometer” results, it can be concluded that scientists remain among the most trusted figures in society. However, growing skepticism and disinformation underscore the need to further strengthen academic ethics, transparency, and science communication. This provides an opportunity not only to better inform the public but also to build a sustainable, fact-based partnership between science and society.

You can find more information here.

2024.12.27

Artificial Intelligence: Opportunities for Addressing Moral Questions

Recent studies reveal that artificial intelligence (AI), particularly large language models (LLMs) such as OpenAI’s ChatGPT, demonstrates moral reasoning capabilities comparable to those of specialists who provide ethical or moral decision-making guidance. This advancement opens new possibilities while raising critical questions about how AI can be integrated into decision-making processes in academic and ethical contexts.

Research analyzing AI-generated moral recommendations and comparing them with the advice of renowned ethicists like Kwame Anthony Appiah has shown that models like GPT-4 can provide insights perceived as thoughtful, reliable, and precise. In some cases, AI even surpasses the recommendations offered by professionals in the field of moral consulting, sparking discussions about AI’s role in shaping ethical decisions, which are often grounded in moral principles.

One of the key advantages of AI lies in its ability to process vast amounts of ethical information and adapt to diverse moral traditions. For instance, LLMs can emulate the reasoning styles of various thinkers, offering insights aligned with different ethical frameworks. This versatility positions AI as a valuable tool for academic institutions grappling with complex ethical challenges.

However, the development of this technology also presents certain challenges. Experts caution against over-reliance on AI, particularly due to potential biases in training data and limited alignment with non-Western cultural perspectives. Additionally, the persuasive nature of AI-generated moral advice carries a risk of manipulation, especially when addressing sensitive or controversial topics.

Researchers are also exploring new capabilities of LLMs, including their capacity for deception and their ability to simulate a “theory of mind.” These capabilities highlight the need for continuous monitoring to ensure responsible and transparent use of AI tools.

The academic community stands at a crossroads: AI could become a powerful ally in moral reasoning, yet it also introduces new challenges. Successfully integrating these technologies requires an interdisciplinary approach that combines insights from ethics, psychology, and AI research to uphold principles of justice, transparency, and accountability.

Moving forward, AI tools have the potential to transform the way ethical questions are addressed not only in academia but also beyond. Whether as advisors, educators, or even critics, these technologies could deepen our understanding of moral decision-making processes.

Source: AI Chatbots Seem as Ethical as a New York Times Advice Columnist | Scientific American

2024.12.19

We warmly greet everyone as the most wonderful time of the year approaches

May this festive season bring peace, joy, and inspiration for new ideas. Let 2025 be a year of discovery, sincere collaboration, and meaningful achievements!

We invite you to participate in a survey on academic freedom, ethical culture, and experiences

The Office of the Ombudsperson for Academic Ethics and Procedures is launching an important initiative: a survey for the academic community of Lithuanian higher education and research institutions. This survey seeks to better understand the current state of academic freedom and ethical culture, as well as to assess various experiences related to these issues.

We kindly invite you to contribute to this initiative by completing the online survey. Your valuable feedback will help shape policies and strategies aimed at fostering an ethical and free academic environment.

Survey link: https://forms.office.com/e/wCTqfSajtw
Survey deadline: January 25, 2025

Let’s work together to create an ethical and free academic environment!

2024.12.10

The Ombudsperson Discussed Opportunities to Strengthen Academic Ethics with the Chair of the Committee on Education and Science

Today, the Ombudsperson for Academic Ethics and Procedures of the Republic of Lithuania, Dr. Reda Cimmperman, participated in a meeting with the Chair of the Committee on Education and Science of the Seimas, Vaida Aleknavičienė. The discussion focused on ensuring academic ethics, fostering collaboration, and implementing strategic initiatives.

Presenting the activities of the Office of the Ombudsperson for Academic Ethics and Procedures, Dr. Cimmperman highlighted the key priorities, ongoing initiatives, and the Office’s contribution to promoting transparency and integrity in education and research. The meeting also included discussions on the draft Government Program of the Republic of Lithuania, emphasizing its importance for enhancing the academic community’s activities.

Chair Vaida Aleknavičienė expressed her support for initiatives aimed at strengthening ethics, emphasizing the importance of transparency in research and the practical application of research outcomes. She underscored that transparency and integrity are essential for the development of Lithuania’s education and science sectors.

During the meeting, the guidelines on the ethical use of artificial intelligence in education and research processes, prepared by the Office of the Ombudsperson for Academic Ethics and Procedures, were introduced. These guidelines aim to ensure responsible applications of artificial intelligence, contributing to innovation and progress within Lithuania’s science system.

The issues discussed today underscored a shared commitment to fostering an ethical, transparent, and responsible academic community in Lithuania. The ideas expressed by the Ombudsperson and the Chair of the Committee on Education and Science, along with the collaborative approaches discussed, open new opportunities to advance the highest ethical standards and strengthen trust in the country’s science system.

Photo from the Office archive

2024.12.05

Ombudsperson Dr. Reda Cimmperman Appointed as Lithuania’s Representative to the ETINED Platform: A Significant Step Toward Strengthening Ethics and Transparency in the Academic Community

Dr. Reda Cimmperman, the Chief Ombudsperson for Academic Ethics and Procedures of the Republic of Lithuania, has been appointed by the Ministry of Education, Science, and Sport of the Republic of Lithuania as the country’s representative to the Council of Europe’s Platform on Ethics, Transparency, and Integrity in Education (ETINED). She has been entrusted with representing Lithuania’s interests, fostering leadership in this field, and promoting the highest ethical standards at both the national and European levels.

“The appointment to represent Lithuania on the ETINED platform is a significant responsibility and honor. Education is the cornerstone of a just and democratic society, and its transparency is a fundamental condition. Lithuania is firmly committed to collaborating with European partners to develop solutions that foster transparency, trust, and integrity within the academic community,” stated Dr. Cimmperman.

This appointment underscores Lithuania’s strong commitment to promoting transparency and integrity in the education and research system. Representing Lithuania on the ETINED platform, Dr. Cimmperman will actively contribute to the organization’s initiatives and, alongside other ETINED members, will strive to:

  • Develop mechanisms ensuring the reliability of qualifications and safeguarding student rights,
  • Implement measures to combat dishonest education service providers and ensure institutional accountability,
  • Promote artificial intelligence solutions that effectively combat fraud while adhering to ethical principles,
  • Organize seminars and integrate academic transparency into educational programs to enhance awareness among students and educators.

For the first time in her new role, Dr. Cimmperman participated in the 8th ETINED Annual Meeting held on November 26–27, 2024, at Guglielmo Marconi University in Rome, Italy. The meeting addressed the major challenges faced by European countries in 2024, shared best practices, and outlined goals for the coming year in the field of ethics.

During the meeting, discussions highlighted the financial, psychological, and systemic impacts of fraud on students, educators, and institutions. Participants explored strategies for ensuring transparency and building trust. The complexities of balancing quality assurance with the growing demand for international academic programs and the risks posed by unregulated or unaccredited educational institutions were also discussed. Furthermore, research findings were presented, revealing limited student understanding of academic fraud and emphasizing the importance of training to address this issue.

Established in October 2015, ETINED’s mission is to share information and best practices, promote the application of ethical behavior principles at all levels of education, and foster an ethical culture. The platform seeks to ensure that quality education is grounded in the principles of democracy, participation, and ethics. Corruption prevention is approached not only through legal mechanisms but also by strengthening moral values. ETINED encourages representatives to commit to positive ethical principles, recognizing that only collective efforts can create a sustainable and fair academic environment.

For more information, visit the official ETINED website: www.coe.int.