GenAI: responsible use in research

What is it? 

Generative Artificial Intelligence (GenAI) is a subfield of Artificial Intelligence (AI) that focuses on creating content (e.g. text, images, sounds, 3D models, code, …).

GenAI enables a wide range of applications, many of which impact scientific research. Among other things, it can help organize your thoughts, provide feedback on your work, assist with coding, summarize research literature, and improve or translate existing text.

The large language models (LLMs) behind GenAI use advanced learning algorithms (neural networks) to absorb large amounts of information and data scraped from the web. In this way, the model learns how language looks and works. It can then generate a human-like text response to a user's question or 'prompt', similar in style to the text on which it was trained.

With the release of ChatGPT in November 2022, followed by various similar models, GenAI has reached a mass audience and now has millions of users.

GenAI tools have become part of many professionals' workflows, including in research, and will remain so throughout our careers. It is therefore essential to establish guidance to ensure that these models are used responsibly.

Scope of this tip

Because these models develop so rapidly, definitive guidance is not a given. Moreover, the applications, both in terms of the models themselves (ChatGPT, DALL-E, Midjourney, Bing, BARD, Perplexity, …) and of how they are used (across disciplines, for different goals, and therefore with different outcomes), are too varied to capture every dynamic. This requires our institution and all of its researchers to pay constant attention to new developments.

This research tip focuses only on the impact of GenAI within (aspects of) research. It is also an initial form of guidance (dated January 2024), which limits its scope: it by no means answers all the questions researchers face. New insights and the impact of further developments will guide subsequent versions, so please check back regularly.

For an overview of what ChatGPT can and cannot do, including the impact on education, see Education Tip ChatGPT: a Generative AI System with an Impact on Ghent University Education.

General principles

Researchers, independent of research field or discipline, career phase, or status, should:

  • explore the many possibilities AND limitations of GenAI tools;
  • remain critical of each tool, both in terms of its technical possibilities and its ethical implications;
  • continue to broaden and deepen their knowledge of how these models/tools work;
  • learn how to use them properly; and
  • evaluate and monitor the quality of the results generated.


To do so, all actions should be guided by the following principles:

  • Transparency: researchers are open about their use of GenAI, regardless of how the output is used (e.g. directly, or as a source of inspiration). They share the details, just as scholars are expected to do with other software, tools and methodologies.
  • Accountability: researchers are solely responsible for any use of GenAI and the (quality of the) output generated, and can be held to this responsibility.
  • Rigor: researchers take necessary precautions to use GenAI tools correctly and check the results generated. This implies a good understanding of the (technical) capabilities of the tool and the (ethical) implications of its use.
  • Integrity: GenAI is not used to infringe on research integrity in any way. All precautions are taken to avoid inadvertent infringements.

Be careful!

Do not enter personal data, privacy-sensitive information, confidential information, or data covered by any kind of contract into GenAI tools (for example when writing a review, submitting research proposals, working on new research ideas, or using GenAI tools to translate or rewrite text, …).

By entering such data into the system, it becomes part of the publicly available knowledge consulted by AI and automated systems around the world. This may lead to violations of laws (GDPR, IP, …), contracts (NDA, …), and/or moral rights, with serious consequences (e.g. no longer being able to protect your IP, company fines, …). Always check whether you have permission or a license to enter the data. If in doubt, check with the provider of the information or with supporting services within the institution.

For more information on AI and the GDPR, read the tip: GDPR: what should I take into account when developing or using AI.



Last modified Jan. 30, 2024, 12:17 p.m.