During the winter, the text robot ChatGPT has gained growing attention. It is a text tool that can generate ready-made texts and answer questions based on the user's wishes.

Where do we draw the line for cheating?

The text robot is effective, but whether using it in your studies crosses the line into cheating depends on how you use it. This is the view of Felix Dobslaw, a computer scientist and researcher at MIUN who also works on applying AI in different contexts.

"It's cheating if you haven't done the job yourself, but it becomes a grey area if you use it in the background. What you should remember is that chatGPT does not help with references and in addition, it can come with pure fabrications, says Felix Dobslaw.

How easy is it to prove cheating with AI?

The university has not noticed any trend of increased cheating so far, but it has begun to discuss how it should approach the text tool. Arne Wahlström, General Counsel at MIUN, says that there are detectors that can give indications that an AI robot has written a text, but that it is difficult to prove.

"Therefore, most people seem to think that it is examination forms that need to be adapted rather than suspending students because of cheating. We have not seen any influx of such cases in the disciplinary board, but suspicions have occurred in some cases, says Arne Wahlström, General Counsel at MIUN.