Chatbots

Researchers at the Institute of Electronics and Computer Science (EDI) give countless interviews and contribute to many publications every year. To bring EDI expertise to a wider audience, we are launching a column in which EDI staff share their experience, recommendations and useful information in a popular-science format. In the first interview of this column, Atis Elsts, EDI Senior Researcher, talks about chatbots.

Please introduce yourself!

Dr. sc. comp. Atis Elsts

I’m Atis Elsts, Senior Researcher and Chair of the EDI Scientific Council. I have been at EDI for almost five years now, but this is not my first time working here: I was also here during my PhD studies, more than 10 years ago.

Our topic today is chatbots. Can you tell us what they are and what they do?

Chatbots are a form of generative artificial intelligence (AI), that is, AI which, broadly speaking, can create something. There are different types: there are programs that can create images, sounds, music or text. A chatbot, in response to a user’s question or other input, generates an answer or a continuation of the text. The most popular chatbot at the moment is ChatGPT, which became available to a wider audience in November 2022. In general, these chatbots are based on GPT (Generative Pre-trained Transformer) technology: they generate text, are pre-trained on large amounts of data, and use the transformer neural network architecture.

GPTs themselves are about four years old. I have tried them since about 2020, but at that time they were really just a toy. It was only with ChatGPT that they became practically useful for many tasks.
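The idea of “generating a continuation of the text” can be shown with a toy sketch. The code below is only an illustration, not how ChatGPT works internally (real chatbots use transformer neural networks with billions of parameters trained on huge text corpora), but it demonstrates the same core principle: predict the next word from the words seen so far. The training text and function names are invented for this example.

```python
import random

# A toy "language model": count which word follows which in the training
# text, then sample a continuation word by word. Real chatbots learn far
# richer statistics with neural networks, but the core task is the same:
# predict a plausible next word given the text so far.
training_text = (
    "the cat sat on the mat and the cat saw the dog and "
    "the dog sat on the mat"
)

def build_bigrams(text):
    """Map each word to the list of words observed right after it."""
    words = text.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, start, length, seed=0):
    """Continue the text from `start` by sampling successor words."""
    random.seed(seed)  # fixed seed so the toy output is reproducible
    word = start
    out = [word]
    for _ in range(length):
        options = model.get(word)
        if not options:
            break  # no known continuation for this word
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

model = build_bigrams(training_text)
print(generate(model, "the", 6))
```

Because the model only counts word pairs, its output is grammatical-sounding nonsense; scaling this predict-the-next-word idea up to huge models and data is what makes modern chatbots useful.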

How would you explain to a 10-year-old what a chatbot is?

A chatbot is a helpful assistant that can answer questions, write a poem on a topic you give it, or compose a story. In a way it is very clever, but it is also limited. It is like a fairy-tale genie that can grant people’s wishes: the wish has to be stated extremely precisely in order to get what you want. It is similar with chatbots, because otherwise you can get all sorts of nonsense from them.

Chatbot drawn by AI

What are the strengths and weaknesses of chatbots?

Its strength is as a text editor: it can improve a sloppily written text, express itself beautifully, and answer questions well and professionally. The weakness is that if the chatbot does not know something, it can make things up, hallucinate or, more precisely, confabulate, and this is difficult to distinguish from the truth. These bots are trained not only to tell the truth but also to tell it beautifully and clearly, and these goals sometimes conflict. For fiction, for example, this is not a problem, but with factual matters one has to be very careful.

What could be the threat from a chatbot?

It is very easy for students to use a chatbot in their studies, so essay writing will no longer be a good form of assessment. This affects the whole education system, as well as various professions.

Technology is developing very rapidly and there are very different visions of it.

Can a chatbot replace a human?

No! Artificial intelligence cannot yet do whole jobs, only certain tasks. In many routine office jobs it can make complex tasks easier. Usually a human directs and the chatbot follows, but in some typing-heavy cases chatbots can do up to 90% of the work, and then that particular human may no longer be needed. This is the situation at the moment. In the future, of course, there may be different scenarios.

When chatbots are used, who is the author of the work? And what about ethics?

The question of authorship is easy to answer: if we are talking about scientific publications, the chatbot will not be an author, because it cannot take responsibility for the work.

Ethics, on the other hand, is a harder question. Chatbots can help, for example, good scientists who are not fluent in English, resulting in better publications. On the other hand, writing complex texts and publications is something a scientist should be good at; if this process is handed over to a chatbot, it is unfair to the scientists who do it themselves. It is definitely a balance that needs to be struck.

Institute of Electronics and Computer Science drawn by AI

There have also been concerns that the data used to train the chatbots may have included proprietary material. The developers of the chatbots have denied this.

I myself have used a chatbot, for example in writing reports. In that case there is practically no novelty: it is standard text that is repeated from time to time, so there are no ethical problems. I have also used one in a publication, and the reference to the chatbot appears not in the list of authors but in the acknowledgements.

What about the security of the data we ourselves provide to the chatbots?

We hope, of course, that this information will not be disseminated, but we can never be sure. No company can guarantee that your data will not be leaked by an employee or through an external attack. It is a risk, and it would be inadvisable to submit sensitive information.

Where can people find out more information about chatbots? What would you recommend?

You can ask the chatbot itself. I recommend following technology news on social networks, for example Reddit. In the paid version of ChatGPT you can install various plugins, for example for working with scientific publications; other plugins will help you plan your trips.

If you want a text produced in your writing style, you first need to give the chatbot samples of your text before asking for anything. You need to know how to use these tools correctly, which is why it is worth reading up on how to use chatbots to improve your productivity.
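A concrete sketch of this advice: a so-called “few-shot” prompt simply places samples of your writing before the actual request, so the chatbot can imitate the style. The sample sentences and the helper function below are invented for illustration; any real chatbot interface would simply receive the resulting text as its input.

```python
# Invented sample sentences standing in for the user's own writing.
style_samples = [
    "Sensor nodes sleep most of the time; energy is the scarcest resource.",
    "A protocol is only as good as its worst-case radio duty cycle.",
]

def build_prompt(samples, request):
    """Assemble a prompt: numbered style samples, then the request."""
    parts = ["Here are samples of my writing style:"]
    for i, sample in enumerate(samples, 1):
        parts.append(f"{i}. {sample}")
    parts.append("")  # blank line separating samples from the request
    parts.append(f"In the same style, {request}")
    return "\n".join(parts)

prompt = build_prompt(style_samples, "write one sentence about chatbots.")
print(prompt)
```

The point is only that the samples come first and the request last; the same structure works whether the prompt is pasted into a chat window or sent through an API.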


For further reading*

The most valuable examples of professional uses

Awesome ChatGPT Prompts:

https://github.com/f/awesome-chatgpt-prompts

Twitter thread: Examples of ChatGPT being used in professional work:

https://twitter.com/michael_nielsen/status/1600302211086458880

Opinion articles and scientific papers

Nature editorial: Could AI help you to write your next paper?

https://www.nature.com/articles/d41586-022-03479-w

Stack Exchange: Is it OK to generate parts of a research paper using a large language model such as ChatGPT?

https://academia.stackexchange.com/questions/191197/is-it-ok-to-generate-parts-of-a-research-paper-using-a-large-language-model-such

A critical research paper on the problems of big language models (GPT, etc.):

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

https://dl.acm.org/doi/pdf/10.1145/3442188.3445922

*Based on A. Elsts’ article “Artificial Intelligence Chatbots – a Tool for Scientists?”

Interviewed by Sanda Roze.