Researchers found that students fared better on accounting exams than ChatGPT, OpenAI's chatbot.
Despite this, they said that ChatGPT's performance was "impressive" and that it was a "game changer that will change the way everyone teaches and learns - for the better."
The researchers from Brigham Young University (BYU), US, and 186 other universities wanted to know how OpenAI's technology would fare on accounting exams. They have published their findings in the journal Issues in Accounting Education.
In the researchers' accounting exam, students scored an overall average of 76.7 per cent, compared to ChatGPT's score of 47.4 per cent.
ChatGPT scored higher than the student average on 11.3 per cent of the questions, doing particularly well on accounting information systems (AIS) and auditing, but performed worse on tax, financial, and managerial assessments. The researchers think this may be because ChatGPT struggled with the mathematical processes those areas require.
The AI bot, which uses machine learning to generate natural-language text, also did better on true/false questions (68.7 per cent correct) and multiple-choice questions (59.5 per cent), but struggled with short-answer questions (28.7 to 39.1 per cent).
In general, the researchers said that higher-order questions were harder for ChatGPT to answer. Sometimes ChatGPT provided authoritative written descriptions for incorrect answers, or answered the same question in different ways. It also often provided explanations for its answers even when they were incorrect, and at other times gave an accurate description but then selected the wrong multiple-choice answer.