Universities Must Act, Not Just Chat, About ChatGPT
ChatGPT was launched on Nov 30, 2022. It is part of a broader set of artificial intelligence (AI) technologies developed by the San Francisco-based start-up OpenAI. Other AI systems already in existence, such as Microsoft Bing AI, Chatsonic, Jasper Chat, and Google Bard AI, belong to the same new generation of artificial intelligence technologies, technically called “large language models” or LLMs.
My understanding of AI is minimal, but I have been following the range of reactions to it from educators and students around the world.
Some educators worry that ChatGPT encourages plagiarism and other forms of academic dishonesty. They see this technology as an assault on critical thinking. Others welcome it as a positive learning tool that could enhance human curiosity and expand critical minds.
In my opinion, it is too early to tell how damaging or advantageous ChatGPT will be to teaching and learning. However, in general, one has to be very careful about technology and its potential negative impact on human (social) interaction.
A good example is texting, which emerged as a primary method of human communication between 2000 and 2007. Almost two decades later, we are now seeing its deleterious effects on society.
Empirical research by social psychologists has revealed the detrimental effects of texting technology on human social interaction. Even though people today are more connected to one another through texting, they have become lonelier and more distant from one another in their unplugged lives. Texting has put a strain on our personal relationships because there is less intimacy in our interactions. The following is a simple explanation.
Texting mentally transports you to cyberspace, where you can detach yourself from intimacy. If you hate what you read, you simply scroll down to the next post, or block the person whose words provoke you. You could choose to respond in equally provocative text, then block them using your smartphone app.
This way you get the satisfaction of “hitting back”, while freeing yourself of any responsibility for hurting or infuriating the other party. You just type, send, hurt (or infuriate) and move on.
Intimacy disappears. It has no role in the “texting mode” of social interaction. Body language, eye contact, and the human touch are absent. These natural and humane “checks and balances”, which are present in non-texting communication, disappear in texting environments.
Since the early 2000s, universities in Malaysia, together with the higher education ministry and other relevant agencies, have produced guidelines on the proper use of the internet, the sending of official documents and grades via email, and the use of texting apps such as WhatsApp, Telegram and iMessage, as well as the relevant legal provisions in the event that rules are breached. During the lockdown months of the recent Covid-19 pandemic, more guidelines were issued.
We managed quite well.
However, it is now April 2023 and the higher education ministry has yet to come up with any guidelines on the use of AI technology in universities. The minister, Khaled Nordin, announced on March 17 that he and his ministry “are working on guidelines”. Speed is of the essence, especially when it comes to technology, so we need these guidelines ASAP.
The matter is even more urgent in the Malaysian context because our academic culture is no stranger to dishonesty. Plagiarism and other forms of academic misconduct are rampant in our universities. We need guidelines to regulate the use of ChatGPT by lecturers and students, pronto.
Since ChatGPT’s release in November 2022, many universities around the world have already responded with their respective guidelines. Perhaps Malaysian authorities could learn from such efficiency.
In February 2023, Yale University’s Poorvu Center for Teaching and Learning created its guidelines, launching them on Feb 14 via a panel discussion.
Its detailed website carries elaborate instructions on adapting current teaching methods to accommodate AI technology, including very clear instructions for lecturers.
One paragraph reads:
“Instructors should be direct and transparent about what tools students are permitted to use, and about the reasons for any restrictions. Yale College already requires that instructors publish policies about academic integrity, and this practice is common in many graduate and professional school courses. If you expect students to avoid the use of AI chatbots when producing their work, add this to your policy”.
It is clear that Yale University allows a degree of freedom in the use of AI technology in its classes. Furthermore, the consistent emphasis is on “academic integrity” and honesty.
La Trobe University in Australia has guidelines displayed on its website, informing students how to reference chatbots.
For example, it poses the following question as a guide to students: “How do I reference something created by generative AI?”
The answer provided in the guidelines is as follows:
“This is interim advice only and subject to change. We recommend that you base the reference for generative AI content on the referencing guidance for personal communication – if you are using generative AI to assist with your assignment. Generative AI tools like ChatGPT cannot accurately cite their own sources. Any references they provide may be false or non-existent – you should always check the original source for any references that are generated. References should provide clear and accurate information for each source and should identify where they have been used in your work.”
As for acknowledging AI, which could play a key role in helping students arrive at the final product, the guidelines have this to say:
“Where you have used generative AI to assist you with your assignment, you usually should acknowledge this. An acknowledgement might look something like this: ‘Whilst the writing is my own and I take responsibility for all errors, ChatGPT was used to create the initial section structure for this essay’”.
As with Yale University, La Trobe University also emphasises academic honesty.
In February, the Chinese University of Hong Kong (CUHK) produced its new guidelines on the use of AI technology in teaching and learning. Students can be expelled if they are found to have used AI tools, including ChatGPT, improperly or without authorisation in their work.
Unlike at Yale and La Trobe, CUHK students are required to get permission before using these tools. The website also clarified that the university is improving its existing plagiarism software to detect whether students have used AI tools in their work.
Our universities in Malaysia must get on board quickly. The higher education ministry should speed up and produce guidelines. We may not need to ban AI technology, but we must be smart about drawing up clear regulations. A swift issuance of guidelines would indicate, in part, the ministry’s seriousness in curbing academic dishonesty and plagiarism. More foot-dragging could prove the opposite. - FMT
The views expressed are those of the writer and do not necessarily reflect those of MMKtT.
This article is only a cached copy of the author’s original URL, which may be too old or may have been removed:
http://malaysiansmustknowthetruth.blogspot.com/2023/04/universities-must-act-not-just-chat.html