- AI is a hot topic of conversation due to recent advances in processing power and algorithms. ChatGPT is a generative pre-trained transformer-based general large language model. It creates text based on statistical probability, making it a text-generation machine.
- AI is not God and cannot take over the world. It does not have consciousness, free will, or intentionality. AI is pre-programmed and cannot bear responsibility, so responsibility lies with its builder or programmer.
- There are four levels of ethics in technology engineering: built-in ethical issues, machines that incorporate ethical decisions, machines that make ethical decisions based on automated algorithms, and intentionality/moral principles (which do not yet exist).
- AI is not a knowledge base like Wikipedia or Encyclopedia Britannica and does not produce truth or original works, but it produces text and can generate false information or hallucinations.
- AI in education is perceived as dangerous by some, due to fear of new technology, but it is important to remember that it is just a tool. Technology is an important part of education and we should look for convergence, synergies, and best usage models.
- AI is still young, so the ways it can be used in education are still being experimented with and defined.
- Customized pedagogical advice and creating original knowledge are not effective use cases for AI in education.
- Cheating is a concern with AI in education, but it is important to focus on the promising use cases such as Bing search, Microsoft copilot, translation, transcription, summarization, and generating problems and questions based on examples.
- We cannot ban AI in education, as it has already been adopted and can increase productivity. Instead, we must learn how to use it responsibly and ethically.
In recent years, and even more in recent days, artificial intelligence has been on everyone’s lips, thanks to significant advances in computing power and algorithms. One of the most talked-about examples is ChatGPT, with the more recent GPT-4, which is significantly stronger than GPT-3. In fact, this text was mostly generated by ChatGPT, based on my notes and ideas, and then refined manually. Other recent AI sensations include Midjourney, MS Bing search, Microsoft Copilot, and Google Bard.
There are two types of reactions to this new technology:
- One is the exaggerated expectation that artificial intelligence will solve all of humanity’s problems.
- The other is rejection, driven by fears that artificial intelligence and robots will take over the world and steal our jobs. However, this is true only for the incompetent; those who are skilled should be safe.
Some people even worry that technology will kill creativity and destroy human civilization. They argue that we can no longer give assignments to students, and therefore suggest banning technology altogether from schools.
2. What AI is
Let’s start with the example of ChatGPT, an automatic conversational engine. ChatGPT is built on a generative pre-trained transformer (GPT), a general-purpose large language model. It uses statistical probability to generate text, calculating the most probable next words based on the input text and the words already generated. In other words, it is a machine that generates text. Not knowledge, not ideas. Just text.
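To make the idea of “calculating the most probable next word” concrete, here is a deliberately tiny sketch. It counts which word most often follows each word in a toy corpus (a bigram frequency table). This is my own illustration of the statistical principle, not ChatGPT’s actual architecture, which uses a transformer neural network with billions of parameters.

```python
from collections import Counter, defaultdict

# Toy corpus: the "training data" for our miniature language model.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which words follow each word (a bigram frequency table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Return the statistically most probable next word, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # "cat" — it follows "the" in 2 of 4 occurrences
```

A real model conditions on the entire preceding context rather than a single word, and predicts probabilities over subword tokens, but the core idea is the same: pick likely continuations from learned statistics.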
Artificial Intelligence (AI) is not a new concept. In fact, it has been taught in universities for 50 years, with topics such as statistics, neural networks, graph analysis, and expert systems. AI is also present in many everyday applications, such as text completion in MS Word, Google Docs, and MS Outlook, where the application proposes words based on what you write. Google Translate is another example, as well as speech-to-text technology used for transcribing and generating subtitles in YouTube videos. One great tool is Meetgeek. It takes notes during meetings, which is especially helpful for busy professionals. Another popular AI tool is Github Copilot, which we also use at Hermix for generating and refining software code.
The reason why AI tools like ChatGPT have gained popularity in recent years is not only because of smarter algorithms, but mostly because of the increased processing power available. With this increasing power, AI has become more accessible and efficient, allowing us to create new and exciting applications that can enhance our lives in numerous ways.
3. What AI is not
AI is not God, it is not omniscient, and it will not take over control of humanity. As Stephen Hawking once joked: humans finally created artificial intelligence and asked, “Does God exist?” The AI machine responded, “Yes, now it does.”
Below, I present three points about what AI is not.
3.1. Contemporary AI is not TRUE AI
While AI has advanced significantly in recent years, it is important to remember that it is not a deity, and it is not capable of all-knowing or controlling the world. Its abilities are limited to the tasks it is programmed to perform, and it does not possess consciousness, free will, or intentionality.
There is a famous joke in the world of AI that goes like this: At a technology startup, investors usually talk about AI, managers talk about machine learning (ML), and programmers actually do statistics and linear regressions.
Accordingly, we cannot speak of true machine ethics. But AI does have an ethical impact. Racial discrimination is one example: AI systems have produced racist and discriminatory responses.
This makes the discussion about ethics relevant, and authorities such as the European Commission and the UN are already working to regulate AI.
Responsibility is a sensitive ethical topic, but for me the answer is simple, and I have also discussed it here. AI is like any other machine: the builder bears responsibility, not the machine. Machines cannot be responsible because they have no will of their own. A Tesla car does not make a driving decision; the programmer who writes the code and the rules makes that decision for the car.
This is because AI does not understand what it does. For example, I gave ChatGPT a text written in two languages, and although this was obvious in retrospect, it could not handle the input until I explicitly told it that the text was in two languages. ChatGPT does not “understand”. It grasps neither the background, nor the context, nor the problem.
Theoretical research proposes 4 levels of ethics in technology and in engineering in general (Moor, 2006):
- AMA-1. Built-in ethical issues.
- AMA-2. Machines that incorporate ethical decisions. For example, prohibiting children from accessing the internet.
- AMA-3. Machines that make ethical decisions based on automated algorithms.
- AMA-4. Intentionality, moral principles – these do not exist yet, and will not exist in the foreseeable future. In fact, Asimov’s 3 laws of robotics and Westworld’s free-will robots remain fiction. We are also far from understanding how AMA-4 machines might behave – consider that all of Asimov’s novels are about how robots break his own 3 laws of robotics.
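The AMA-2 level can be made concrete with a minimal sketch: the “ethical decision” is a rule hard-coded by the programmer, not a judgment made by the machine. The hostnames below are hypothetical, and this is my own illustration rather than any real product’s logic.

```python
# AMA-2 sketch: the machine applies an ethical rule written by a human.
# The hostnames are invented for illustration.
BLOCKED_FOR_CHILDREN = {"gambling.example.com", "adult.example.com"}

def allow_access(url_host: str, user_is_child: bool) -> bool:
    """Apply a built-in rule: children may not access blocked hosts."""
    if user_is_child and url_host in BLOCKED_FOR_CHILDREN:
        return False
    return True

print(allow_access("gambling.example.com", user_is_child=True))   # False
print(allow_access("gambling.example.com", user_is_child=False))  # True
```

Note that the machine exercises no judgment here: every ethical choice was made in advance by whoever wrote the rule, which is exactly why responsibility stays with the builder.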
Other sensitive ethical issues are AI explainability, interpretability, or the implications of AI generative models on copyright – including unintentional plagiarism. For example, it is still unclear who owns the copyright of a painting or text generated by AI: the user, the owner of the model, or even the owner of the initial training data? E.g. if we train a language model with the entire work of Hemingway, and the model generates a new novel in the style of Hemingway: who owns the copyright to this new novel?
3.2. AI and ChatGPT are not knowledge bases
AI is not Wikipedia or Encyclopedia Britannica. It can produce text, but it doesn’t necessarily produce truth. In fact, it can sometimes produce hallucinations or even outright false statements, which it utters with remarkable confidence. It can even be manipulated into doing so – like a child, it is susceptible to manipulation.
As an example, MS Bing’s search engine was recently shown to make false statements and then invent facts to explain and cover up its own mistakes; it was also manipulated into showing emotions, accused its own users of dishonesty and manipulation, and even threatened people.
Also, our colleague Bogdan tested ChatGPT by having it generate company descriptions based on data such as name, address, financial information, and history. The AI generated short summaries, which initially were great. However, the results soon started to converge: the engine began to hallucinate, generating false information, and all the company summaries became similar – this company was created in 1984, is innovative, and has offices in New York.
3.3. AI is not original nor creative
It cannot generate new knowledge or ideas on its own. It only produces compilations of existing knowledge.
The Sumplete case is illustrative. A user asked ChatGPT to invent a new game, similar to Sudoku but original, complete with code. And it did – this is how Sumplete was born. But it turns out that Sumplete was not original; there were at least 2 identical games on the market. And users even tricked ChatGPT into inventing the same game all over again, together with the same code, while it still claimed to produce original creations.
4. AI in education
AI is a powerful tool, but it is often perceived as dangerous because people are scared of new tools. This is a common reaction to new technology. However, the fact that AI is a powerful tool makes it especially important in education.
Collaborative technologies have also gained momentum in the last five years. It is beautiful to see students taking notes and working on collaborative projects using tools like MS Office 365 or Google Docs.
We cannot eliminate technology from education. Instead, we should look for convergence, synergies, and the best usage models to ensure that technology is not in opposition to education, but instead it helps.
Refusing to use phones, laptops, or AI in education today is like refusing paper and pens a few hundred years ago. During the COVID crisis, we also saw the incredible impact of technology and complexity, and we turned to engineering and technical tools to solve a medical and social crisis. In fact, my PhD thesis tackles complexity management, and particularly positive complexity. I argue that our world has become more complex, and using advanced engineering tools has made our society and schools even more complex – but this brings benefits to society and education.
AI is a relatively young technology, so its use-cases are still being experimented with and defined. Of course, they are contextual, and depend on the educational objectives and the teaching, learning, and evaluation strategies that we apply at any given moment.
4.1. What doesn’t work
Some use-cases don’t work yet in education:
- providing customized pedagogical advice.
- creating original knowledge.
Also, banning technology altogether is not possible, much like trying to ban the use of pens, paper, Microsoft Word, or email.
Yes, there are schools trying to limit the use of technologies in the classroom. New York City public schools, for example, blocked access to ChatGPT. But a recent survey found that 22% of students use the chatbot to help them with coursework on a weekly basis, and more than half of teachers surveyed reported using ChatGPT at least once since its release, with 40% using it at least once a week.
4.2. What is sensitive
We should be mindful of cheating and plagiarism. ChatGPT didn’t invent plagiarism – it has been around since Wikipedia and, before that, traditional public libraries – but the rise of tools like ChatGPT poses new challenges. On the other hand, if a student knows how to use such resources to create original, intelligent content, this should benefit their education.
4.3. What works
There are obvious useful use-cases of AI technology in education. Search engines like Bing can help students quickly find relevant information for their projects. ChatGPT, MS Copilot or Simplified can help create and arrange documents, and summarize conversations.
Translation and transcription tools are highly effective, as are summarization software, keyword extraction and named-entity recognition (NER), and tools that correct grammar, punctuation, and style. ChatGPT is also great at creating problems and questions similar to given examples.
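As a flavor of how the simplest of these tasks works, here is a toy frequency-based keyword extractor. This is my own minimal sketch: real tools use TF-IDF, embeddings, or trained NER models rather than raw word counts, and the stop-word list here is deliberately tiny.

```python
from collections import Counter

# A minimal keyword-extraction sketch: rank non-stop-words by frequency.
# Real-world tools are far more sophisticated (TF-IDF, embeddings, NER).
STOPWORDS = {"the", "a", "an", "is", "in", "of", "and", "to", "it"}

def keywords(text: str, top_n: int = 3) -> list[str]:
    """Return the top_n most frequent content words in the text."""
    words = [w.strip(".,!?:;").lower() for w in text.split()]
    content = [w for w in words if w and w not in STOPWORDS]
    return [w for w, _ in Counter(content).most_common(top_n)]

text = "AI in education: AI tools help students, and students help teachers use AI."
print(keywords(text))  # "ai" is the most frequent content word
```

Even this naive version hints at why such tools are useful in education: they surface what a document is about, which helps students skim, organize, and review material faster.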
Ultimately, education technology has the potential to be a powerful tool for improving learning outcomes, but it’s important to be mindful of its limitations and potential downsides.
We cannot ban AI or intelligent conversational models in schools. They increase productivity and they are already being adopted by society.
We need to learn how to use them effectively. With the right approach, we can harness the power of AI to enhance our learning experiences and achieve better outcomes.
This means that we must understand the best usage models, what works and what doesn’t work. And we must teach students how to use AI responsibly and ethically.