ChatGPT’s responses are entertaining because we know we’re not communicating with a human being. But communicating with a human being is exactly what you need to encourage the most complex learning.
Human interaction is an essential part of good teaching. You can’t get that from something that is not, in itself, human – something that cannot form relationships but can only mimic what it thinks good communication and good relationships sound like.
Even when it comes to providing right answers, chatbots have an extremely high error rate. People extolling these AIs’ virtues are overlooking how often they get things wrong.
Anyone who has used Siri or Alexa knows that – sometimes they reply to your questions with non sequiturs or a bunch of random words that don’t even make sense.
ChatGPT is no different.
As more people used it, ChatGPT’s answers became so erratic that Stack Overflow – a Q&A platform for coders and programmers – temporarily banned users from sharing information from ChatGPT, noting that it’s “substantially harmful to the site and to users who are asking or looking for correct answers.”
The answers it provides are not thought-out responses. They are approximations – good approximations – of what it calculates a correct answer would look like if the question had been put to a human being.
The chatbot is operating “without a contextual understanding of the language,” said Lian Jye Su, a research director at market research firm ABI Research.
“It is very easy for the model to give plausible-sounding but incorrect or nonsensical answers,” she said. “It guessed when it was supposed to clarify and sometimes responded to harmful instructions or exhibited biased behavior. It also lacks regional and country-specific understanding.”
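To see what “plausible-sounding but incorrect” means in practice, consider a toy sketch – nothing like ChatGPT’s real architecture, and every word count in it is invented – of the core move a language model makes: choosing the next word by statistical likelihood, not by truth.

```python
# Toy next-word predictor (illustrative only -- not ChatGPT's actual design).
import random

# Hypothetical counts of which word followed "The capital of Australia is"
# in some imagined training text. Sydney is more famous, so it shows up
# more often -- even though the correct answer is Canberra.
next_word_counts = {"Sydney": 70, "Canberra": 25, "Melbourne": 5}

def sample_next_word(counts):
    """Pick a continuation in proportion to how often it was seen."""
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The capital of Australia is"
print(prompt, sample_next_word(next_word_counts))
# Most runs print "Sydney": fluent, confident and wrong. At no point does
# the procedure ask "is this true?" -- only "is this likely text?"
```

Real models are vastly more sophisticated, but the basic move is the same, which is exactly why fluency and accuracy come apart.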
Eager for any headline that didn’t center on his disastrous takeover of Twitter, Elon Musk endorsed the new AI even though he left OpenAI, the company behind it, in 2018 after disagreements over its direction.
However, AI and even chatbots have been used in some classrooms successfully.
Professor Ashok Goel secretly used a chatbot called Jill Watson as a teaching assistant for online courses at the Georgia Institute of Technology. The AI answered routine questions from students, while professors concentrated on more complicated issues. At the end of the course, when Goel revealed that Jill Watson was a chatbot, many students expressed surprise and said they had thought she was a real person.
This appears to be the primary use of a chatbot in education.
“Students have a lot of the same questions over and over again. They’re looking for the answers to easy administrative questions, and they have similar questions regarding their subjects each year. Chatbots help to get rid of some of the noise. Students are able to get to answers as quickly as possible and move on,” said Erik Bøylestad Nilsen from BI Norwegian Business School.
However, even in such instances, chatbots are still expensive to install, run and maintain, and (as with most EdTech) they almost always collect student data that is often sold to businesses.
And my favorite character in 2001: A Space Odyssey is the computer HAL 9000.
In the future (now past) of the movie, HAL is paradoxically the most human personality. Tasked with running the day-to-day operations of a spaceship, HAL becomes strained to the breaking point when he’s given a command to lie about the mission’s true objectives. He ends up having a psychotic break and killing most of the people he was supposed to protect.
It’s heartbreaking, at the end, when Dave Bowman slowly turns off the higher functions of HAL’s brain and the supercomputer regresses in intelligence while singing “A Bicycle Built for Two” – one of the first things he was programmed to do.
I’m gonna be honest here – I cry like a baby at that point.
But once I clean up my face and blow my nose, I realize this is science fiction – emphasis on the fiction.
I am well aware that today’s calendar reads 2020, yet our efforts at artificial intelligence are not nearly as advanced as HAL and may never be.
That hasn’t stopped supposedly serious publications like Education Week – “The American Education News Site of Record” – from continuously pretending HAL is right around the corner and ready to take over my classroom.
What’s worse, this isn’t fear mongering – a warning to beware the coming robo-apocalypse. It’s an invitation!
The article was truly one of the dumbest things I’ve read in a long time.
Bushweller, an assistant managing editor at Education Week and Executive Editor at both the Ed Tech Leader and Ed Week’s Market Brief, seems to think it is inevitable that robots will replace classroom teachers.
These are kids without all the advantages of wealth and class, kids with fewer books in the home and fewer native English speakers as role models, kids suffering from food, housing and healthcare insecurity, kids navigating the immigration system and fearing they or someone they love could be deported, kids faced with institutional racism, kids who’ve lost parents, friends and family to the for-profit prison industry and the inequitable justice system.
So “chronically low-performing” teachers would be those who can’t overcome all these obstacles for their students by just teaching more good.
I can’t imagine why such educators can’t get the same results as their colleagues who teach richer, whiter kids without all these issues. It’s almost like teachers can’t do it all themselves – and the solution? Robots.
But I’m getting ahead of myself.
Bushweller suggests we fire all the human beings who work in the most impoverished and segregated schools and replace them… with an army of robots.
But the future envisioned by technophiles like Bushweller has NO such people in it – only robots ensuring the school-to-prison pipeline remains intact for generations to come.
Bushweller writes: “It makes sense that teachers might think that machines would be even worse than bad human educators. And just the idea of a human teacher being replaced by a robot is likely too much for many of us, and especially educators, to believe at this point.”
The solution, he says, isn’t to resist being replaced but to actually help train our mechanistic successors:
“…educators should not be putting their heads in the sand and hoping they never get replaced by an AI-powered robot. They need to play a big role in the development of these technologies so that whatever is produced is ethical and unbiased, improves student learning, and helps teachers spend more time inspiring students, building strong relationships with them, and focusing on the priorities that matter most. If designed with educator input, these technologies could free up teachers to do what they do best: inspire students to learn and coach them along the way.”
Forgive me if I am not sufficiently grateful for that privilege.
Maybe I should be relieved that he at least admits robots may not be able to replace EVERYTHING teachers do. At least, not yet. In the meantime, he expects robots could become co-teachers or effective tools in the classroom to improve student learning by taking over administrative tasks, grading, and classroom management.
And this is the kind of nonsense teachers often get from administrators who’ve fallen under the spell of the Next Big Thing – iPads, software packages, data management systems, etc.
Bushweller cites a plethora of examples of how robots are used in other parts of the world to improve learning, and they are of just this type – gimmicky and shallow.
It reminds me of IBM’s Watson computing system, which in 2011 famously beat Ken Jennings and Brad Rutter – two of the best players in the show’s history – at the game show Jeopardy!
What is overhyped bullcrap, Alex?
Now that Watson has been applied to the medical field to diagnose cancer patients, doctors are seeing that the emperor has no clothes. Its diagnoses have been dangerous and incorrect – for instance, recommending a medication that can cause increased bleeding for a hypothetical patient who was already suffering from severe bleeding.
Do we really want to apply the same kind of artificial intelligence to children’s learning?
AIs will never be able to replace human beings. They can only displace us.
What I mean by that is this: we can put an AI system in the same position as a human being, but it will never be of the same high quality.
David Watson (no relation to IBM’s supercomputer) of the Oxford Internet Institute and the Alan Turing Institute writes that AIs do not think in the same way humans do – if what they do can even accurately be described as thinking at all.
These are algorithms, not minds. They are sets of rules, not contemplations.
An algorithm of a smile would specify which muscles to move and when. But it wouldn’t be anything a live human being would mistake for an authentic expression of a person’s emotion. At best it would be a parabola, at worst a rictus.
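To make that concrete, here is a deliberately crude sketch – the muscle names are real anatomy, but every action and timing is invented for illustration – of what such an algorithm of a smile might look like:

```python
# A crude "algorithm of a smile" (all timings hypothetical).
import time

SMILE_STEPS = [
    # (muscle, action, duration in seconds)
    ("zygomaticus major", "contract to pull the mouth corners up", 0.3),
    ("orbicularis oculi", "tighten to crinkle the eyes", 0.2),
    ("levator labii", "raise the upper lip slightly", 0.1),
]

def perform_smile():
    """Execute the smile schedule step by step, the way a robot would."""
    for muscle, action, duration in SMILE_STEPS:
        print(f"{muscle}: {action} ({duration}s)")
        time.sleep(duration)

perform_smile()
# Every step fires on cue, yet nothing here represents joy, let alone feels it.
```

The schedule runs flawlessly every time, and that is precisely the problem: it is a procedure, not a feeling.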
Watson points to several ways deep neural networks (DNNs) differ from human minds. 1) DNNs are easily fooled. While both humans and AIs can recognize things like a picture of an apple, computers are much more easily led astray (see the sketch after this list). Computers are more likely to confuse parts of the background with the foreground, for instance, while human beings naturally comprehend the difference. As a result, humans are less distracted by background noise.
2) DNNs need much more information to learn than human beings. People need relatively few examples of a concept like “apple” to be able to recognize one. DNNs need thousands of examples to do the same thing. Human toddlers learn far more easily than the most advanced AI.
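The “easily fooled” point refers to what researchers call adversarial examples. Below is a minimal sketch of one standard demonstration, the fast gradient sign method (FGSM); the `model` is assumed to be any trained PyTorch image classifier, so this is illustrative rather than tied to any specific system discussed above.

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the model's error. A human sees no difference; the model often
# changes its answer completely.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, eps=0.03):
    """Return an adversarially perturbed copy of `image`.

    `model` is any trained classifier, `image` a batch of pixels in [0, 1],
    `label` the true class. `eps` caps how far each pixel may move
    (about 3% of its range here) -- imperceptible to people.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage sketch (model, apple_image, apple_label are assumed to exist):
# adv = fgsm_perturb(model, apple_image, apple_label)
# model(adv).argmax()  # frequently no longer the "apple" class
```

A human looking at the perturbed picture still sees an apple; the network frequently does not. That asymmetry is the gap Watson is pointing at.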
“It would be a mistake to say that these algorithms recreate human intelligence,” Watson says. “Instead, they introduce some new mode of inference that outperforms us in some ways and falls short in others.”
Obviously the technology may improve and change, but it seems more likely that AIs will always be different. In fact, that’s kind of what we want from them – to outperform human minds in some ways.
However, the gap between humanity and AI should never be glossed over.
I think that’s what technophiles like Bushweller are doing when they suggest robots could adequately replace teachers. Robots will never do that. They can only be tools.
For instance, only the loneliest people have frequent, long conversations with Siri or Alexa. After all, we know there is no one else really there. These voice services are just a trick – an illusion of another person. We turn to them for information but not friendship.
The same goes for teachers. Most of the time, we WANT to be taught by a real human being. If we fear judgment, we may prefer to look up discrete facts on a device. But if we want guidance, encouragement, direction or feedback, we need a person. AIs can imitate such things but never as well as the real thing.
So we can displace teachers with these subpar imitations. But once the novelty wears off – and it does – we’re left with a lower quality instructor and a subpar education.
As a society, we must commit ourselves to a renewed ethic of humanity. We must value people more than things.
And that includes a commitment never to forgo human teachers as guides for the most precious things in our lives – our children.
As Watson puts it: “Algorithms are not ‘just like us’… by anthropomorphizing a statistical model, we implicitly grant it a degree of agency that not only overstates its true abilities, but robs us of our own autonomy… It is always humans who choose whether or not to abdicate this authority, to empower some piece of technology to intervene on our behalf. It would be a mistake to presume that this transfer of authority involves a simultaneous absolution of responsibility. It does not.”