
AI and You, Part 3: How to use ChatGPT wisely

Image by Gerd Altmann from Pixabay

> Read Part 1: Artificial Intelligence and You

> Read Part 2: How AI and ChatGPT Work

RBC | Last week I described the inner workings of ChatGPT. It is a neural network trained to recognize the most probable next word in a sequence. Given a sequence of words, it figures out what word is most likely to come next. From that simple basis it can already accomplish all kinds of marvelous things, and it is evolving rapidly. ChatGPT-3.5 provided noticeable improvements over 3.0, and ChatGPT-4.0 has leaped much farther ahead. Version 3.5 can produce polished documents, engage in interesting conversation, and write professional computer code. Version 4.0 demonstrates a capacity for reasoning that rivals aspects of human intelligence. Everything I say needs an asterisk: some of what I’ve written has probably been out of date since yesterday. In this article I will examine what ChatGPT can and cannot do now, and I will offer some suggestions on how to use it wisely. Many of those suggestions relate to school classrooms and home schools, since education has felt the greatest impact so far, but they apply more generally. 

First, what it cannot do (yet). For starters, it is not a search engine like Google or Bing. ChatGPT trained purely on strings of text from all across the internet. Ask it “what is the shape of the earth?” and it will respond with the most probable sequence of words it found in its training. If it trained on a whole lot of text with the words “earth” and “flat” in close context, it will tell you that “the earth is flat.” In contrast, given proper search queries, the established search engines access authoritative databases and are (usually) more reliable. 

The flat earth is one example of what’s called “hallucination” in tech lingo. ChatGPT may blather along happily, spilling out grammatically correct sentences that make no sense. The engineers who build ChatGPT propose to improve fact-checking by referencing the GPT back to Wikipedia. Presumably, future responses purporting to state facts will be checked against text in original, evidence-based reports. But not yet. 

A related caveat: if you ask GPT for references for its output, it is just as likely to hallucinate as with any other query. Suppose you asked it to “discuss the historical context of John Brown’s raid on Harper’s Ferry, and provide references to the original historical documents.” One would hope that the GPT would provide accurate documentation, real historical records. But it doesn’t necessarily do that. It’s still just stringing together the most probable word sequences. It sometimes makes up authors and titles just as it manufactures conversational chat. 

A second problem: ChatGPT cannot (yet) crunch numbers. It is not a calculator. Ask it to help solve an algebraic equation: it provides a nice step-by-step rubric for the solution, then oftentimes it completely botches the numerical calculation. There’s a fix in the works, and newer versions of ChatGPT have improved considerably. The GPT-4.0 engineers installed APIs, links from the GPT to the real calculator already on your computer or cell phone, and GPT uses that calculator when it knows it needs to do the math. But not always. 

More worrisome, and perhaps the greatest immediate threat from ChatGPT and its cousins, is deliberate use of the platforms to mislead. The GPTs are good at mimicking text. If you feed the GPT volumes of Hemingway, it learns to mimic Hemingway. It will produce new and compelling stories written in Hemingway’s sparse, elegant style. Ditto any one of (pick your favorite) world leaders or celebrities. Now add the similar neural network audio and video production technologies. They can produce (and already have produced) life-like video and audio clips of politicians’ statements that never occurred, celebrity events that never happened. 

That, my friends, is a real concern. We already are swamped with “alternative truths” on social media. Now we’ll be reading and looking at and listening to apparently authentic (“I saw it with my own eyes!”) productions that never happened. Imagine trying to determine the authenticity of a rogue AI video feed purporting to be Vladimir Putin at a Kremlin press conference announcing that his submarines have launched the Russian navy’s nuclear torpedo into New York harbor. 

Those are (some of) the dangers. We have to learn how to confront them. The genie is out of the bottle. Social media already causes plenty of worries, especially its effects on the mental health of adolescents. Like social media in general, ChatGPT and its video/audio cousins are new pocket knives. They are great tools, but we have to learn to use them carefully. So how do we do that? 

First piece of advice: try it out. ChatGPT-3.5 is still free for general use. You can log in at the OpenAI web site (OpenAI, 2023). Note that the login page offers links to all kinds of examples, but you can get started on your own by entering a query in the little text box at the bottom of the screen. “How are you today, GPT?” or “Please explain to me how ChatGPT works” or whatever crosses your mind. You can continue a conversation for as long as you like. 

Once you’ve got the hang of it, try its productivity. Ask it to write a lease agreement for your apartment, or ask it to write an essay about the origin of NATO, or ask it to write a story about Aunt Millie, her niece Gwendolyn, and Gwen’s pet turtle Spud on their trip to visit cousin George in a lighthouse on the Hebrides. Ask ChatGPT for a re-do if you don’t like the first tale it weaves. 

Like any other new tool, it takes practice to figure out how to enter queries and it takes experience to figure out what GPT responses are accurate and what’s not. Always be skeptical. Always double check any ChatGPT document you have to rely on. If you ask it for some kind of legal form – a contract or a will, say – you had better have it approved by your attorney. If you’re a pilot and ask it for a flight plan – well you’d better produce a plan by standard means and see how they compare. 

Classrooms have probably experienced the biggest disruptions so far. One survey reported that two out of five students at an Ivy League university had used ChatGPT to help on their final exams last spring semester. And it has certainly percolated into the classrooms here in Meeker and Rangely. Just ask the teachers. Its availability offers great temptation for cheating and shortcuts. Parents and teachers have to figure out how to educate kids in its proper use. 

That’s a whole lot easier said than done. Here are some ideas for starters, suggestions appropriate, I hope, not only for classrooms and home schools but for learners in general:

First, ChatGPT can be a great teaching assistant. It writes detailed lesson plans in an eye-blink, plans suitable for large classes or for home study. Just tell the GPT what you want. Study guides. Quiz questions. Class presentations. You name it. Only be sure to check accuracy. Note that you can ask ChatGPT to write handouts at any particular grade level, e.g. “ChatGPT, please write a quiz with 10 questions about the War of 1812. Write the questions as appropriate for fifth grade.” Try it out. 

ChatGPT also can serve as a friendly, one-on-one tutor. Students need guidance forming proper queries, but with the teacher’s help and some practice they’re off and running into new studies. For example, ask ChatGPT “Please show me how to find how high a toy rocket will rise if it is launched at an angle of 60 degrees with an initial velocity of 50 meters per second. Please go step by step, and pause in between steps to let me ask questions.” Try it. And of course, have the student check the ChatGPT math. That’s a good exercise in itself. 
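For readers who want to check the GPT’s arithmetic on that rocket question themselves, the physics is standard projectile motion: the maximum height is the square of the vertical launch speed divided by twice the gravitational acceleration. A minimal sketch, assuming no air resistance and g ≈ 9.8 m/s²:

```python
import math

# Toy rocket launched at 60 degrees with an initial speed of 50 m/s.
# Maximum height (ignoring air resistance): h = (v * sin(theta))^2 / (2 * g)
v = 50.0                   # initial speed, meters per second
theta = math.radians(60)   # launch angle, converted to radians
g = 9.8                    # gravitational acceleration, m/s^2

h = (v * math.sin(theta)) ** 2 / (2 * g)
print(f"Maximum height: {h:.1f} meters")  # about 95.7 meters
```

If ChatGPT’s step-by-step answer lands far from that figure, the student has caught a botched calculation, which is exactly the exercise.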

There are many other opportunities enabled by ChatGPT. Students can improve their writing skills by editing and critiquing GPT documents. Lessons with ChatGPT can help students (and the rest of us) learn how to determine fact from fantasy in our social media world. Check the GPT output against verified primary sources. Discuss. What’s real? What’s not? How do we know? 

Yes, some students will present GPT essays as their own. Some students will use GPT to answer test questions. Those problems will persist. They’re not new; ChatGPT is just the latest enabler. GPT engineers have tried watermarks, hidden code that tells the teacher (or employer, or project team supervisor) whether ChatGPT wrote the document. But those attempts have met little success so far, and watermarks present a nigh-impossible technical challenge, even theoretically. Insert a watermark and hackers probably have already dreamed up half a dozen workarounds. 

Equity also presents a challenge. Students with internet connectivity at home will have ChatGPT at their fingertips. Students without a computer and internet will not. Our two school districts have done a good job enabling access to technology. ChatGPT or similar AI has to be included among those technology tools. 

I’ve dwelt on issues in education because that’s where we’ve seen the most obvious impact. But the shock waves from ChatGPT extend far beyond our schools. Next week I’ll consider wider threats to our economy and, perhaps, to human existence. I’ll try to end this series of articles, though, with cheerful speculation that a ChatGPT coming soon might save our civilization. 

References:

OpenAI. 2023. ChatGPT login. https://chat.openai.com/auth/login


BY BOB DORSETT | Special to the Herald Times