“ChatGPT is a development on par with the printing press, electricity and even the wheel and fire.”
That’s according to Lawrence Summers, the former Treasury Secretary in the Obama administration. I had heard about ChatGPT before, but I knew nothing about it. (Actually, when I listened to Summers, it sounded like he was referring more broadly to the ability of AI to think and express itself like humans.)
Here’s what I learned from an NYT article.
“In ChatGPT’s case, it read a lot. And, with some guidance from its creators, it learned how to write coherently — or, at least, statistically predict what good writing should look like.”
“It can help research and write essays and articles. ChatGPT can also help code programs, automating challenges that can normally take hours for people. Another example comes from a different program, Consensus. This bot combs through up to millions of scientific papers to find the most relevant for a given search and share their major findings. A task that would take a journalist like me days or weeks is done in a couple minutes.”
The benefits here are obvious, but, off the top of my head, here are some drawbacks:
- For humans, the ability to comb through lots of information and find the most relevant information could deteriorate.
- My sense is that different people make different judgments about what is relevant; the ability to do this, which includes making connections with other information, including seemingly unrelated information, can differ significantly from person to person. Will this capability become more uniform if done by an AI?
- My sense is that this process can lead to important insights. How will AI impact that?
In a survey, a group of scientists who work on machine learning had an even more dire response:
Nearly half said there was a 10 percent or greater chance that the outcome would be “extremely bad (e.g., human extinction).” These are people saying that their life’s work could destroy humanity.
This seems like a big problem, and proceeding anyway seems blatantly foolish:
“The problem, as A.I. researchers acknowledge, is that no one fully understands how this technology works, making it difficult to control for all possible behaviors and risks. Yet it is already available for public use.”
To go ahead with something that we don’t fully understand, but could pose an existential threat to humanity (albeit a relatively small probability) seems foolish. And how can we accurately assess the risk if we don’t fully understand how the technology works?
9 thoughts on “ChatGPT Thread”
This Atlantic article–“The End of High School English”–written by a high school English teacher–has a grimmer assessment of the effects of GPT, at least as it pertains to writing in schools–and maybe writing in general.
In a nutshell, when students inevitably get access to GPT, they’ll be able to use it to create essays. Teachers won’t be able to know if the student or GPT did the work.
I share the teacher’s grim view, and I’ll explain that by responding to a section of the piece:
My knee-jerk reaction: It is absolutely worth doing. Not only that, it’s essential. To me, learning to write well is the same thing as learning to think well. Indeed, I’m not sure it’s possible to think well without being able to write well. (That’s probably going too far.)
Knowledge, understanding, and insight are things that are created. For an individual to develop substantive knowledge and understanding, the individual has to process and create this themselves. I don’t mean that each person has to create knowledge and understanding from a blank slate. Instead, they must digest and build up the knowledge in a way that makes sense to them.
To me, writing is the key tool for doing this. I have a hard time imagining anything that can adequately replace this.
Opinion: “Here’s how teachers can foil ChatGPT: Handwritten essays”
I liked hearing about the side benefits of hand-written essays.
Right now, I would not want to write by hand. It takes too long, and it would be more of a mess for me. I wonder if this could mean a comeback for typewriters…or some word processor…basically a computer-type device that is not connected to the internet.
Required hand-written essays will skew the playing field even more for many students with language-based learning differences, but if certain accommodations can be made (and available to all, whether diagnosed or not) it could work.
I blue-booked almost every English exam I ever had and while it was laborious, I managed okay.
But writing long-hand was more common and frequent back in our day. Even though I grew up writing a lot in long-hand, I now dislike writing in long-hand, especially for longer periods of time. I can imagine this would be difficult for students now.
I haven’t read everything mentioned here because when this was first posted, there was just furor and seeming near-panic, and my inclination is to let that stuff settle down before I wade in.
I’ve proposed using ChatGPT in some of the work we do around here, and it seems to have helped in some areas (writing acknowledgment letters to donors) and not in others (coming up with taglines for multi-million-dollar fundraising campaigns). As we in the office have thrown around a few ideas, it became clear I couldn’t ignore it any more. Which is why I found this article (and a few others like it) most interesting.
This is more along the lines of what I was thinking during last winter’s uproar. As a teacher, I’d like to know how to leverage such a cool device. As a writer, I wonder if it can make me better at my job. Can it make my coworkers better writers too, as many of them want to be?
And yes: what are the long-term consequences? I think drawing a comparison between AI and spellcheck is a good start. What were the negative consequences of spellcheck, now that we have all these decades to look at it?
I have to lead a discussion on two articles about ChatGPT, and I want to see if my basic understanding of it is accurate.
Here goes. ChatGPT is essentially a program(?) that answers questions by a) drawing on available information from the internet, and b) answering those questions by predicting the words that should appear in sequence, which will form proper sentences. For example, if I ask ChatGPT to explain quantum mechanics in the style of Shakespeare, ChatGPT will draw on available information about quantum mechanics and Shakespeare’s writing, and then predict the sequence of words that will satisfy the query.
Is that basically correct?
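To make the “predict the next word” idea concrete, here is a minimal toy sketch. This is not how ChatGPT actually works internally (it uses a large neural network trained on enormous amounts of text, not simple word counts); the corpus and the bigram approach here are just an illustrative assumption to show what “statistically predicting the next word” can mean:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; real models train on vastly more text.
corpus = "to be or not to be that is the question".split()

# Count how often each word follows each other word (a "bigram" model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("to"))   # "be", since "be" follows "to" twice in the corpus
```

Chaining such predictions word by word produces sentences; the difference with ChatGPT is scale and a far more sophisticated model of context, not the basic idea of prediction.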
(Note: I meant to respond to your post above, but I forgot to. Will respond later.)
The gist sounds right to me, but if anything it may be oversimplified. Mitchell can correct me if I’m wrong, but the power of AI seems to be the ability to recognize. Prior to AI, someone would have to “label” things and put that info into a computer to be analyzed. As in: this is a horse, this is a boat, this paragraph has an upbeat tone, etc. AI is now able to do that on its own.

On my iPhone I can search through my pictures, type “potato,” and it will try to return anything that looks like a potato, probably even mashed potatoes. To me that’s a form of AI. It uses all the pictures that were already labeled “potato” as a reference, but it can look at a picture it has never seen before and determine, or learn, that this is a potato as well.

That’s AI in its most simplistic form, but I’ve seen an NHK show in which computers with AI were put into taxi cabs. The AI would use information on weather, what time the train or bus will come, time of day, when certain shops in the area are busy, where people are based on their phones, where other taxi cabs are, etc., to determine where the taxi would best wait to get a customer at any given time. In my simple mind, the power of AI in this instance seems to be taking data in real time, determining a weight of importance for each piece of data, and then solving the problem.
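The “weight of importance for each piece of data” idea can be sketched very simply. The weights, features, and locations below are entirely made up for illustration; a real system like the one in the NHK show would learn weights from data rather than hard-code them:

```python
# Hypothetical weighted-scoring sketch: rank candidate waiting spots
# by combining signals, each multiplied by an importance weight.

def score(features, weights):
    """Weighted sum of feature values; higher means a better waiting spot."""
    return sum(weights[name] * value for name, value in features.items())

# Made-up importance weights for each signal.
weights = {"rain": 2.0, "train_due": 1.5, "nearby_phones": 1.0}

# Made-up real-time readings (0.0 to 1.0) for two candidate spots.
spots = {
    "station": {"rain": 1.0, "train_due": 1.0, "nearby_phones": 0.4},
    "mall":    {"rain": 1.0, "train_due": 0.0, "nearby_phones": 0.9},
}

best = max(spots, key=lambda s: score(spots[s], weights))
print(best)  # "station": a train due in the rain outweighs a busier mall
```

The interesting part in a real system is not the sum itself but how the weights are learned and updated as conditions change.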
“Recognize” is a really weighty word in this context, as it implies awareness and understanding–cognizance on the part of the program. Based on what I’ve heard, I don’t think ChatGPT has this–although it creates the impression that it does.
In contrast, a program predicting, based on algorithms (and I’m not sure what else), the next appropriate word in a sequence of words avoids this implication. Constructing a coherent and appropriate response by predicting words in sequence doesn’t require awareness or understanding.
While “predictions” may seem to downplay ChatGPT’s abilities, I don’t mean it that way. The predictions can lead to very impressive results.