As somebody teaching undergrads in the age of mechanical reproduction… yeah, totally. I ask people to cite LLM usage in projects just like they’d cite research resources. Broad feedback:
- some students actually do, and in general, these are really interesting: one showed their dialogue with the machine UI (i.e. parsing what it had suggested, changing it, discussing it), which indicates a more sophisticated understanding of both the topic in question and their own coding ability; the code they ended up writing was very much theirs, and they had used a suitable resource intelligently. Another sent the machine their code and asked, “can you help me improve this?”, which again, for a first year, is a sophisticated response: I’d like you to help me refactor and improve. Both these responses indicate that they’ve understood the abstract of what I was asking - and learning to abstract, to understand generic approaches and patterns, and to see what’s really being asked, is actually what I’m trying to teach.
- far more still don’t, and think I won’t notice, depressingly. The giveaways are there: usually, it’s baroque or overly complex approaches to things they’ve not been taught yet, or ways of naming variables that are… noticeably different to the rest of their output.
- and worse, most of this latter camp don’t pay attention to what the output is doing. Notably, many students missed the point of one brief I set, submitting code that used an approach I had explicitly asked them not to use. If you ask popular LLMs how to solve the brief… that approach is the first response they give.
And the worst thing is the marking; once you start seeing the same tics and turns of phrase appearing again and again, it’s like gorging on microwave meals.
I really felt for my colleagues marking essay subjects.
Teaching students how to learn is hard; it’s something they have to come to themselves a little. After all, everyone learns in different ways, and one thing people discover in the school-university transition is that part of the learning journey is down to them: they need to teach themselves.
This means that using the magic machines as “answer boxes” is terrible, because as @alanza points out, you really don’t learn much. Especially if you copy/paste results.
At the same time: as rubber ducks, and as iterative tools, they contain useful elements - but that utility is often only visible when you’re very experienced. The description I like best is, to paraphrase where I heard it (possibly Steve Yegge), “a very enthusiastic junior developer who types very quickly”. They are frequently wrong on certain details, but often they unblock me or steer me onto a better path. They are also often better at explaining than doing. But those are really sophisticated, expert-level manoeuvres (much as “using Google well and not just picking the first response” was). If you’re earlier in your learning, how would you know you’d been given an unhelpful answer?
Also: if you are working in something they are not very knowledgeable about (too obscure, too little content online, too recent), they will flail. They are not going to write Teletype because there’s not enough Teletype in the training data set. But they might be able to write you algorithms in Forth pseudocode, and that could be a really good starting point.
Finally, I’d point out that the most interesting dataset the main AI tool I use (GitHub Copilot) draws on is my own data. It is most effective for me on large projects, where the final context it uses on top of everything else is my entire codebase; it becomes like an extension of me, using existing modules in established ways, understanding how to do things in the context of my project. That is far more useful than a) chat interfaces and b) generic solutions. As someone often working alone, it has a noticeable, measurable impact on what I’m able to do. Am I just letting it splat whatever it wants into the codebase? No.
(Am I in a kind of minority in the way I think about this and use it? Probably. Am I Very Tired Of 99.99% of The Discourse? Absolutely. Do I wish people would use the wonderful brain they were given and realise there are no shortcuts to really, properly, actually learning things and becoming expert? Yes. But are these things without merit? I can’t honestly say “yes” to that. Would I still cope if they disappeared tomorrow? Yes. And I’d get to keep everything I’ve learned in the meantime.)