Machine-generated code

I don’t have a good citation for this, but I read somewhere that the creator of ChatGPT said its code output was pretty much crap (paraphrased, of course).

I found that simultaneously funny and terrifying… And somehow not at all surprising…


I think the better take on all of this, instead of scrutinizing and calling things bullshit, is to encourage folks who want to create but don’t have the time to dedicate to the art of coding to change their approach slightly. These new tools could be a boon for productivity in this community - no need to gatekeep.

@branch instead of simply prompting ChatGPT with the kitchen sink right away (a full spec of what you want), start simpler. Get it to produce no more than 10-30 lines of code at a time. This will build your knowledge and reduce the chance of error - all without you needing to study Lua/general programming for hours, weeks, or years before getting started.

Ask it to produce some code that does nothing interesting, like making an audible tone or drawing a simple graphic to the screen. Ask it to do something that is guaranteed to work first. Once you have that, start building on that foundation and guiding it towards complexity, instead of just expecting it to produce a usable result right off the bat.
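As a concrete starting point, an “audible tone” script on norns can be this small. This is a hand-written sketch, assuming a stock install with the built-in TestSine engine; it only runs on norns itself, so treat the engine name and commands as things to verify against your own system:

```lua
-- play a steady tone using the built-in TestSine engine.
-- norns-only: `engine` is part of the norns script environment.
engine.name = "TestSine"

function init()
  engine.hz(220)  -- oscillator frequency in Hz
  engine.amp(0.5) -- output level
end
```

Something this size is trivial to read in full, which is the whole point: you can confirm every line before asking for the next step.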

ChatGPT IS a tool (to those saying it isn’t, I would suggest that a change of approach might help).
But it’s up to us humans to know the difference between watering our garden with a firehose vs. a nice slow trickle. Do not blame the tool - it is doing what it is designed to do. The onus is still on the human to do the critical thinking. ChatGPT can simply help you explore these worlds quicker.

Hot take, I know. But, I just think there is a place for folks who want to use this technology to contribute to the world of software. This community is smart enough; we can figure out how to empower everyone to contribute.


** EDIT ** - while I appreciate and respect my initial response here, I have since been educated on why this stance could be problematic. This community is built on a deep reverence for the art of software engineering. It provides all the necessary resources to get you from knowing nothing to feeling empowered to create. While I still believe there may be a place for AI chat tools on the learning journey, they should be treated as supplemental to doing the actual work.


I think if we start with the assumption that everybody is busy and managing some subset of obligations and ambitions that excludes executing everything, we’re on good footing. There aren’t many useful shortcuts toward the things we want to produce, because our belief in the value of investing in them is what inspires others to participate. That said, you can do a lot with a little. Scripting for norns has a fairly low cost of entry, and most of the heavy lifting has already been done. It’s also well documented and approachable. Making software is always a creative challenge, just like making music or novels.


Love this post. Thank you for writing it!


i’m sorry but i really don’t agree with this.

if you don’t want to read my whole crazy rant, here’s the main point i’d make in response:

i haven’t seen any such thing. there is always a chance of the thing pulling up something that is just drastically wrong. this appears to be equally true in every domain of factual knowledge. for example all models of GPT seem to “think” that i was born in 1975 instead of 1981, for no “reason” whatsoever (because reasons don’t exist in the GPT universe, only dice.) i guess a tool like Codex or Copilot that is made for code completion can probably have functionally total correctness, but the chat-based tools really don’t.

in a more apropos example, gpt4 made a totally inexplicable off-by-one error in a trivial lua function: “copy the top half of a table to the bottom half, ignoring the last element if the size of the table is odd.” it got the size limit wrong, ignoring 1 or 2 elements. a lot of lua newcomers have issues with 1-based indexing, so this is not helping anyone.
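for reference, here is one hand-written reading of that spec in plain lua (the function name, and the interpretation of “top half” as the first half, are my own assumptions, not the original prompt’s code):

```lua
-- one reading of the prompt: copy the first half of a table onto the
-- second half, leaving the final element alone when #t is odd.
local function copy_half(t)
  local half = math.floor(#t / 2)
  for i = 1, half do
    t[half + i] = t[i]
  end
  return t
end

print(table.concat(copy_half({1, 2, 3, 4}), " "))    -- 1 2 1 2
print(table.concat(copy_half({1, 2, 3, 4, 5}), " ")) -- 1 2 1 2 5
```

note that `math.floor(#t / 2)` handles both the even and odd cases in one expression; getting that boundary wrong is exactly the 1-based-indexing trap that trips up lua newcomers.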

(that is the basic issue i have with this stuff as an engineer. i also have some personal preferences that are more social and about how i want to interact with collaborators and knowledge-sharers, but i suppose you’re right and i shouldn’t try to “gatekeep” according to such preferences. however, the use of these tools will depress my own willingness to engage with the ecosystem, fwiw.)

lordy, this rant is way too long.

<rant deleted>


Sorry I don’t want to bring the debate too far from the topic but I want to share my reflections on this, hope this isn’t unwelcome/too far out:

Afaik no technology has ever been “neutral”: it has always been shaped inside a particular power group, with explicit and implicit objectives in mind. I don’t really think of AI as a neutral tool, because I find the motives behind it to be mostly at odds with what the human being is. We humans haven’t questioned technological innovations very much in our brief history, especially in this “rush-hour” beginning of the 21st century. So I really hope there will be space for the “critical thinking” mentioned above.


Is a simple “hello world” not guaranteed to work? Look, I am simply encouraging people who do not code, who see this as an opportunity to create something, to slightly alter their approach instead of telling them “this is bullshit and none of it works and yadda yadda”.

That way, perhaps they WILL start to understand the basics, gain confidence they can figure it out without AI and next thing you know we have another member of the community contributing in a meaningful way. Remember, we can all create issues on Github and open PRs if we think a project can be improved.

I just felt like the OP was somewhat ostracized and am simply voicing a perspective that we can work with this, not against it.

Cheers. No further comments from me.


why not just do norns study 1? that is “hello world” written in a way that explains each step— literally a bunch of copy-paste exercises with extended commentary and a sensible flow to facilitate understanding. we literally poured thousands of hours into this ecosystem, with the explicit goal of teaching and helping people “get started”

so for this purpose, autofill is absolutely bullshit. if we warn people of this loudly enough perhaps they won’t waste their time or even worse, decide programming actually sucks, because autofill sucks.


and I am eternally grateful to all who contributed to this. I can say from experience that without having had the massive amount of detailed documentation and help from users I would never have been able to write a script coming from a non-coding background.


hey i am not trying to fight and i think it’s a little extra to call what i’m doing “ostracizing.” i really hope you appreciate some of my points and i do thank you for sharing your perspective even if we disagree.

it’s not! i think it’s likely to work but i’ve seen egregious errors with trivial things, as i described above. and “hello world” as a norns script is less trivial than in some environments. (i’ll try it!)

but tracking down the kinds of errors i’ve seen would be super frustrating for a newcomer. i’ve seen one functional script come out of an LLM-driven process, and said process seemed full of unnecessary pain. (incidentally it also could have been mostly replaced by a single call to an existing norns API.)

but more importantly: we have “hello world” already! i think there’s a disconnect here: if @branch wants to learn to code on norns, we can help. we can help way more effectively than even incremental and careful use of LLM generation. but i think they’re saying they probably won’t really have time for that, and just want to share ideas. i think that is great. we can use this as a workshop topic etc etc.

so anyway i just struggle to see where LLM generation can be used here to good effect. i’m not guessing! i’ve seen it used many times now and only to bad or mediocre effect. in source code generation. as a sort of random text idea generator thing, sure why not? (i used to have an OS9 app like 25 years ago called “McPoetry,” wonderful - i like digital madlibs as much as the next nerd. but my point here is that i don’t want to waste time reviewing and debugging generated source code, i would hate that.)

(that’s again putting aside some philosophical issues with the whole thing and the way it’s implemented and marketed etc.)


to close the loop: gpt4 creates a slightly broken “hello world” norns script given the prompt: “write a script for monome norns that displays “hello world” on the screen.”

it is almost correct and does show “hello, world,” but unfortunately doesn’t stop there: (1) it also binds one of the keys to a system function that should not be used in scripts and could (say) soft-lock the system menu. (2) it provides installation instructions that are total bulshytt and imply that norns is a USB mass storage device, which is already a source of confusion (it’s a host only.)
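for contrast, a minimal hand-written version needs only the documented script lifecycle, with no key bindings at all. this is a sketch from memory of the norns screen API, runnable only on norns itself:

```lua
-- hello world, the boring way: just the script API, no key handlers.
function init()
  redraw()
end

function redraw()
  screen.clear()
  screen.move(64, 32)
  screen.text_center("hello, world")
  screen.update()
end
```

and installation is just dropping the file into `dust/code/` via SMB or maiden - no mass storage involved.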

moving on now…



my part in this hoo-hah about chatGPT code started from my responses to this post. i’m learning a lot from the resulting discussions, largely about how people are maybe turning to the chatbots because they are better explainers than the norns dev team. maybe this is not really resolvable because bots have unlimited free time and we don’t, and that parameter trumps all others. (i hope not!)

but maybe we can also make some incremental changes in how we make stuff. as an experiment i’ll take this “edward slicerhands” idea as a test case and start a dev diary thread on how i might go about approaching the implementation, from POC to wherever it goes.

i’ve no idea how useful this is really but it seemed that for this kind of thing it’s better to write something more than comments in the source.

in this case i do hope people pose their questions in the thread if they are curious about any specific decisions.


This is a great resolution and appreciated.


while i definitely support the decision to move these conversations to a dedicated thread, a note for future travelers: due to being merged from multiple threads, some of these rants and replies have been severely disarranged w/r/t timeline order, and some references no longer make sense (e.g. in some cases above we are discussing handwritten code and not generated code.)


As maybe the world’s #1 user of the thing I feel gladly obliged to let you know the good news that McPoet (if that’s the one you remember) lives on as JanusNode :slight_smile:


hahaa! that’s the one. thank you! (of course it belongs now in the other thread - see? mixed up)

i guess i still think the evident randomness of markov chains is a little more compelling than the uncanny qualities of the LLM stuff for creative text purposes, but maybe that’s just familiarity


I share a lot of feelings in this thread…

  • suspicion and distrust of the corporations behind the new AI tools
  • delight for the folks who find programming accessible for the first time
  • horror at what might happen to communities formed around open, text-based communication now that text synthesis is a thing (for a glimpse into the future, look at Tinder)
  • hope that this might make software development more fun

Personally, my one ChatGPT success has been using it to generate an explanation of existing code. In this case I took a Perl 4-liner that would have taken me a while to decode in my head, and generated a textual description of its function with ChatGPT. Of course I still had to double-check that the translation was correct, but it did feel easier than starting from scratch.

I’m reading a blog by Simon Willison, who is doing a lot of useful and realistic thinking out loud on LLM-assisted coding. See for example: AI-enhanced development makes me more ambitious with my projects

Large language models exist now, corporations came and took all the stuff we put for free on the internet. Some folk are trying to use local laws to undo that (e.g. in Italy). What do we do to prevent them doing that again? Will online communities like Lines go underground beneath the dark forest and become invite-only?


So you go to ChatGPT and give away your ideas. Think about it.


Ideas are cheap. 


Building upon the idea of generating Norns code, I was interested in how much basic SuperCollider ChatGPT “knows” and whether it could help me generate small parts of code (think an oscillator and some basic signal processing) to replace googling the reference. I’m sorry to say that it failed horribly: it invented non-existent functions, used good functions in the wrong way, and used generic functions where there were perfect functions for that particular use case.
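For the record, the kind of snippet I was after is only a few lines of stock SuperCollider. This is a sketch written from memory, so the UGen arguments are worth double-checking against the reference, which was exactly my point:

```supercollider
// sine oscillator into a resonant low-pass filter, spread to stereo.
(
{
    var sig = SinOsc.ar(220, 0, 0.2);   // freq, phase, mul
    RLPF.ar(sig, 800, 0.3) ! 2          // cutoff 800 Hz, rq 0.3
}.play;
)
```

When something this short is the target, the reference documentation is arguably faster than prompting anyway.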

I’m already using ChatGPT to create small shell scripts for various task automation, so I was hoping for better results, as SuperCollider has more available documentation than Norns, but no luck. Not looking to spark another debate, just documenting my findings so that other people don’t waste their time.