Machine-generated code

this has become a topic with the potential to invade and derail many other threads, so it now has a thread of its own.

6 Likes

Could ChatGPT write Teletype scripts?

I don’t have a teletype, so I don’t know if this is right!

1 Like

It is not even a little bit right

10 Likes

20 characters of AI being a shameless liar.

9 Likes

Lol. That’s a good wild guess at the syntax though.

Nevertheless I see Teletype/ChatGPT (or similar) as a potentially powerful text-to-music generation system, but I don’t know enough about ML engineering to train a model on the syntax.

it succeeded insofar as it discerned teletype is a programming language that has tokens separated by spaces. as many of you know, this is true for almost all programming languages, so i’m not going to award it any points for that.

let’s be very clear about just how wrong this is:

  • note is not an operator
  • it seems to assume that midi note names are valid tokens, which they are not
  • it is not aware of the 6 line limit
  • “module’s settings” is vague at best, more just suggesting that its training data made it aware that midi notes normally have a duration and this should be mentioned for legitimacy. the closest approximation of note duration is trigger time, which is not a setting but a realtime variable
  • “sending” a script to teletype again is just vague assumptions about computer-stuff. the two ways to “send” a script are to type it manually, or load it from a usb-stick— neither of which evoke “send”
  • even if note were somehow valid, there is no suggestion of any relation to, or awareness of, the output capabilities
  • there is no awareness of time. to play this as a “sequence” (rather than an all-at-once cluster) you’d need a timing component involved, like a metro (though there are numerous ways to do this, if you have the actual capability to reason through simple problems, which this robot doesn’t)

(if someone wants to post a correct answer to the prompt above that’d be quite amusing)
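
for reference, here’s one plausible shape of a correct answer — an untested sketch from memory of the ops, so verify against the teletype manual before trusting it. put the scale degrees in a pattern in the init script, then advance through the pattern on each trigger:

#I
P.L 5
P 0 0
P 1 2
P 2 4
P 3 7
P 4 9

#1
CV 1 N P.NEXT

pattern 0 holds the pentatonic intervals in semitones, P.NEXT steps through the working pattern (wrapping at the length set by P.L), N converts a semitone count to a CV value, and script 1 runs once per pulse at trigger input 1. note that each script stays within the 6-line limit.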


what’s particularly frustrating about the chatgpt response is the confidence that it exudes— that it’s trying to fool us, like some slimy crypto salesman. for its level of capability, the chatgpt programmers should’ve instilled a copious amount of humility and qualification in the responses of their creation. can you imagine if what it actually said was:

“i’m sorry but i only have a small amount of information about teletype, but here’s my best guess about how its syntax works. it’s probably wrong, so be sure to check an official source.”

24 Likes

don’t bother, and please reconsider the whole proposition

(basically you’ll need to become a capable programmer in order to debug the garbage output)

15 Likes

What @tehn said, and also let’s talk about LLMs, as these are not intelligent. What you describe as “proficient” is just statistically matched output that will probably correlate with your input. Even though GPT-4 etc. have a high probability of getting things right, the fact remains that weighted statistics can never match or predict reality 100 percent.
An LLM combined with static code analysis, compiler checks, etc. might get you pretty far, maybe even in an automated way. But the result will still require auditing. Even if the code were mathematically verifiable, which is ongoing research, that verification work still needs to be done.

1 Like

thanks for the input - guess I was getting ahead of myself there. interesting article though @tehn. thanks for sharing.

1 Like

i saw the original post at the time but didn’t see the deleted other stuff, so maybe this was already addressed, but i (unfortunately for my mental health) have to offer some pushback to what i perceive as the consensus here.

i find the left wing critique of this tech to be incoherent. it’s way overhyped, it’s nothing new, it doesn’t really work, it’s not smart, it’s the crypto of current year—but also it’s an incredibly disruptive technology that will fundamentally reshape society for the worse. i’m not saying it can’t be MOST of those things, but i kinda feel like denying the utility of these LLMs full stop is just causing the most vital voices to self-select out of the conversation.

i say this with the utmost respect to everyone here who works in software, but it seems as though people who regularly interact with code have a poor grasp of how difficult it is, or at least of how it’s difficult for them in a way that is different from “the rest of us”. code overwhelms not because of the complexity (most people probably don’t make it that deep), or because of learned helplessness, or whatever—but because of that stretch of time when the problem is already solved in your head and you still cannot get the solution reflected on the screen. you can understand how code works, understand its elements, look up the syntax and examine prior art, and still waste months of your life typo hunting. it is far and away the most frustrating thing i am compelled by my hobbies to engage in.

i completely understand and empathize with the plight of educators not wanting to deal with the insanely overwritten or ill advised procedurally generated stolen code that these tools will usher in—maybe a norm will emerge where developing with AI assist tools disqualifies you from asking for help with your code? maybe that’s what we’re really doing here when we respond this way? because otherwise i’m having a hard time seeing this train stopping.

for a real world example, here’s my most recent experience asking for coding help on llllllll. and below is what would have happened if i had thought to type my forum post almost verbatim into bing.

bing conversation

i am trying to send touché data (outputs CCs 16-19) to my op-1 midi lfo (accepts CCs 1-4). i think i have to use this template script on the current webpage context to accomplish this. here’s my function:

function user_midi_event(id, data)
  local msg = midi.to_msg(data)
  -- example points to this line to manipulate data, so i add:
  if msg.type == "cc" then
    if msg.cc == 17 then
      msg.cc = 1
      print("cc " .. msg.cc .. " = " .. msg.val)
    end
  end
end

the print message outputs the expected behavior, but the cc values don’t seem to actually arrive in the appropriate place. what am i missing?


Searching for: norns midi to_msg

Hello, this is Bing. I can try to help you with your code problem based on some web sources.
The code you shared is using a template script for passthrough, which is a library that allows MIDI routing between connected devices on norns. You are trying to send touché data (outputs CCs 16-19) to your op-1 midi lfo (accepts CCs 1-4) by changing the CC number in the user_midi_event function.
The problem with your code is that you are only printing the changed CC number, but not sending it to the target device. You need to use the send method of the midi device object to send MIDI data after modifying it. For example:
function user_midi_event(id, data)
  local msg = midi.to_msg(data)
  -- example points to this line to manipulate data, so i add:
  if msg.type == "cc" then
    if msg.cc == 17 then
      msg.cc = 1
      print("cc " .. msg.cc .. " = " .. msg.val)
      -- send modified data back to device
      midi_device[id]:send(midi.to_data(msg))
    end
  end
end

The midi.to_data function converts a message table back to a byte table that can be sent as MIDI data. You also need to use midi_device[id] instead of midi_device[target] because id is the port number of the incoming device and target is an option parameter that may not match.
I hope this helps you fix your code problem. :blush:


got it. so i was just thinking about the order of operations wrong? the way i read the example script was that it mods the mod. so i thought that in init, where it does this:

if mod_running then
  -- optional
  -- this informs passthrough about the script-defined callbacks for midi data
  passthrough.user_event = user_midi_event
end

i would be sending my updated msg.cc value as defined in that first function to the passthrough mod, which would then handle sending as it normally does. instead, when using the example mod script, you’ve got to handle that stuff on your own?


Yes, you are correct. The example mod script is only a template that shows how to use passthrough as a library in your own script. It does not modify the passthrough mod itself. You have to handle the sending of MIDI data on your own after manipulating it in the user_midi_event function. The passthrough mod only handles the routing of MIDI data between devices according to the settings in the system menu. :blush:


i get a more thorough, contextualized answer (even with weird semantic hallucinations? like midi[id] vs midi[target]?). i get new code to test (if and when it errors i can copy/paste the error and have it explained to me in human language and probably get something new to test). my clarifying question trying to comprehend why i was wrong actually gets answered. i don’t waste a Serious User’s time (always SO appreciative of everyone’s help!! but guilt around this keeps me from asking a lot of questions!). and i get this within like 12 seconds of pressing send instead of the next morning (or never). this is just better.

at the same time, i also understand that the reason it is better is because, much like my computer, my phone, basically every product or service from california that i interact with—shout out palo alto by malcolm harris—it’s incomprehensibly unethical. but it’s going to become increasingly more difficult to get people to engage in the numerous valid and extremely concerning critiques if we insist that autocomplete developing a passable approximation of logic offers no real convenience or improvement over earlier forms of the technology.


@fakeplasticdrake

ALL THAT SAID, you do still need to understand code in order to do this (i learned lua through the monome studies). since LLMs are trained on a bunch of stolen code from the internet, their initial suggestions tend to be wildly overkill for the scope of a norns script (“you should really use this super intense library to do [basic function], it is more robust in commercial pipeline and supports windows users” stuff), but if you read code well enough to see when it’s doing that and when it’s making things up (it also loves to invent lua libraries)—at least for me—it seems to speed the process along. i talked a little about my experience using it to build a script over here

there’s this idea in screenwriting of the vomit draft. the hardest part is staring at a blank page, so you want to write something down as quickly as possible and get into revising. that’s all this really does, but when you don’t have enough hours a month to remember if your programming language starts at 0 or 1, etc, it can definitely help. maybe there are material reasons using it is worse than buying a macbook or whatever, but i am not familiar with them at time of writing.


anyway to try to leave this on a more positive note i will throw another left wing critique that wisely highlights a genuinely good thing being done with the technology

In Aotearoa New Zealand, a small non-profit radio station called Te Hiku Media, which broadcasts in the Māori language, decided to address this disparity between the representation of different languages in technology. Its massive archive of more than 20 years of broadcasts, representing a vast range of idioms, colloquialisms and unique phrases, many of them no longer spoken by anyone living, was being digitised, but needed to be transcribed to be of use to language researchers and the Māori community. In response, the radio station decided to train its own speech recognition model, so that it would be able to “listen” to its archive and produce transcriptions.

Over the next few years, Te Hiku Media, using open-source technologies as well as systems it developed in house, achieved the almost impossible: a highly accurate speech recognition system for the Māori language, which was built and owned by its own language community.

[I]t created a model that recognised recorded speech with 86% accuracy – more than enough to get it started transcribing its full archive.

Te Hiku Media’s achievement cleared a path for other indigenous groups to follow, with similar projects now being undertaken by Mohawk peoples in south-eastern Canada and Native Hawaiians. It also established the principle of data sovereignty around indigenous languages, and by extension, other forms of indigenous knowledge. When international for-profit companies started approaching Māori speakers to help build their own models, Te Hiku Media campaigned against these efforts, arguing, “They suppressed our languages and physically beat it out of our grandparents, and now they want to sell our language back to us as a service.”

16 Likes

Thanks so much for this response @ypxkap. You’ve rather eloquently made every point I would have tried to make myself on this topic. The ‘vomit draft’ is quite similar to how I approach composition so having this apply to LLMs resonated pretty solidly with me. Feels like about as apt an approximation of where GPT etc could/would/should sit in a workflow as you’re going to get.

To be honest, at first I felt rather despondent about the seemingly dismissive responses here from some of the elder statesmen of lines. Having had time to reflect I can totally see why (for example) the prospect of reviewing code spat out by LLMs would suck for someone who has invested a lot of time and effort into breathing life into this usually cosy, welcoming corner of the music internets. I wouldn’t exactly be enthused by that prospect either.

Still though - as you’ve mentioned, this train ain’t stopping. Much like deepfakes, this is a powerful, disruptive technology that will only get more powerful and more disruptive as time goes on, regardless of how objectionable the underlying ethics are.

I guess I don’t really have a point to make here that isn’t yours but worse. I just wanted to acknowledge your response and say thanks for taking the time to voice your feelings.

5 Likes

I’m confused by this. I can see how it might help get past a blank page when writing an essay, but I don’t understand how it would do anything to help avoid or resolve errors.

And because of that, I’d argue that a so-called vomit draft is a pretty terrible approach to getting started with a programming task.

Write pseudocode for your first draft. Write it thoughtfully. Have some respect for your work, don’t “vomit” it. Don’t try to run your pseudocode, you need all the right syntax for that. When it’s time to make your pseudocode functional, do it a little bit at a time so you aren’t trying to debug a tangle of errors all at once.
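
To make that concrete, here’s a small illustration of “a little bit at a time” — entirely my own sketch in Lua (the local dialect here), not from anyone’s actual project. Draft 1 is pure pseudocode in comments; draft 2 makes just one piece real and tests it in isolation:

-- draft 1: pure pseudocode, nothing runs yet
-- for each incoming cc message:
--   if it is cc 17, remap it to cc 1
--   forward the (possibly modified) message

-- draft 2: make only the remap real, and test it alone
local function remap_cc(cc)
  if cc == 17 then return 1 end
  return cc
end

assert(remap_cc(17) == 1)
assert(remap_cc(18) == 18)
print("remap ok")

Only once that piece behaves do you move on to the forwarding step, so you’re never debugging more than one new thing at a time.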

It’s the “little at a time” process that was really helpful to learn in a classroom with an instructor present. Most of programming is debugging, and learning how to debug requires thought processes that benefit from guidance in the early days.

The hardest part of learning to program is the simple fact that your early efforts are not going to be very impressive. Small things are small. Sometimes you’ll write code that does nothing more than instruct you about how your programming language works. You might have to do that sort of thing for quite a while before it starts to get more interesting. This long slow part is the real learning part though. Best not to try to shortcut it.

As for LLMs, I’m talking to startup folks and trying to find an ethical path through. It’s not looking promising to me! But the more optimistic among us think there may be ways to use secondary models to fine-tune things for better accuracy in specific domains. I’m encouraged by these efforts and believe the current crop of tools is going to look pretty sad by comparison in a relatively short period of time. But I have no way of knowing that for sure. I don’t own or rent the petabyte-scale storage necessary for training primary models and I’m not enough of a math genius to know the most efficient methods for training secondary ones. I’m just a wary observer. From my vantage point I could also see these fine-tuning efforts failing, at which point these startup folks are adrift in the Great Garbage Patch we call the internet.

So I find myself in this strange position of somewhat rooting for some aspects of the AI industry, because if the current mainstream OpenAI/ChatGPT approach remains the dominant one, I believe it is a net loss for humanity. It increases noise, making it harder to make the signal heard.

7 Likes

The critique does not go against the use of LLM-like technology in general, but against products such as chatgpt specifically. As usual, it is not the tech itself posing the problem, but humans’ ways of (ab)using it.
The way you interact with bing, chatgpt etc. is straight from the playbook of habit-forming application design (prompt, result, maybe success, dopamine, repeat). This, in the hands of monopoly seekers, is bad enough, and we have seen it abused time and time again, from Facebook to instant purchase on Amazon. That is only the psychological addiction (which is real enough).
Then there is the danger of physical addiction through lock-in. Have everyone join for free; once they depend on it, lock it down and make ’em pay.

I totally agree that there is utility in the tech. We released our first ML product in 2015, without telling anyone that it relies on ML for its functionality, because no one would care. We now have a range of products and all of a sudden marketing slaps the A word on the label. :man_shrugging:
The reason I tell this is to underline that the tech is useful as well as disruptive. But let’s not repeat the mistakes we all made by buying into proprietary computing, to the point that we use black-box devices like iPhones for state affairs like tax declarations.

My utopian hope is to lose my job because everyone can create open and free software. I will happily make burgers, do farm work or childcare once that happens. Becoming a builder sounds nice too. This utopia will not happen with openAI or any other of the LLM sellers out there.

Edit: maybe a mod wants to move this discussion from procedural art to its own thread.

3 Likes

just for my own sanity i shouldn’t get into a huge back and forth with everybody here who has taught me so much about this kind of thing over the last several years. so i apologize if i duck out of the discussion suddenly, but i’ll try to stay for a few clarifications.

i have one other post in this thread about the first time i tried gpt 3 and how i had a lot of fun with weird generative text and after that first demo, the spell was broken and i completely lost interest. i couldn’t be bothered to try the chat version when it came out at all. to me, chatgpt is a much worse and more sinister product because it presents itself as being so intelligent it has a moral obligation to avoid [politically charged topic x]. so it’s actively misrepresenting its capacities by claiming its goal is to avoid reproducing harmful biases, all the while repeatedly and confidently reproducing slightly more subtly harmful biases (love to have all of my knowledge served with a side of california ideology).

@jasonw22
re: vomit draft, etc.

obviously, i would have loved to have had a dedicated instructor when i was learning, but that wasn’t in the cards for me

here was my most recent process:

  1. i have a task (convert subtitle files to markers in resolve)
  2. i break down the task into what i imagine the elements are, in pseudocode
  3. i open the resolve scripting documentation in microsoft edge and query bing: “reference the documentation on this page, here’s what i want to do, here’s how i think it might work”, and paste the pseudocode
  4. bing says “i see you want to do x y z. that could work but can i suggest [lua library that doesn’t exist], [weird enterprise scale solution i do not need], etc”
  5. i smile and nod at it (because these things get WEIRD if you try to argue with them when they’re hallucinating) and then say “no let’s stick with the original plan”
  6. bing generates working functions from my pseudocode, but my pseudocode doesn’t actually work.
  7. i open my script.lua in microsoft edge and bing treats it as a web page.
  8. troubleshooting via trial and error: “do you think the problem is with this function”, “can you add print statements to this whole script”, etc (functions in lua have to be defined before they’re called—notably not how the norns works! i had no idea)
  9. i get to a version that works with one srt file much more quickly than i would have been able to on my own.
  10. i ask bing about converting this to process multiple files at once and it gives me some steps. this is the kind of task where not being able to code super often really slows me down; i know i have to change all my single variables into tables or a for loop for each file or something (see the sketch after this list), but i don’t use this part of my brain often enough to remember best practices.
  11. ask it about the first step it suggests and say “ok let’s do this step, for that i would do xyz, is that a good approach” and it says “yes that could work, it would look something like this”
  12. when the program is finished i use chatgpt, paste the working version 1 code from step 9 and ask it to rewrite the code minified so that it is short enough to paste into bing
  13. i ask bing “look at this known-good minified code and compare it against the code on this page which should function the same but works with multiple files, are there any redundancies in the code on the page” and it identifies four redundancies. i fix these redundancies and the code works.
  14. i have now reproduced the simonsaysai transcription service ($15 per hour of audio transcribed) with my own code in one day (evilcorp already released a free audio to srt generator app), something that i have tried and failed to do multiple times since 2018 (my first attempt involved uploading video files to a private youtube link to download the autogen srts lol)
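
to make step 10 concrete, here’s a hedged sketch of that single-file-to-many-files refactor — the file names are hypothetical and none of this is my actual script: the single-file logic becomes a function, and the “tables or a for loop” part is just calling it per path.

-- parse one .srt: collect the start time (in seconds) of each cue line
local function srt_to_markers(path)
  local markers = {}
  for line in io.lines(path) do
    -- match "HH:MM:SS,mmm --> ..." timing lines
    local h, m, s, ms = line:match("^(%d+):(%d+):(%d+),(%d+) %-%->")
    if h then
      table.insert(markers, tonumber(h) * 3600 + tonumber(m) * 60 + tonumber(s) + tonumber(ms) / 1000)
    end
  end
  return markers
end

-- the multi-file version is just the single-file function in a loop
local srt_files = { "ep01.srt", "ep02.srt" } -- hypothetical paths
for _, path in ipairs(srt_files) do
  local markers = srt_to_markers(path)
  print(path .. ": " .. #markers .. " markers")
end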

maybe i am much better at coding than i believe i am and this tech didn’t actually do anything i couldn’t already do within my significant time constraints, but that is definitely not how i experience it!

again i understand and agree with almost all of the negative aspects of this i am not focusing on right now that others are highlighting. my issue is that often the critique of the ideological priors of these companies gets mixed in with a heavy dose of twitter “and it doesn’t actually work that well” dunks.

my experience is that if you spend a bit of time with it, the tech is self evidently useful. this is precisely what makes it so dangerous! if people cared about information services not being capable of producing reliably accurate information, wikipedia would have been a passing fad too.

the issue is that the tech is distributed to the public by companies who are not democratically accountable and who will never voluntarily submit to democratic control. to me, almost every other discussion around whether it works or is appropriate to use in x field just serves to distract people from this, which is ultimately what benefits these companies the most.

10 Likes

thanks for the pushback. i was thinking about your story a lot already of course.

that actually makes sense to me.

agreed. while there are creative elements to writing code, it’s ultimately a craft like carpentry. the joints fit or they don’t. even “agile” and “extreme” approaches to getting to POC/MVP super quick don’t encourage initial drafts that don’t work as intended. (instead they focus on minimizing initial scope.)

and,

exactly. these tools encourage a trial-and-error approach to development due to the way they are designed. this is actually a classic beginner trap… some trial and error is unavoidable, but minimizing it in favor of tighter design is a major step towards real proficiency. it’s an applicable lesson whether you are working with an LLM or just with yourself, but LLMs have all these other issues


i’ve written an embarrassing amount already on this but maybe this central point has been obscured: in LLM-generated code, any kind of error is possible, anywhere, at any time.

just think about what a nightmare that is for a code reviewer. usually LLM code has errors that no human would ever make, and therefore wouldn’t think to look for. (“design” and “architecture” errors. even small functions have them.)

LLM code also very often has fundamental arithmetic and logical errors. like allocating a table with 64 elements and then treating it like it has 63 or 65. these errors are random and not easy to spot. this is a fundamental limitation of using LLMs to do math.
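
a contrived illustration of that class of error (my own example, not actual LLM output):

-- allocate 64 slots...
local buf = {}
for i = 1, 64 do buf[i] = 0 end

-- ...but only ever process 63 of them. this runs without any error,
-- silently ignores buf[64], and nothing flags it for you.
for i = 1, 63 do
  buf[i] = buf[i] + 1
end

the code runs, nothing crashes, and the bug only surfaces when something downstream notices one stale element.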

now, to speak to one of your other points, that doesn’t mean that AI coding engines aren’t possible. they exist (Codex, Copilot etc) and i’m sure a lot of work is being done to integrate LLMs with formal static analysis in programming environments where this is feasible (probably not norns, lol) to produce provably correct programs.

that’s not what we’re looking at here though.

like, @ypxkap, your end-result script looks ok to me. it’s pretty simple. but it has some pretty big issues too. i just feel a little crazy even looking at it long enough to determine that, let alone pointing out these “design errors” that were made by a statistical process. it feels like a waste of my life, you know?

but just this once, here are a couple issues:

  • i don’t think a human would design it with a separate note_channels array. the program doesn’t guarantee that the note num and channel are placed in the same index in the respective arrays; it happens to work now, but if you (say) want to implement a channel offset, or any other manipulation of the history, then things will get ugly pretty fast, and it would be easy to introduce a bug where those tables are out of sync. it should pretty clearly be a table of pairs: { {num, chan}, {num, chan}, ... } (see the sketch after this list).

  • the treatment of midi vports is not robust. it attempts to go beyond the remit of the program by building a list of device names, but it doesn’t know that the name list it builds for the device parameter on lines 23-24 may not match the device names on line 88, because the user may have gone into the menus and reassigned vports at any time. (the convention in scripts is to just select vport indices and remain agnostic of the physical devices, which would have made this program less complicated and more correct.)
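
here’s a minimal sketch of that table-of-pairs design (field names are my own, not from the script):

local note_history = {}

local function note_on(num, chan)
  -- num and chan always travel together, so they can’t fall out of sync
  table.insert(note_history, { num = num, chan = chan })
end

local function note_off(num, chan)
  -- remove the most recent matching entry
  for i = #note_history, 1, -1 do
    local e = note_history[i]
    if e.num == num and e.chan == chan then
      table.remove(note_history, i)
      break
    end
  end
end

any later manipulation of the history (a channel offset, say) then touches one entry at a time instead of two parallel arrays.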

additionally:

this process has resulted in a misunderstanding: all the standard script functions on norns are already defined (as no-op placeholders if nothing else) and the script is re-defining them. you can see this by poking around in the norns system lua, and you can inject your own functionality into some of these base definitions or subsequent wrappers (it’s how mods work.)

the chatbot has led you to a wrong conclusion, with limited immediate consequence but creating an obstacle to deeper understanding, and fostering the concept of the software environment as something almost mystically abstruse and unknowable.


i don’t understand this method of learning, and to me it doesn’t actually look like “learning,” but ultimately it’s not my place to say if it works for you or not.

it is my place to decide whether i want to spend my time as an ML validating device. and i don’t. i’d rather talk to people or just write code for them. (because i enjoy designing, writing, and solving problems with code.)


there’s another important point you made: you didn’t get any help with your problem when posting on the passthrough page.

i get that, sounds frustrating. but forums are slow and often have low-priority engagement. i think the answer is putting more effort into live environments like discord and workshops. this is sort of a balancing act for developer time, but i think we can do better than we have done.

like, i would like to do more regular workshops and engage on that level. but i’d also like to put more time into core engineering and into other projects. burnout is real. maybe things like sponsorship can help… and i’ll leave that for another time/thread.


finally,

sign me up too. i’m not just riding an ideological high horse here - i am literally looking at where the chatGPT tools are today by experimenting with them, and they aren’t in what i consider a helpful place at all, at least for the kind of problems i solve with code, and not AFAICT for education. (but i’ll concede that i’m not really an educator.)

i do think that ML tools will soon make parts of my job obsolete. not LLM tools though. i’m thinking more of contexts where correctness is provable and the problem domain is well defined, like in Faust, which is a functionally pure DSP graph language.

it is possible to actually build an error function for this kind of thing, where the ML component can actually test its own program. to me that is a non-negotiable requirement for using ML in engineering. (and it’s rather obvious; i mean, we’ve actually been doing this kind of thing at some scale for a long time. it’s just the chat stuff that has really broken out and spilled over into engineering now.)
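
to make “error function” concrete, here’s a toy version — entirely my own illustration, nothing to do with Faust’s actual tooling: score a candidate function against a reference on a set of test inputs, and reject anything with nonzero error.

-- the known-good behavior we want to match
local function reference(x) return x * 0.5 end

-- total absolute error of a candidate over a grid of test inputs
local function error_of(candidate)
  local err = 0
  for i = -10, 10 do
    local x = i / 10
    err = err + math.abs(candidate(x) - reference(x))
  end
  return err
end

print(error_of(function(x) return x * 0.5 end)) -- 0: accepted
print(error_of(function(x) return x * 0.6 end)) -- > 0: rejected

a generator wired to a function like this can check its own output before a human ever sees it, which is the non-negotiable part.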


also agreed. i’m choosing to speak to pragmatic issues primarily, but the other issues are really more important (i just think they can be better articulated by others and they also require some political common ground.)

16 Likes

I’m thinking about how I use this concept in my own professional software career, and in my scripting in my own time.

I use it so much — but not in a way an ai would be much help with.

My first draft, my goal is to get it basically working, and in doing so learn where my initial assumptions about the structure to build were wrong. I’m testing my own assumptions here, which the AI doesn’t share. At some point it either works, or I know enough about why my assumptions were wrong. Either way, I will usually completely rewrite it. My assumptions are never right enough at the beginning for clean, well-structured code.

Then I often do it again. Usually on the third time it starts feeling roughly right.

11 Likes

ok last reply on this for me. SO sorry you looked at the midi channel code lol. that specific one was my first try and really just curiosity seeing if it was actually possible, so i did literally no human refinement on it. i did clock those areas where it looks insane, but wanted to present it anyway because i honestly was not expecting it to work at all (the same way bing works for everything else). i have since done the reading and seen 500 pieces of content doing similar things and probably wouldn’t have bothered if i had seen them first.

but that in turn reminds me of the discourse on here about “general purpose computing”, spaghetti code, “it’s ok to make a mess if you are just doing it for your own stuff”, etc. which i personally find very empowering. and i carry the awareness that i’m doing things “the wrong way” when i engage with LLMs for this.

of course the results are not going to be robust, maintainable, secure long term, will break unnecessarily in weird ways, etc. but i am not, nor am i aspiring to be, a software engineer.

so i personally see a lot of overlap between the conceptual framework of (what i view as) the monome ethos of demystifying and making code approachable etc, and the use of LLMs to assist you in that task. i understand ideologically they are completely different and incompatible. but i think it’s interesting that we talk about the freedom of writing bad point a to b code all the time in other threads, but when i talk about using a computer to help generate the bad code here, a bunch of people are talking to me about how my solutions aren’t robust and i am robbing myself of a joy that can only be found by the deep understanding of codebase, preferably under the watchful eye of a teacher.

like, i assume this is true (i trust you over bing lol). but even you trying to explain this to me, i have no idea what this means, why it’s different from what i said, or why it matters.

which is fine! this isn’t a personal issue for me. like, i use personal examples because i feel like i have the “credibility” here to invite the debate instead of the initial reflexive dismissal. but for example, i wasn’t frustrated by a slow forum response, i was guilty i had to ask at all—very trad only child “must be self sufficient” baggage in my head. for people with related baggage, i think many eventually will turn to whatever imperfect solutions on the marketplace there are.

i don’t know, but i strongly suspect that this is going to be more similar to the facebookification of the web than it will be to NFTs or whatever because a) it’s already backed by legacy institutional support in a way blockchain companies never really had until much much later in the game and b) it’s actually possible for normal people to use without them having to think about any of the horrible implications of using it. i think it’s worth thinking about how we’re going to engage with that as it happens, but i don’t know what the answer is.

anyway thanks for reading along everyone and i will continue following the discussion with interest

7 Likes

cool, sorry about that and thanks for the feedback. i’ll try again (possibly failing again)

like for example redraw is always defined, here:
[ norns/script.lua at main · monome/norns · GitHub ]

all these redraw and init and engine things are global variables that are defined before the script is run, and are expected to be re-defined as new functions.

i guess i just think knowing that is better than assuming that norns lua mysteriously works by different rules than other lua. when you see a global variable, you can assume it’s defined somewhere, and if you search for its definition in the norns sources you can learn about how it’s used and infer how it should be re-defined if needed.
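
a tiny example of what that re-definition looks like in practice (a sketch using the standard norns script entry points; see norns/script.lua for the real placeholders):

-- norns defines init() and redraw() as no-op globals before your script loads;
-- a script simply assigns new functions to those same global names.
function init()
  print("hello from my script")
  redraw()
end

function redraw()
  screen.clear()
  screen.move(0, 32)
  screen.text("hello")
  screen.update()
end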

i am sorry that i am not a better explainer of things. it probably isn’t my strong suit. as it happens i am thinking about offering some formal workshops in the near future, but they will be “intermediate” level - i don’t think i excel at introductions.

i mean… yeah i feel seen. it’s not a totally rational response. but i would be infinitely happier to engage with code that you wrote from scratch, no matter how weird or difficult it is:

  • because we both have the same kind of skin in the game, i’m not wondering whether it took you 12 hours or 12 seconds to get to this point.

  • because i can read that code and get some understanding of your thought process! the errors can reveal misconceptions or things that could be shown better in our docs, or whatever. whereas with the generated code i don’t know which problems come from you (design/concept) and which come from the generator. at minimum i would have to pore over the history of your chat to get to the same level of understanding.

  • because as i’ve said, all the generated code i’ve seen / created with chatGPT has done weird stuff that a human wouldn’t do. it’s tiring even to look for those errors when there is no “train of thought” at all.

i mean, i know you see how this might be a crummy proposition so i won’t belabor it more.


a little tangential to the topic, relevant to this sub-thread: my part in the chatGPT hoo-hah started from my responses to this post. i’m learning a lot from the resulting discussions, largely about how people are maybe turning to the chatbots because they are better explainers than the norns dev team. maybe this is not really resolvable because bots have unlimited free time and we don’t and that resource trumps all others. (yikes.)

but maybe we can also make some incremental changes in how we demonstrate stuff. as an experiment i’ll take this “edward slicerhands” idea as a test case and start a dev diary thread on how i might go about approaching the implementation, from POC to wherever it goes.

12 Likes

This was a somewhat random late night impulse and I admit that I’ve not checked that the response is in any way accurate… but I’m once again stunned by the obvious fact that this model has been trained on all things public, and will present itself as capable of solving prompts for the teletype. Human input in quotes, bot responses in code blocks.

do you know what the Monome Teletype is and can you help me create scripts for it?

Yes, I'm familiar with the Monome Teletype! The Monome Teletype is a powerful scripting platform that allows you to control modular synthesizers and other devices using a text-based interface. I can definitely help you create scripts for it!

To get started, can you tell me a bit more about what you're trying to do with your Teletype? Are you trying to control a specific device or create a particular kind of sound? The more information you can give me, the better I'll be able to help you create the scripts you need.

I want a script that plays a new note for a pentatonic scale on each new trigger, please write this for me

Sure! Here's a script that will play a new note from a pentatonic scale each time it receives a trigger:
#PENTATONIC TRIGGERED NOTES
M 1

#List of Pentatonic Scale notes
I P.N 0 2 4 7 9

#Trigger Input
TR.P 1

#Script to play notes on output 1
SCRIPT 1:
  L 1 5: CV 1 I P.N RAND 5: DEL 20
In this script, we use M 1 to set the script's metro to a default value of 1ms. I P.N creates a list of pentatonic scale notes. TR.P 1 waits for a trigger input on input 1. SCRIPT 1: defines a script that runs on output 1. L 1 5: sets up a loop that runs five times. CV 1 I P.N RAND 5: sets the CV output to a random note from the pentatonic scale list. DEL 20 sets a delay of 20ms between each note.

You can adjust the values to suit your needs. Let me know if you have any questions!

This is presented without any comment or criticism of the underlying paradigm… I honestly can’t fully process any of that myself yet