this has become a topic which has the potential to invade and derail many other topics. this thread will be devoted to this topic instead.
It is not even a little bit right
20 characters of AI being a shameless liar.
Lol. That's a good wild guess at the syntax though.
Nevertheless I see Teletype/ChatGPT (or another model) as a powerful text-to-music generation system, but I don't know anything about ML engineering to train a model on the syntax.
it succeeded insofar as it discerned that teletype is a programming language with tokens separated by spaces. as many of you know, this is true for almost all programming languages, so i'm not going to award it any points for that.
let's be very clear about just how wrong this is:
- note is not an operator - it seems to assume that midi note names are valid tokens, which they are not
- it is not aware of the 6 line limit
- "module's settings" is vague at best, more just suggesting that its training data made it aware that midi notes normally have a duration and that this should be mentioned for legitimacy. the closest approximation of note duration is trigger time, which is not a setting but a realtime variable
- "sending" a script to teletype is again just a vague assumption about computer-stuff. the two ways to "send" a script are to type it manually or load it from a usb stick, neither of which evokes "send"
- if somehow note was indeed valid, there is no suggestion of relation to or awareness of the output capabilities - there is no awareness of time. to play this as a "sequence" (rather than an all-at-once cluster) you'd need a timing component involved, like a metro (though there are numerous ways to do this, if you have the actual capability to reason about simple problems, which this robot doesn't)
(if someone wants to post a correct answer to the prompt above that'd be quite amusing)
what's particularly frustrating about the chatgpt response is the confidence that it exudes - that it's trying to fool us, like some slimy crypto salesman. for its level of capability, the chatgpt programmers should've instilled a copious amount of humility and qualification in the responses of their creation. can you imagine if what it actually said was:
"i'm sorry but i only have a small amount of information about teletype, but here's my best guess about how its syntax works. it's probably wrong, so be sure to check an official source."
don't bother, and please reconsider the whole proposition
(basically you'll need to become a capable programmer in order to debug the garbage output)
What @tehn said, and also let's talk about LLMs, as these are not intelligent. What you describe as "proficient" is just statistically matched output that will probably correlate with your input. Even though gpt4 etc. have a high probability of getting things right, the fact remains that weighted statistics can never match or predict reality 100 percent.
LLM combined with static code analysis, compiler checks etc. might get you pretty far. Maybe even in an automated way. But the result will still require auditing. Even if the code were mathematically verifiable, which is ongoing research, the work still needs to be done.
thanks for the input - guess I was getting ahead of myself there. interesting article though @tehn. thanks for sharing.
i saw the original post at the time but didn't see the deleted other stuff, so maybe this was already addressed, but i (unfortunately for my mental health) have to offer some pushback to what i perceive as the consensus here.
i find the left wing critique of this tech to be incoherent. it's way overhyped, it's nothing new, it doesn't really work, it's not smart, it's the crypto of current year - but also it's an incredibly disruptive technology that will fundamentally reshape society for the worse. i'm not saying it can't be MOST of those things... but i kinda feel like denying the utility of these LLMs full stop is just causing the most vital voices to self-select out of the conversation.
i say this with the utmost respect to everyone here who works in software, but it seems as though people who regularly interact with code have a poor grasp of how difficult it is, or at least that it's difficult for them in a way that is different from "the rest of us". code overwhelms, not because of the complexity (most people probably don't make it that deep), or due to learned helplessness, or whatever - but because of the period of time when the problem is solved in your head but you cannot manage to get the solution to be reflected on the screen. you can understand how code works, understand the elements, look up the syntax and examine prior art, and still waste months of your life typo hunting. it is far and away the most frustrating thing i am compelled by my hobbies to engage in.
i completely understand and empathize with the plight of educators not wanting to deal with the insanely overwritten or ill-advised procedurally generated stolen code that these tools will usher in - maybe a norm will emerge where developing with AI assist tools disqualifies you from asking for help with your code? maybe that's what we're really doing here when we respond this way? because otherwise i'm having a hard time seeing this train stopping.
for a real world example, here's my most recent experience asking for coding help on llllllll, and below is what would have happened if i had thought to type my forum post almost verbatim into bing.
bing conversation
i am trying to send touché data (outputs CCs 16-19) to my op-1 midi lfo (accepts CCs 1-4). i think i have to use the template script on the current webpage context to accomplish this. here's my function:

function user_midi_event(id, data)
  local msg = midi.to_msg(data)
  -- example points to this line to manipulate data, so i add:
  if msg.type == "cc" then
    if msg.cc == 17 then
      msg.cc = 1
      print("cc " .. msg.cc .. " = " .. msg.val)
    end
  end
end

the print message outputs the expected behavior, but the cc values don't seem to actually arrive in the appropriate place. what am i missing?
Searching for: norns midi to_msg
Hello, this is Bing. I can try to help you with your code problem based on some web sources.
The code you shared is using a template script for passthrough, which is a library that allows MIDI routing between connected devices on norns. You are trying to send touché data (outputs CCs 16-19) to your op-1 midi lfo (accepts CCs 1-4) by changing the CC number in the user_midi_event function.
The problem with your code is that you are only printing the changed CC number, but not sending it to the target device. You need to use the send method of the midi device object to send MIDI data after modifying it. For example:
function user_midi_event(id, data)
  local msg = midi.to_msg(data)
  -- example points to this line to manipulate data, so i add:
  if msg.type == "cc" then
    if msg.cc == 17 then
      msg.cc = 1
      print("cc " .. msg.cc .. " = " .. msg.val)
      -- send modified data back to device
      midi_device[id]:send(midi.to_data(msg))
    end
  end
end
The midi.to_data function converts a message table back to a byte table that can be sent as MIDI data. You also need to use midi_device[id] instead of midi_device[target] because id is the port number of the incoming device and target is an option parameter that may not match.
I hope this helps you fix your code problem.
got it. so i was just thinking about the order of operations wrong? the way i read the example script was that it mods the mod. so i thought in init where it does this:

if mod_running then
  -- optional - this informs passthrough about the script-defined callbacks for midi data
  passthrough.user_event = user_midi_event
end

i would be sending my updated msg.cc value as defined in that first function to the passthrough mod, which would then handle sending as it normally does. instead, when using the example mod script, you've got to handle that stuff on your own?
Yes, you are correct. The example mod script is only a template that shows how to use passthrough as a library in your own script. It does not modify the passthrough mod itself. You have to handle the sending of MIDI data on your own after manipulating it in the user_midi_event function. The passthrough mod only handles the routing of MIDI data between devices according to the settings in the system menu.
i get a more thorough, contextualized answer (even with weird semantic hallucinations? like midi[id] vs midi[target]?). i get new code to test (if and when it errors i can copy/paste the error and have it explained to me in human language and probably get something new to test). my clarifying question trying to comprehend why i was wrong actually gets answered. i don't waste a Serious User's time (always SO appreciative of everyone's help!! but guilt around this keeps me from asking a lot of questions!). and i get this within like 12 seconds of pressing send instead of the next morning (or never). this is just better.
at the same time, i also understand that the reason it is better is because, much like my computer, my phone, basically every product or service from california that i interact with (shout out palo alto by malcolm harris), it's incomprehensibly unethical. but it's going to become increasingly more difficult to get people to engage in the numerous valid and extremely concerning critiques if we insist that autocomplete developing a passable approximation of logic offers no real convenience or improvement over earlier forms of the technology.
ALL THAT SAID, you do still need to understand code in order to do this (i learned lua through the monome studies). since LLMs are trained on a bunch of stolen code from the internet, their initial suggestions tend to be wildly overkill for the scope of a norns script ("you should really use this super intense library to do [basic function], it is more robust in commercial pipelines and supports windows users" stuff), but if you read code well enough to see when it's doing that and when it's making things up (it also loves to invent lua libraries) - at least for me - it seems to speed the process along. i talked a little about my experience using it to build a script over here
there's this idea in screenwriting of the vomit draft. the hardest part is staring at a blank page, so you want to write something down as quickly as possible and get into revising. that's all this really does, but when you don't have enough hours a month to remember if your programming language starts at 0 or 1, etc, it can definitely help. maybe there are material reasons using it is worse than buying a macbook or whatever, but i am not familiar with them at time of writing.
anyway, to try to leave this on a more positive note, i will throw in another left wing critique that wisely highlights a genuinely good thing being done with the technology
In Aotearoa New Zealand, a small non-profit radio station called Te Hiku Media, which broadcasts in the Māori language, decided to address this disparity between the representation of different languages in technology. Its massive archive of more than 20 years of broadcasts, representing a vast range of idioms, colloquialisms and unique phrases, many of them no longer spoken by anyone living, was being digitised, but needed to be transcribed to be of use to language researchers and the Māori community. In response, the radio station decided to train its own speech recognition model, so that it would be able to "listen" to its archive and produce transcriptions.
Over the next few years, Te Hiku Media, using open-source technologies as well as systems it developed in house, achieved the almost impossible: a highly accurate speech recognition system for the Māori language, which was built and owned by its own language community.
[I]t created a model that recognised recorded speech with 86% accuracy, more than enough to get it started transcribing its full archive.
Te Hiku Media's achievement cleared a path for other indigenous groups to follow, with similar projects now being undertaken by Mohawk peoples in south-eastern Canada and Native Hawaiians. It also established the principle of data sovereignty around indigenous languages, and by extension, other forms of indigenous knowledge. When international for-profit companies started approaching Māori speakers to help build their own models, Te Hiku Media campaigned against these efforts, arguing, "They suppressed our languages and physically beat it out of our grandparents, and now they want to sell our language back to us as a service."
Thanks so much for this response @ypxkap. You've rather eloquently made every point I would have tried to make myself on this topic. The "vomit draft" is quite similar to how I approach composition, so having this apply to LLMs resonated pretty solidly with me. Feels like about as apt an approximation of where GPT etc could/would/should sit in a workflow as you're going to get.
To be honest, at first I felt rather despondent about the seemingly dismissive responses here from some of the elder statesmen of lines. Having had time to reflect I can totally see why (for example) the prospect of reviewing code spat out by LLMs would suck for someone who has invested a lot of time and effort into breathing life into this usually cosy, welcoming corner of the music internets. I wouldn't exactly be enthused by that prospect either.
Still though - as you've mentioned, this train ain't stopping. Much like deepfakes, this is a powerful, disruptive technology that will only get more powerful and more disruptive as time goes on, regardless of how objectionable the underlying ethics are.
I guess I don't really have a point to make here that isn't yours but worse. I just wanted to acknowledge your response and say thanks for taking the time to voice your feelings.
I'm confused by this. I can see how it might help get past a blank page when writing an essay, but I don't understand how it would do anything to help avoid or resolve errors.
And because of that, I'd argue that a so-called vomit draft is a pretty terrible approach to getting started with a programming task.
Write pseudocode for your first draft. Write it thoughtfully. Have some respect for your work, don't "vomit" it. Don't try to run your pseudocode; you need all the right syntax for that. When it's time to make your pseudocode functional, do it a little bit at a time so you aren't trying to debug a tangle of errors all at once.
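To make that concrete, here's a minimal sketch (a made-up example of mine, not anyone's real script) of a draft that starts as comments and becomes real one piece at a time:

-- draft: all pseudocode, nothing runs yet
-- 1. read an incoming midi message
-- 2. if it's a cc between 16 and 19, remap it to 1-4
-- 3. send the remapped message onward

-- pass 1: make only step 2 real, and test it by itself
local function remap_cc(cc)
  if cc >= 16 and cc <= 19 then
    return cc - 15 -- 16 -> 1, 17 -> 2, 18 -> 3, 19 -> 4
  end
  return cc -- leave other cc numbers untouched
end

print(remap_cc(17)) --> 2
print(remap_cc(3))  --> 3

Once that tiny piece behaves, you make the next step functional, testing as you go.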
It's the "little at a time" process that was really helpful to learn in a classroom with an instructor present. Most of programming is debugging, and learning how to debug requires thought processes that benefit from guidance in early days.
The hardest part of learning to program is the simple fact that your early efforts are not going to be very impressive. Small things are small. Sometimes you'll write code that does nothing more than instruct you about how your programming language works. You might have to do that sort of thing for quite a while before it starts to get more interesting. This long slow part is the real learning part though. Best not to try to shortcut it.
As for LLMs, I'm talking to startup folks and trying to find an ethical path through. It's not looking promising to me! But the more optimistic among us think there may be ways to use secondary models to fine-tune things for better accuracy in specific domains. I'm encouraged by these efforts and believe the current crop of tools is going to look pretty sad by comparison in a relatively short period of time. But I have no way of knowing that for sure. I don't own or rent the petabyte-scale storage necessary for training primary models and I'm not enough of a math genius to know the most efficient methods for training secondary ones. I'm just a wary observer. From my vantage point I could also see these fine-tuning efforts failing, at which point these startup folks are adrift in the Great Garbage Patch we call the internet.
So I find myself in this strange position of somewhat rooting for some aspects of the AI industry, because if the current mainstream OpenAI/ChatGPT approach remains the dominant one, I believe it is a net loss for humanity. It increases noise, making it harder to make the signal heard.
The critique does not go against the use of LLM-like technology in general, but products such as chatgpt specifically. As usual, it is not the tech itself posing the problem, but humans' way of (ab)using it.
The way you interact with bing, chatgpt etc. is straight from the playbook of habit-forming application design (prompt, result, maybe success, dopamine, repeat). This in the hands of monopoly seekers is bad enough, and we have seen it abused time and time again, from Facebook to instant purchase on Amazon. That is only the psychological addiction (which is real enough).
Then there is the danger of physical addiction through lock-in. Have everyone join for free; once they depend on it, lock it down and make 'em pay.
I totally agree that there is utility in the tech. We released our first ML product in 2015, without telling anyone that it contains ML for its functionality, because no one would care. We now have a range of products and all of a sudden, marketing slaps the A word on the label.
The reason I tell this is to underline that the tech is useful as well as disruptive. But let's not repeat the mistakes we all made by buying into proprietary computing to the point that we use black box devices like iPhones for state affairs like tax declarations etc.
My utopian hope is to lose my job because everyone can create open and free software. I will happily flip burgers, do farm work or childcare once that happens. Becoming a builder sounds nice too. This utopia will not happen with either openAI or any other of the LLM sellers out there.
Edit: maybe a mod wants to move this discussion from procedural art to its own thread.
just for my own sanity i shouldn't get into a huge back and forth with everybody here who has taught me so much about this kind of thing over the last several years. so i apologize if i duck out of the discussion suddenly, but i'll try to stay for a few clarifications.
i have one other post in this thread about the first time i tried gpt 3 and how i had a lot of fun with weird generative text, and after that first demo, the spell was broken and i completely lost interest. i couldn't be bothered to try the chat version when it came out at all. to me, chatgpt is a much worse and more sinister product because it presents itself as being so intelligent it has a moral obligation to avoid [politically charged topic x]. so it's actively misrepresenting its capacities by claiming its goal is to avoid reproducing harmful biases, all the while repeatedly and confidently reproducing slightly more subtly harmful biases (love to have all of my knowledge served with a side of california ideology).
@jasonw22
re: vomit draft, etc.
obviously, i would have loved to have had a dedicated instructor when i was learning, but that wasn't in the cards for me
here was my most recent process:
- i have a task (convert subtitle files to markers in resolve)
- i break down the task to what i imagine the elements are in pseudocode
- i open the resolve scripting documentation in microsoft edge and query bing: "reference the documentation on this page, here's what i want to do, here's how i think it might work" and paste the pseudocode
- bing says "i see you want to do x y z. that could work but can i suggest [lua library that doesn't exist], [weird enterprise scale solution i do not need], etc"
- i smile and nod at it (because these things get WEIRD if you try to argue with them when they're hallucinating) and then say "no, let's stick with the original plan"
- bing generates working functions from my pseudocode, but my pseudocode doesn't actually work.
- i open my script.lua in microsoft edge and bing treats it as a web page.
- troubleshooting via trial and error: "do you think the problem is with this function", "can you add print statements to this whole script", etc (functions in lua have to be defined before they're called - notably not how norns works! i had no idea)
- i get to a version that works with one srt file much more quickly than i would have been able to on my own.
- i ask bing about converting this to process multiple files at once and it gives me some steps. this is the kind of task where not being able to code super often really slows me down. i know i have to change all my single variables into tables or a for loop for each file or something (see the sketch after this list), but i don't use this part of my brain often enough to remember best practices.
- i ask it about the first step it suggests and say "ok let's do this step, for that i would do xyz, is that a good approach" and it says "yes, that could work, it would look something like this"
- when the program is finished i use chatgpt, paste the working version 1 code from step 9 and ask it to rewrite the code minified so that it is short enough to paste into bing
- i ask bing "look at this known-good minified code and compare it against the code on this page which should function the same but works with multiple files, are there any redundancies in the code on the page" and it identifies four redundancies. i fix these redundancies and the code works.
- i have now reproduced the simonsaysai transcription service ($15 per hour of audio transcribed) with my own code in one day (evilcorp already released a free audio to srt generator app), something that i have tried and failed to do multiple times since 2018 (my first attempt involved uploading video files to a private youtube link to download the autogen srts lol)
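(the sketch referenced above: a rough illustration of the single-file-to-multiple-files change. the helper names are hypothetical stand-ins, not the real resolve calls:)

-- stand-ins for the real conversion steps (hypothetical names)
local function parse_srt(path) return { path = path } end
local function add_markers(markers) print("adding markers from " .. markers.path) end

-- before: one hard-coded file
-- local markers = parse_srt("/path/to/subs.srt")

-- after: the same work, once per entry in a table
local srt_files = {
  "/path/to/episode1.srt",
  "/path/to/episode2.srt",
}

for _, path in ipairs(srt_files) do
  local markers = parse_srt(path)
  add_markers(markers)
end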
maybe i am much better at coding than i believe i am and this tech didn't actually do anything i couldn't already do within my significant time constraints, but that is definitely not how i experience it!
again, i understand and agree with almost all of the negative aspects of this that i am not focusing on right now, which others are highlighting. my issue is that often the critique of the ideological priors of these companies gets mixed in with a heavy dose of twitter "and it doesn't actually work that well" dunks.
my experience is that if you spend a bit of time with it, the tech is self-evidently useful. this is precisely what makes it so dangerous! if people cared about information services not being capable of producing reliably accurate information, wikipedia would have been a passing fad too.
the issue is that the tech is distributed to the public by companies who are not democratically accountable and who will never voluntarily submit to democratic control. to me, almost every other discussion around whether it works or is appropriate to use in x field just serves to distract people from this, which is ultimately what benefits these companies the most.
thanks for the pushback. i was thinking about your story a lot already of course.
that actually makes sense to me.
agreed. while there are creative elements to writing code, it's ultimately a craft like carpentry. the joints fit or they don't. even "agile" and "extreme" approaches to getting to POC/MVP super quick don't encourage initial drafts that don't work as intended. (instead they focus on minimizing initial scope.)
and,
exactly. these tools encourage a trial-and-error approach to development due to the way they are designed. this is actually a classic beginner trap... some T-E is unavoidable, but minimizing it in favor of tighter design is a major step towards real proficiency. it's an applicable lesson whether you are working with an LLM or just with yourself, but LLM has all these other issues...
i've written an embarrassing amount already on this but maybe this central point has been obscured: in LLM-generated code, any kind of error is possible, anywhere, at any time.
just think about what a nightmare that is for a code reviewer. usually LLM code has errors that no human would ever make, and therefore wouldn't think to look for. ("design" and "architecture" errors. even small functions have them.)
LLM code also very often has fundamental arithmetic and logical errors. like allocating a table with 64 elements and then treating it like it has 63 or 65. these errors are random and not easy to spot. this is a fundamental limitation of using LLMs to do math.
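(a made-up miniature of that failure mode in lua, not taken from any actual generated script:)

-- a table intended to hold exactly 64 values
local buf = {}
for i = 1, 64 do buf[i] = 0 end

-- the classic slip: iterating 0..63 in 1-indexed lua.
-- buf[0] lands outside the array part, and buf[64] is never written.
for i = 0, 63 do
  buf[i] = i * 2
end

print(buf[64]) --> 0 (stale), while buf[0] silently exists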
now to speak to one of your other points, that doesn't mean that AI coding engines aren't possible. they exist (Codex, Copilot etc) and i'm sure a lot of work is being done to integrate LLM with formal static analysis in programming environments where this is feasible (probably not norns, lol) to produce provably correct programs.
that's not what we're looking at here though.
like @ypxkap, your end result script looks ok to me. it's pretty simple. but it has some pretty big issues too. i just feel a little crazy even looking at it long enough to determine that, let alone pointing out these "design errors" that were made by a statistical process. it feels like a waste of my life, you know?
but just this once, here are a couple issues:
- i don't think a human would design it with a separate note_channels array. the program doesn't guarantee that the note num and channel are placed in the same index in the respective arrays; it happens to work now, but if you (say) want to implement a channel offset, or any other manipulation of the history, then things will get ugly pretty fast, and it would be easy to introduce a bug where those tables are out of sync. it should pretty clearly be a table of pairs: { {num, chan}, {num, chan}, ... } (sketched below.)
- the treatment of midi vports is not robust. it attempts to go beyond the remit of the program by building a list of device names, but it doesn't know that the name list it builds for the device parameter on lines 23-24 may not match the device names on line 88, because the user may have gone into the menus and reassigned vports at any time. (the convention in scripts is to just select vport indices and remain agnostic of the physical devices, which would have made this program less complicated and more correct.)
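(a minimal sketch of the pair-table idea from the first bullet; the names are illustrative, not from the actual script:)

-- one history table of {num, chan} pairs, so note and channel
-- can never fall out of sync
local held = {}

local function note_on(num, chan)
  table.insert(held, { num = num, chan = chan })
end

local function note_off(num, chan)
  -- walk backwards so removals don't skip entries
  for i = #held, 1, -1 do
    if held[i].num == num and held[i].chan == chan then
      table.remove(held, i)
    end
  end
end

note_on(60, 1)
note_on(64, 2)
note_off(60, 1)
print(#held) --> 1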
additionally:
this process has resulted in a misunderstanding: all the standard script functions on norns are already defined (as no-op placeholders if nothing else) and the script is re-defining them. you can see this by poking around in the norns system lua, and you can inject your own functionality into some of these base definitions or subsequent wrappers (it's how mods work.)
the chatbot has led you to a wrong conclusion, with limited immediate consequence but creating an obstacle to deeper understanding, and fostering the concept of the software environment as something almost mystically abstruse and unknowable.
i don't understand this method of learning, and to me it doesn't actually look like "learning," but ultimately it's not my place to say if it works for you or not.
it is my place to decide whether i want to spend my time as an ML-validating device. and i don't. i'd rather talk to people or just write code for them. (because i enjoy designing, writing, and solving problems with code.)
there's another important point you made: you didn't get any help with your problem when posting on the passthrough page.
i get that, sounds frustrating. but forums are slow and often have low-priority engagement. i think the answer is putting more effort into live environments like discord and workshops. this is sort of a balancing act for developer time but i think we can do better than we have done.
like, i would like to do more regular workshops and engage on that level. but i'd also like to put more time into core engineering and into other projects. burnout is real. maybe things like sponsorship can help... and i'll leave that for another time/thread.
finally,
sign me up too. i'm not just riding an ideological high horse here - i am literally looking at where the chatGPT tools are today by experimenting with them, and they aren't in what i consider a helpful place at all, at least for the kind of problems i solve with code, and not AFAICT for education. (but i'll concede that i'm not really an educator.)
i do think that ML tools will soon make parts of my job obsolete. not LLM tools though. i'm thinking more of contexts where correctness is provable and the problem domain is well defined, like in Faust, which is a functionally pure DSP graph language.
it is possible to actually build an error function for this kind of thing, where the ML component can actually test its own program. to me that is a non-negotiable requirement for using ML in engineering. (and it's rather obvious, i mean we've actually been doing this kind of thing at some scale for a long time, it's just the chat stuff that has really broken out and spilled over into engineering now.)
also agreed. i'm choosing to speak to pragmatic issues primarily, but the other issues are really more important (i just think they can be better articulated by others and they also require some political common ground.)
I'm thinking about how I use this concept in my own professional software career, and in my scripting in my own time.
I use it so much, but not in a way an AI would be much help with.
In my first draft, my goal is to get it basically working, and in doing so to learn where my initial assumptions about the structure to build were wrong. I'm testing my own assumptions here, which the AI doesn't share. At some point it either works, or I know enough about why my assumptions were wrong. Either way, I will usually completely rewrite it. My assumptions are never right enough at the beginning for clean, well-structured code.
Then I often do it again. Usually on the third time it starts feeling roughly right.
ok, last reply on this for me. SO sorry you looked at the midi channel code lol. that specific one was my first try and really just curiosity, seeing if it was actually possible, so i did literally no human refinement on it. i did clock those areas where it looks insane, but wanted to present it anyway because i honestly was not expecting it to work at all (the same way bing works for everything else). i have since done the reading and seen 500 pieces of content doing similar things and probably wouldn't have bothered if i had seen them first.
but that in turn reminds me of the discourse on here about "general purpose computing", spaghetti code, "it's ok to make a mess if you are just doing it for your own stuff", etc, which i personally find very empowering. and i carry the awareness that i'm doing things "the wrong way" when i engage with LLMs for this.
of course the results are not going to be robust, maintainable, secure long term, will break unnecessarily in weird ways, etc. but i am not, nor am i aspiring to be, a software engineer.
so i personally see a lot of overlap between the conceptual framework of (what i view as) the monome ethos of demystifying and making code approachable etc, and the use of LLMs to assist you in that task. i understand ideologically they are completely different and incompatible. but i think it's interesting that we talk about the freedom of writing bad point-a-to-b code all the time in other threads, but when i talk about using a computer to help generate the bad code here, a bunch of people are talking to me about how my solutions aren't robust and i am robbing myself of a joy that can only be found by the deep understanding of a codebase, preferably under the watchful eye of a teacher.
like, i assume this is true (i trust you over bing lol). but even with you trying to explain this to me, i have no idea what this means, why it's different from what i said, or why it matters.
which is fine! this isn't a personal issue for me. like, i use personal examples because i feel like i have the "credibility" here to invite the debate instead of the initial reflexive dismissal. but for example, i wasn't frustrated by a slow forum response, i was guilty i had to ask at all - very trad only-child "must be self-sufficient" baggage in my head. for people with related baggage, i think many eventually will turn to whatever imperfect solutions are on the marketplace.
i don't know, but i strongly suspect that this is going to be more similar to the facebookification of the web than it will be to NFTs or whatever, because a) it's already backed by legacy institutional support in a way blockchain companies never really had until much much later in the game, and b) it's actually possible for normal people to use without them having to think about any of the horrible implications of using it. i think it's worth thinking about how we're going to engage with that as it happens, but i don't know what the answer is.
anyway thanks for reading along everyone and i will continue following the discussion with interest
cool, sorry about that and thanks for the feedback. i'll try again (possibly failing again)
like for example redraw is always defined, here:
[ norns/script.lua at main · monome/norns · GitHub ]
all these redraw and init and engine things are global variables that are defined before the script is run, and are expected to be re-defined as new functions.
i guess i just think knowing that is better than assuming that norns lua mysteriously works by different rules than other lua. when you see a global variable, you can assume it's defined somewhere, and if you search for its definition in the norns sources you can learn about how it's used and infer how it should be re-defined if needed.
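(for example - a minimal sketch, not from any particular script - re-defining the redraw global and then wrapping it, which is roughly the mechanism mods rely on:)

-- a script "defining" redraw is really re-defining an existing global
function redraw()
  screen.clear()
  screen.move(10, 10)
  screen.text("hello")
  screen.update()
end

-- and because it's just a global, something else (a mod, say)
-- can wrap the current definition after the fact
local old_redraw = redraw
function redraw()
  old_redraw()
  screen.move(10, 20)
  screen.text("injected")
  screen.update()
end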
i am sorry that i am not a better explainer of things. it probably isn't my strong suit. as it happens i am thinking about offering some formal workshops in the near future, but they will be "intermediate" level - i don't think i excel at introductions.
i mean... yeah, i feel seen. it's not a totally rational response. but i would be infinitely happier to engage with code that you wrote from scratch, no matter how weird or difficult it is:
- because we both have the same kind of skin in the game, i'm not wondering whether it took you 12 hours or 12 seconds to get to this point.
- because i can read that code and get some understanding of your thought process! the errors can reveal misconceptions or things that could be shown better in our docs, or whatever. whereas with the generated code i don't know which problems come from you (design/concept) and which come from the generator. at minimum i would have to pore over the history of your chat to get to the same level of understanding.
- because, as i've said, all the generated code i've seen / created with chatGPT has done weird stuff that a human wouldn't do. it's tiring even to look for those errors when there is no "train of thought" at all.
i mean, i know you see how this might be a crummy proposition so i won't belabor it more.
a little tangential to the topic, relevant to this sub-thread: my part in the chatGPT hoo-hah started from my responses to this post. i'm learning a lot from the resulting discussions, largely about how people are maybe turning to the chatbots because they are better explainers than the norns dev team. maybe this is not really resolvable because bots have unlimited free time and we don't, and that resource trumps all others. (yikes.)
but maybe we can also make some incremental changes in how we demonstrate stuff. as an experiment i'll take this "edward slicerhands" idea as a test case and start a dev diary thread on how i might go about approaching the implementation, from POC to wherever it goes.
This was a somewhat random late night impulse and I admit that I've not checked that the response is in any way accurate... but I'm once again stunned by the obvious fact that this model has been trained on all things public, and will present itself as capable of solving prompts for the teletype. Human input in quotes, bot responses in code blocks.
do you know what the Monome Teletype is and can you help me create scripts for it?
Yes, I'm familiar with the Monome Teletype! The Monome Teletype is a powerful scripting platform that allows you to control modular synthesizers and other devices using a text-based interface. I can definitely help you create scripts for it!
To get started, can you tell me a bit more about what you're trying to do with your Teletype? Are you trying to control a specific device or create a particular kind of sound? The more information you can give me, the better I'll be able to help you create the scripts you need.
I want a script that plays a new note for a pentatonic scale on each new trigger, please write this for me
Sure! Here's a script that will play a new note from a pentatonic scale each time it receives a trigger:
#PENTATONIC TRIGGERED NOTES
M 1
#List of Pentatonic Scale notes
I P.N 0 2 4 7 9
#Trigger Input
TR.P 1
#Script to play notes on output 1
SCRIPT 1:
L 1 5: CV 1 I P.N RAND 5: DEL 20
In this script, we use M 1 to set the script's metro to a default value of 1ms. I P.N creates a list of pentatonic scale notes. TR.P 1 waits for a trigger input on input 1. SCRIPT 1: defines a script that runs on output 1. L 1 5: sets up a loop that runs five times. CV 1 I P.N RAND 5: sets the CV output to a random note from the pentatonic scale list. DEL 20 sets a delay of 20ms between each note.
You can adjust the values to suit your needs. Let me know if you have any questions!
This is presented without any comment or criticism of the underlying paradigm... I honestly can't fully process any of that myself yet...