Just to add some context, Strudel is TidalCycles ported from Haskell to JS. IMO, Haskell is a much nicer language for this stuff. Hopefully, now that GHC can output WebAssembly, someone can build a web-based music programming environment around the original TidalCycles instead.
This is pretty incredible to watch. I initially thought she must be pulling some kind of trick to make it look so fluid, but the fact that she is making very small typos and correcting them as she goes makes it look very believable. This is really the first time I've watched someone use one of these tools and had it feel like a musician using a new kind of instrument.
If you go back to her older videos, she has something like a decade of experience messing around with modular synths, making live music that is actually listenable.
She is also a main developer on the strudel project. If you want to contribute, it is open source:
Yeah, I'm watching more. These are incredible. I really like how she describes what she's doing in tempo with the music as she does it. The description is basically part of the performance. Really unique and engaging approach.
She is a producer, not making anything innovative music-wise (she must have done similar things thousands of times), with long experience in live music, and she is a (the?) core dev of the tool she is using.
Honestly, I think the planning is at most a few minutes long (once she decides what she will go for); then she probably lets experience do the talking.
I've watched a couple of her videos; it's really inspiring and feels very cosy, like a slice of the Internet that lives on its own and creates without being too bothered about the Algorithm™.
The person in that video really has an ear for synthesis. I've spent quite some time watching all the strudel videos and this creator consistently shows the best skill across genres.
In order, the most popular ones of these are probably
* Max. It's built into a popular DAW, and is shockingly capable as an actual programming language too. The entire editor for the Haken line of products is written in Max.
* Pure Data or SuperCollider.
* Csound.
I'm not ranking things like Scala or LilyPond, which are much more domain-specific.
When I was first introduced to Max it was on a Mac SE in 1989, and I really only used it for saving & restoring patches (on my SY77 and U110) until someone walked me through how it really worked. I didn't understand what it could do, and I rejected it at first because it was too open-ended for me to see utility. Lol. How things changed after that.
What really blows my mind is that I wasn't at all put off by the tiny little Mac monitor; it just seemed normal. No way I could work with such a small b&w screen today, I'd go mad. (Weirdly, I feel less creative than I did in the 1980s, and NOW I have near-infinite recording & mixing options. The irony.)
Csound (I think v3) was the first music language I played with, back in the early 90s, under DOS even. Back then, running in real-time wasn't a thing: you'd generate a WAV file and play it after the program finished.
Later, at the end of the 90s, I remember playing with CLM/CM, in common lisp.
But the most productive experience was definitely SuperCollider. I can only recommend giving it a try. Its real-time sound synthesis architecture is great: it basically works by sending timestamped OSC messages ahead of time (usually ~0.2 s). It also has a very interesting way of building up so-called SynthDefs from code into a DAG. I always wondered if a modern rewrite of the same architecture using JIT/AOT technology would be useful. But I digress... SC3 is a great platform to play with sound synthesis. Give it a try if you find the time.
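To make the "timestamped OSC" part concrete, here is a minimal Python sketch of encoding an OSC message and a time-tagged bundle the way an SC client schedules events ahead of time. The `/play` address and its float arguments are made up for illustration; they are not scsynth's real command set.

```python
import struct
import time

NTP_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and Unix epoch (1970)

def osc_string(s):
    # OSC strings are null-terminated, then padded to a 4-byte boundary
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def osc_message(address, *floats):
    # address, then a type-tag string like ",ff", then big-endian float32 args
    tags = "," + "f" * len(floats)
    return (osc_string(address)
            + osc_string(tags)
            + b"".join(struct.pack(">f", f) for f in floats))

def osc_bundle(latency, *messages):
    # "#bundle" + 64-bit NTP time tag + size-prefixed messages;
    # the server holds each message until its time tag comes due
    t = time.time() + latency
    sec, frac = int(t) + NTP_OFFSET, int((t % 1.0) * (1 << 32))
    body = b"".join(struct.pack(">i", len(m)) + m for m in messages)
    return osc_string("#bundle") + struct.pack(">II", sec, frac) + body

# Schedule a hypothetical note 0.2 s ahead, as SC clients typically do
packet = osc_bundle(0.2, osc_message("/play", 440.0, 0.5))
```

Sending `packet` over UDP to the synthesis server is all the "language" side really has to do, which is what makes the architecture so pleasant to build front-ends for.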
I can vouch for the tutorial series from Eli Fieldsteel[0] for getting into SuperCollider and audio synthesis in general. If you were ever curious about how to bridge the gap between signal processing and music theory through mathematical operations, I think this is one of the best series out there.
‘Your own enjoyment’ is a rich reward. My unsolicited advice: try making a mess with it every day for a week / month / year and see if you don’t start to appreciate something in what you make. Orca is a brilliant piece of work.
Sonic Pi is missing imo. (Some have mentioned Strudel, it’s a similar live-coding music platform). Admittedly Ruby-based, but it seems some of the other ones on the list are libraries/forms of other langs too.
Sonic Pi is by far the most accessible way to play with these tools. It's designed to teach music and coding to kids and has great starter tutorials, and a ton of depth as well. Check it out!
I really hope that Max becomes fully accessible in a text-based format one day. It's so cool, and I've spent a few months here and there over the years building neat plugins for Ableton, but, for me, it would be so much stickier if it were code. Especially now with AI assistance: Claude can still be helpful, but it hallucinates a lot harder when trying to describe visual code.
You can get the LLM to output Max patches as JSON and copy-paste them directly into Max. It was pretty decent at it when I tried, and would probably be even better with relevant recent documentation in context.
Would love to see this, as someone who has been heavily using Live since 2006 and is finally getting into proper coding in middle-age. Having a way to augment Live in a text-based coding format would be greatly welcome.
While I'm not holding my breath, Ableton the company is transitioning into a steward-ownership model in which the stewards will have decision rights over the company. So I have hope that it will continue to grow in ways that are less affected by market considerations and that are a little more opinionated and specialized. Not to mention that Ableton owns Cycling '74 (creators of Max/MSP).
I had no idea! And I'm learning Javascript, so that's a nice coincidence.
I was deep into Max/MSP around 2010 and made a personal vow to leave it alone. The potential to reinvent the wheel and build tools instead of completing records was too much.
Now I'm in a more mature place, so I could see myself diving back into it eventually.
Most of the languages on the list have not been maintained in decades, and many are for systems that are functionally, if not completely, extinct. It is not really a list meant to guide you to a language to use; it is more of historical/academic interest.
Relevant to this discussion - my project Glicol (https://glicol.org) addresses this space. Currently working on a no_std rewrite, demo coming next year :)
I love seeing a Definition List (DL/DT/DD HTML tags) in the wild. Often more hassle than it's worth to make them appear the way you want, but semantically pleasing and underused.
Their structure in the markup can be a bit confusing imo - something more like a <figcaption> inside a <figure> or a <legend> inside a <fieldset> would be much nicer.
The spec even mentions [0] that you're allowed to use <div>s to group dt/dd pairs for styling purposes.
Combine it with a <details> and <summary> inside the <dd> and a little CSS checkbox toggle for JS-less "show all details"/"hide all details" and it's pure gold.
I haven't yet found a good way to implement filters (low-pass, high-pass, band-pass, etc.). It has no Fourier transform, so we cannot operate in the frequency domain. However, a moving average can suffice as a simple filter.
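To make the "moving average as a filter" point concrete, here is a small pure-Python sketch (not ClickHouse SQL) showing that an N-point moving average passes a low tone almost untouched while nearly cancelling a high one:

```python
import math

def moving_average(x, n):
    """N-point moving average: y[i] = mean(x[max(0, i-n+1) .. i]).
    This is a crude FIR low-pass filter."""
    out, acc = [], 0.0
    for i, v in enumerate(x):
        acc += v
        if i >= n:
            acc -= x[i - n]
        out.append(acc / min(i + 1, n))
    return out

SR = 8000  # sample rate in Hz

def sine(freq, n=1000, sr=SR):
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

# A 100 Hz tone survives an 8-point average almost intact; a 3000 Hz
# tone lands near a null of the filter and almost vanishes.
low  = moving_average(sine(100),  8)
high = moving_average(sine(3000), 8)
```

(The magnitude response of the N-point average is |sin(pi*f*N/sr) / (N*sin(pi*f/sr))|; 3000 Hz with N=8 at 8 kHz sits exactly on a zero, which is why the attenuation is so dramatic.)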
I wonder if there's a way to implement the FFT using subqueries and OVER/partitioning? That would create a lot of room for some interesting stuff to happen, specifically making it easy/possible to implement filters, compression, reverberation, and other kinds of effects.
Two other primitives that would be valuable to figure out:
1. How to implement FM/phase distortion. You can implement a whole universe of physical modeling if you get a basic six-operator sine-wave primitive right, with FM + envelopes.
2. Sampling/resampling. Given that ClickHouse should do quite well at storing raw sample data, being able to sample/resample opens up a whole world of wavetable synthesis algorithms, as well as vocal editing, etc.
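As a rough illustration of the first point, two-operator FM fits in a few lines of plain Python. The operator count, ratio, and envelope shape here are arbitrary choices for the sketch, not a claim about how it would be expressed in SQL:

```python
import math

def fm_note(freq, ratio, index, dur, sr=44100):
    """Two-operator FM: a sine modulator at freq*ratio modulates the
    phase of a sine carrier at freq; `index` (in radians) sets the
    modulation depth. A linear decay envelope scales both amplitude
    and index, so the tone starts bright and mellows out."""
    n = int(dur * sr)
    out = []
    for i in range(n):
        t = i / sr
        env = 1.0 - i / n  # linear decay from 1 to 0
        mod = math.sin(2 * math.pi * freq * ratio * t)
        out.append(env * math.sin(2 * math.pi * freq * t + index * env * mod))
    return out

note = fm_note(220.0, 2.0, 3.0, 0.1)  # 0.1 s of a bright A3 "pluck"
```

Chaining six such operators in different routings (plus per-operator envelopes) is essentially the DX7 architecture, which is why getting this one primitive right buys you so much.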
Honestly, although the repo's approach is basic, I think the overall approach is wonderful and have wanted to be able to use SQL to write music for a while. I've spent a lot of time writing music in trackers, and being able to use SQL feels like it would be one of the few things that could spiritually be a successor to it. I've looked at other live coding languages, many of which are well built and have been used by talented people to make good music (such as Tidal, Strudel, etc). But all of it seems like a step down in language from SQL. I'd rather have their capabilities accessible from SQL than have to use another language and its limitations just to get those capabilities.
Food for thought -- thanks for the interesting and thoughtful work!
Very creative guy operating this site (look at this! https://timthompson.com/spacepalette/), though it looks like it's been idle for the past 4 years or so? The live-coding community will point you to the fruits of related projects like TidalCycles and Strudel. A strong, inviting community: https://club.tidalcycles.org/
There were a bunch of interesting aspects to this project. One of my favorite things was developing the user programming model. Organizing your music using functions is very powerful.
Sonic Pi is SuperCollider, but using Ruby instead of the default sclang language. Overtone is similar (and possibly originally by the same developer, iirc?) but using Clojure, and is also missing from the list.
Yeah, those are some glaring omissions - not including Sam Aaron's work makes me distrust the whole list. Sonic Pi is fundamental for teaching kids music and programming, and Overtone is just mind-blowing - I watched people DJing music while evaling things in Emacs; it looked sick.
Yesterday I used Claude Code to define and implement a YAML-based DSL for playing backing tracks. I can ask an LLM to generate this DSL for any well-known song, and it will include chord progression, lyrics, bass, drums, strumming pattern, etc. It's a Go command-line tool that plays the DSL via MIDI and displays the chords, strumming patterns, and lyrics. It also exports to Strudel.
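A backing-track DSL like that might look something like the following. The schema here is entirely made up for illustration; the actual tool's format may differ:

```yaml
title: Example Jam          # all field names hypothetical
tempo: 96
time_signature: 4/4
key: G
sections:
  - name: verse
    chords: [G, D, Am, Am]  # one chord per bar
    strumming: "D DU UDU"   # D = downstroke, U = upstroke
    bass: root-fifth
    drums: basic_rock
    lyrics: |
      La la la, something here
  - name: chorus
    chords: [G, D, C, C]
    strumming: "D DU UDU"
```

The appeal is that every layer (chords, groove, lyrics) lives in one plain-text file an LLM can generate or a human can tweak by hand.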
The problem I see is: people are not going to use a project that is AI-generated for long, really, unless they do it just for a one-off task. I'd like to constantly generate new music. I also have ideas based on existing music, so I want to adjust this, but do so programmatically, and that seems ... hard.
Not a big commitment from a user, and nothing lost if it doesn't work as hoped.
I'm just positively surprised how quickly you can create a prototype for these sorts of ideas with Claude Code. This is literally just a few hours of vibe-coding.
Depending on the source music, there are many aspects of this that normally require a license with a records company or some proxy. Especially the lyrics part. Be careful not to get into very expensive trouble. Just because the LLM can do it, doesn’t mean it’s ok to do it.
Yes, I noticed that Claude Code silently refused to generate lyrics for some songs I requested. A benefit of this approach is that anybody can quickly generate a YAML file for a backing track; no need to share it anywhere.
I think the problem is that the artist doesn't get anything with this approach. If you really want to use someone else's music/artworks/lyrics, just buy it.
It's not like this is very unique; YouTube has tons of training and backing-track videos, which is what I typically use. And artists don't sell their music in a way that can easily be consumed for guitar practice.
In the comments, I saw a reference to MML (Music Macro Language ... not exactly what I think the MML on the list is). Here's the one referenced in the HN post.
Musicabc has some really nice JS and Obsidian plugins that essentially allow you to create little scrapbooks of musical ideas in markdown that are also playable as sound and viewable as sheet music.
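For anyone who hasn't seen ABC notation, the snippets you'd drop into such a markdown note are tiny. This is a complete, renderable tune (the melody itself is made up):

```abc
X:1
T:Scrapbook Sketch
M:4/4
L:1/8
Q:1/4=120
K:G
D2 | G2 B2 d2 B2 | c2 A2 F2 A2 | G2 B2 d2 g2 | f4 z2 :|
```

A few header lines (index, title, meter, default note length, tempo, key) and then the notes, with letters for pitch and numbers for duration; plugins render that as both sheet music and audio.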
I kind of want to create music programmatically, but so far it has been way too difficult. I can also barely find anything useful via old-school Google search anymore. I am almost as stuck here as with MIDI ...
I'm curious what you did with it? I spent a little time with ChucK with the Oxford Laptop Orchestra (as was) which was an offshoot from the Princeton one. I was there as a technologist, not a musician. Always had a soft spot but never found myself using it again.
I mostly use it for learning things. How does this guitar pedal effect work? Why does this Eurorack module sound so good? How can I drive this MIDI instrument from this OSC controller? etc.
Ideally there would be an easy path from ChucK to implementing all of these things in hardware but I haven't quite got there yet.
I was transcribing some songs for violin after picking it back up (mostly metal, which I have to take some liberties with to make sound good on a violin + kick drum :> ), and thought about writing a language (maybe a Rust Steel module) to handle the typesetting for me, as writing out & erasing e.g. slurs can take a while. But LilyPond really is good enough that there wasn't much about it I'd want to change, either syntactically or semantically (really, I only need a very small subset of it). Any language I do write, if I choose to, would probably use it as a backend: its rendering is very good :)
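For context on the slur point: in LilyPond a slur is just a parenthesis attached to the notes it spans, which is part of why the syntax is hard to improve on. A minimal sketch (notes invented):

```lilypond
\version "2.24.2"
\relative c'' {
  \key e \minor
  \time 4/4
  % "(" opens a slur after its note, ")" closes it after the last one
  e8( fis g fis) e4( b) |
  g'2( fis4 e) |
}
```

Compared with erasing and redrawing pencil arcs on paper, moving a slur is editing two characters.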
Have you figured out a good tool flow for going from music to transcription?
I've used ai.splitter to generate stems, but need to go and identify tones and notes before plotting on to a sheet of music. I'm looking at doing this as a beginning cello student.
To be honest, I've been playing violin for a number of years, and my strategy is still to listen to a part of the song, rewind until I can play it (even if slowly), then write that down. Some of the pieces I want to write down are twin-guitar pieces, where I generally need to choose the melodic guitar over the harmonic one. I haven't found AI good at that, but, thinking about it now, I haven't tried it in years, so it may have gotten good enough? Sorry for the lack of much insight, lol. (For metal, finding tabs online can at least help with the rhythm, so I just need to try and transcribe the notes & flourishes.)
And at least 5 times a year someone designs a new one where it is painfully obvious that they're almost entirely unaware that anyone has ever designed one before - or if you're very lucky, maybe they've heard of ABC.
There is also literate programming for music, right? Just like Donald Knuth describes it in his literate programming approach? See for example the videos by Fauci etc. They say things like "eh eh, pause", then play music using items such as a pen; there is even a conductor. Very entertaining. Is that true? Or just my imagination?
A few months ago I outlined a spec for a new modern programming language inspired by LilyPond I call Capo. I haven’t done anything with it yet but the idea is that it compiles to MNX, which is the (still in development) successor to MusicXML, becoming a language that could be used as a scripting language in any program that supports MNX or as a standalone text-based music tool. Thought this group might find it interesting: https://github.com/Capo-Lang/capo
https://youtu.be/aPsq5nqvhxg