Oh yay, just what we need, ANOTHER one of these. Did the YA in YAML not already give a clue? Now there are 15 competing standards. Please make it stop. S-expressions were all that we wanted in the first place.
The author of this language seems to have responded with AI-generated arguments to all of the questions linked in the FAQ section. This does not inspire much confidence in the design of the language.
Really, I do not see the point of this. These configuration languages are just different syntaxes for expressing the same fundamental data, bearing the same semantics. It would be much more interesting to see a language which experiments with what is fundamentally representable, for example the way the Nix language supports functional programming and has functions as a first-class data type.
> These configuration languages are just different syntaxes for expressing the same fundamental data, bearing the same semantics.
This is my complaint too. They do, however, add a proper integer type, which, as far as I can tell, is the only change they make to the data model.
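For context on why a real integer type matters: parsers that decode every JSON number as an IEEE-754 double silently lose precision above 2^53. A quick Python illustration:

```python
# JSON has a single "number" type; many parsers decode it as a
# 64-bit float, which cannot represent every integer above 2**53.
big = 9007199254740993          # 2**53 + 1
as_double = float(big)          # what a double-only parser would store

assert as_double == 9007199254740992.0   # silently rounded down to 2**53
assert int(as_double) != big             # the round trip loses the integer
```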
> It would be much more interesting to see a language which experiments with what is fundamentally representable
DER (and TER, a text format I made up that compiles into DER; TER is not really intended to be used directly in application programs, but it does have comments, hexadecimal numeric literals, and other syntax features) supports many more data types, such as arbitrarily long integers, ASCII, ISO 2022, etc. My own extension to the format adds some additional types, such as a key/value list type and a TRON string type; the key/value list type is the only nonstandard ASN.1 type needed (together with a few of the standard ASN.1 types: sequence, real, UTF-8 string, null, boolean) to represent the same data as JSON does.
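For a taste of what DER's primitive encodings look like, here is a toy helper of my own (not the commenter's code) that DER-encodes a non-negative INTEGER: tag 0x02, a short-form length octet, then the minimal big-endian content bytes, with a leading zero octet when the high bit is set:

```python
def der_encode_integer(n: int) -> bytes:
    """DER-encode a non-negative INTEGER (short-form lengths only)."""
    if n < 0:
        raise ValueError("this sketch handles non-negative integers only")
    body = n.to_bytes(max(1, (n.bit_length() + 7) // 8), "big")
    if body[0] & 0x80:          # a set high bit would read as negative,
        body = b"\x00" + body   # so pad with a leading zero octet
    if len(body) > 127:
        raise ValueError("long-form lengths not implemented in this sketch")
    return bytes([0x02, len(body)]) + body

# 0x02 = INTEGER tag, then a length octet, then the content octets
assert der_encode_integer(5)   == b"\x02\x01\x05"
assert der_encode_integer(128) == b"\x02\x02\x00\x80"
```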
> for example like how the Nix language supports functional programming and has functions as a first-class data type.
For some applications this is useful and good but in others it is undesirable, I think.
> It would be much more interesting to see a language which experiments with what is fundamentally representable
You might check out my project, Confetti [1]. I conceived of it as Unix configuration files with the flexibility of S-expressions. I think the examples page on the website shows interesting use cases. It doesn't have a formal execution model, however; for that you might check out Tcl or Lua.
> AI-generated arguments in response to all questions
There are currently two items in the FAQ. While the first one seems to be formatted with AI (I don't know if the arguments are AI-generated though; how do you tell?), the other certainly doesn't look AI-generated: https://github.com/maml-dev/maml/issues/3#issuecomment-33559...
The person who opened the issue specifically complained about getting an AI-generated reply before closing it. If you view the edit history for the message, or the language author's second response, you will see that the reply was afterward edited to not be so transparently sloppy.
This is basically JSON for humans. YAML is harder to use due to significant indentation (easy to mess up in editors, and hard to identify the context), and TOML isn't great for hierarchical data.
It addresses all my complaints about JSON:
> Comments
> Multiline strings
> Optional commas
> Optional key quotes
I wish it was a superset of JSON (so, a valid JSON would also be valid MAML), but it doesn't seem to be the case.
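To illustrate, the four fixes listed above might combine into something like this (an illustrative sketch based only on the feature list; check the MAML spec for the exact syntax):

```
{
  # comments are allowed
  name: "my-app"        # keys don't need quotes
  version: 2            # no trailing comma required
  description: """
    Multiline strings
    without escape gymnastics.
    """
}
```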
EDIT: As I understand, HCL is very similar in terms of goals, and has been around for a while. It looks great too. https://github.com/hashicorp/hcl/
My experience is different: TOML isn't obvious if there's an array that's far from the leaf data. Maybe that's what you experienced with the hierarchical data?
In my usage of it (where we use base and override config layers), arrays are the enemy. Overrides can only delete the array, not merge data in. TOML merely makes this code smell more smelly, so it's perfect for us.
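A minimal sketch of that base/override behavior, assuming the usual rule that dicts merge recursively while everything else, arrays included, is replaced wholesale:

```python
def merge(base: dict, override: dict) -> dict:
    """Recursively merge dicts; any non-dict value (arrays included)
    in the override replaces the base value wholesale."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

base     = {"service": {"ports": [80, 443], "debug": False}}
override = {"service": {"ports": [8080]}}

# The override's array wins outright; [80, 443] is gone, not extended.
assert merge(base, override) == {"service": {"ports": [8080], "debug": False}}
```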
They fix some of the problems with JSON's syntax but do not fix most of the problems with the data model; the only thing they do fix is that there is now an integer type. It still has the other problems, e.g. it still uses Unicode and still requires keys to be strings. For a configuration language, it can also be useful to support application-specific data.
Why are they optional? Why not just make quotes mandatory, so I don't need to guess which characters require them?
Edit:
What most languages also lack: semantics on de-serialization. In the best case, I want to preserve formatting and stuff when the config is changed/re-committed programmatically.
I believe every ambitious programmer makes a configuration language at some point, but most keep it to themselves.
When I was a teen I made something called Nabla:
* XML-like syntax
* Schema language
* Compact binary representation
* Trivial parser for binary representation
* Optionally, simple dynamic programming language on top
Initially made it for my 3d engine scene serialization format, but then used everywhere some non-trivial data format was needed (e.g. anything with nested data structures).
I wouldn't say it's the most readable. Values can be ambiguous, YAML anchors are powerful but complicated, and using indentation to define structure means that you're never quite sure which node something belongs to. And good luck hunting down weird errors if you mistakenly screw up the indentation.
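A couple of the classic value ambiguities (YAML 1.1 behavior; YAML 1.2 fixed some of these, but many parsers still default to 1.1 rules):

```yaml
country: NO       # parsed as the boolean false, not the string "NO"
version: 1.10     # parsed as the float 1.1, not the string "1.10"
```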
YAML is also often abused as a DSL and for very large documents (Ansible, k8s, GH Actions, etc.), which makes it a pain to work with.
It's not so much that liking all of this is controversial. It's just a bad opinion. :p
I love that in the end, everything still comes down to using bash (and env vars), because for all its footguns and strings, it's still the most reasonable choice when giving up on the zoo of newer formats. I expect it to outlive us all, like our unergonomic keyboards, and having to deal with null values.
Am I missing something or is this literally just Ruby? Like - it doesn’t list Ruby as a supported language, but, it also looks like fully executable Ruby code?
When this was first posted a couple of weeks ago by the spec's author, I took it as an opportunity to see how quickly I could spin up an IntelliJ language plugin since the last time I worked on a language plugin was pre-GPT (Klotho Annotations - basically TOML inside of @annotations inside comments or string literals in a variety of host languages). Back then, it took a week for me to figure out the ins and outs of basic syntax highlighting with GrammarKit.
This time around, I worked with Claude Code and we basically filled in each other's knowledge gaps to finish implementing every feature I was looking for in about 3 days of work:
Day 1:
- Plugin initialization
- Syntax highlighting
- JSON Schema integration
- Error inspections
Day 2:
- Code formatter (the code style settings page probably took longer to get right than the formatter)
- Test suite for existing features
Day 3:
- Intentions, QuickFix actions, etc. to help quickly reformat or fix issues detected in the file
- More graceful parsing error recovery and reporting
- Contextual completions (e.g., relevant keys/values from a JSON schema, existing keys from elsewhere in the file, etc.)
- Color picker gutter icon from string values that represent colors (in various formats)
I'm sure there are a few other features that I'm forgetting, but at the end of the day, roughly 80-85% of the code was generated from the command line by conversing with Claude Code (Sonnet 4.5) to plan, implement, test, and revise individual features.
For IntelliJ plugins, the SDK docs tend to cover the bare minimum to get common functionality working, and beyond that, the way to learn is by reading the source of existing OSS plugins. Claude was shockingly good at finding extension points for features I'd never implemented before and figuring out how to wire them up (though not always 100% successfully). It turns out that Claude can be quite an accelerator for building plugins for the JetBrains ecosystem.
Bottom line, if you're sitting on an idea for a plugin because you thought it might take too long to bootstrap and figure out all the IDE integration parts, there's never been a better time to just go for it.
Working with Terraform, and needing to handle the complexity of our per-client deployments at work, I ended up creating a bash layer that takes N number of JSON files, performs a deep merge, and spits out a .tfvars file.
As you’ve said, all I did was fork a JSON.stringify function and swap colon for equals.
Anyone have a better solution they’ve worked with?
Edit: Why the downvotes? Terraform is using HCL? Are we talking a different HCL here?
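(For reference, the deep-merge layer described above boils down to something like this Python sketch; the filenames and helper names are mine, and later files win on conflicts:)

```python
import json
from functools import reduce

def deep_merge(a, b):
    """Dicts merge recursively; any other value (arrays too) from b wins."""
    if isinstance(a, dict) and isinstance(b, dict):
        merged = dict(a)
        for key, value in b.items():
            merged[key] = deep_merge(a[key], value) if key in a else value
        return merged
    return b

def merge_files(paths):
    """Read N JSON files in order and deep-merge them, later files winning."""
    docs = []
    for path in paths:
        with open(path) as handle:
            docs.append(json.load(handle))
    return reduce(deep_merge, docs, {})

# e.g. write merge_files(["base.json", "client.json"]) out as
# merged.auto.tfvars.json and let Terraform pick it up
```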
You can directly pass JSON to Terraform by putting it in a `.tfvars.json` file [1], as far as I can tell this has been supported since v0.5.0 which released in May 2015.
Terraform doesn't have a built-in deep merge function, but it will merge `.tfvars.json` files in the order given on the CLI, if you specify multiple `-var-file` arguments. For what it's worth, as of Terraform 1.8, you can also use functions from third-party providers like isometry/deepmerge [2] to perform a deep merge.