This is an interesting service. The audio quality seems similar to Suno 4.5, and the "creativity" and "musicality" are pretty good as well. Here are two tracks I was able to generate (downloading was only possible via the network monitor, though):

- http://rochus-keller.ch/Diverses/oaimusicgen.com_second_atte...

- http://rochus-keller.ch/Diverses/oaimusicgen.com_first_attem...

There are indications that this is a wrapper service for AIMusicGen.ai, and that it is operated in China. Anyway, I think the results are promising. You can compare the songs with the ones I generated on Suno 5 with similar prompts: https://rochus-keller.ch/?p=1428.


The major Swiss media outlets (SRF, Tages-Anzeiger, Blick, Watson) have reported on this, but mostly under headlines that classify Baud as a propagator of propaganda or conspiracy theories (e.g., SRF: “Russia's mouthpiece,” Blick: “Between censorship and Putin propaganda”).

I would at least have expected a critical examination of the legality of the sanctions; that appeared almost exclusively in the Weltwoche (Köppel interview) or in letters to the editor. The mainstream media largely adopted the EU's wording. That's creepy.

The Swiss Federal Council would do well to show a little more backbone in the face of this Kafkaesque European bureaucratic autocracy. Baud makes it clear in the interview that none of the accusations made by these bureaucrats are true. I am not aware of any evidence having been presented by them. So I would expect the media to at least give both sides sufficient consideration.

We remember well that in the case of the American activist Charlie Kirk (whose death in September 2025 and the subsequent refusal of the EU Parliament to grant him a minute's silence caused outrage), labels such as "radical" or "disinformer" were likewise often used instead of debating specific statements. The pattern appears to be the same: quotes are taken out of context to disparage the person, who is then classified as a "security risk" and removed from the public sphere (in Baud's case through account suspension and entry bans, in Kirk's case through platform bans or political ostracism).


Great that this lecture (series) was recorded and we are able to watch it today! Thanks for sharing.

That's an interesting presentation and a great project. I wasn't aware how many distinctive design concepts are present in this database. That's impressive. Hope I will find time in the new year to use it in a project.

Do you think that "most performance issues" in Python are solved?

e.g. people who use exceptions and don't use destructors?

> because it is using the Boehm GC

For what reason? Mono has had a pretty good precise GC for many years.


Yes, SGen should be a lot better, but Unity cannot use it because they hold and pass raw pointers around everywhere. That's fine for Boehm but not possible with SGen. They're already working on fixing this, but it's not clear why they aren't planning a move to a better GC.
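
For illustration, here's roughly the kind of explicit pinning a precise/moving GC expects before native code can hold a raw pointer (my own minimal sketch, not Unity's actual interop code); Boehm's conservative, non-moving collector lets you skip all of this, which is presumably part of why the migration hurts:

    using System;
    using System.Runtime.InteropServices;

    class PinningSketch
    {
        static void Main()
        {
            byte[] buffer = new byte[256];

            // A moving GC (SGen, CoreCLR's GC) may relocate 'buffer' at any time,
            // so a raw pointer must be obtained through an explicit pin.
            GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
            try
            {
                IntPtr raw = handle.AddrOfPinnedObject();
                // ... pass 'raw' to native code; it is only valid while pinned ...
                Console.WriteLine(raw);
            }
            finally
            {
                handle.Free(); // unpin so the GC may move/collect the object again
            }
        }
    }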

Well, if they port to .NET (CoreCLR), that will move them to the MS GC.

Yes, but it also puts them in an awkward situation! They recommend (or even require, for some platforms) using IL2CPP for release builds, which will still use the Boehm GC and not run as quickly as CoreCLR.

Do they still need IL2CPP if they have AOT? The goal was always to be able to produce cross-platform native binaries, right?

In theory yes, IL2CPP doesn't need to exist with modern .NET AOT support. In practice, per quotes in the article, Unity may have a bit of a sunk-cost issue and has no plans to support .NET AOT, only IL2CPP.

Some of that sunk cost may be the above-mentioned pointer issue and the lack of concrete plans for a smarter FFI interface between C++ and C#.


Unfortunately they do still need IL2CPP because Unity took a different direction than .NET: most reflection still works with IL2CPP but does not with .NET AOT. Switching would be a huge breaking change for everyone, including Unity.

Platform support is also still better with IL2CPP but .NET is catching up.


That's interesting. I made measurements with Mono and CoreCLR some years ago, but only with a single thread, and I came to the conclusion that their performance was essentially the same (see https://rochus.hashnode.dev/is-the-mono-clr-really-slower-th...). Can someone explain what benchmarks were actually used? Was it just the "Simple benchmark code" in listing 1?

I think a lot of the devil is in the details, especially when we look at .NET 8/.NET 10 and the various other 'boosts' they have added to the generated code.

But also, as far as this article goes, it's describing a more specific use case that is fairly 'real world': reading a file (I/O), doing some form of deserialization (likely with a library, unless the format is proprietary), and whatever 'generating a map' means.

Again, this all feels pretty realistic for a use case, so it's good food for thought.

> Can someone explain what benchmarks were actually used?

This honestly would be useful in the article itself, as well as, per the above, some 'deep dives' into where the performance issues were. Was it in file I/O (possibly interop-related)? Was it due to some pattern in the serialization library? Was it the object allocation pattern (when I think of C# code friendly to Mono, I think of Cysharp libraries, which sometimes do curious things)? Not diving deeper into the profiling doesn't help anyone know where the focus needs to be (unless it's a more general thing, in which case I'd hope for a better deep dive on that aspect).
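
For example, something like a BenchmarkDotNet harness that isolates the phases would already tell us a lot. Everything below is hypothetical (the file name, the 'deserialize' stand-in, the class names); it's just a sketch of the kind of breakdown I'd like to see:

    using System;
    using System.IO;
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    // Hypothetical stand-ins for the article's "load file + deserialize + generate map" path.
    [MemoryDiagnoser]
    public class MapLoadBenchmarks
    {
        private byte[] _raw = Array.Empty<byte>();

        [GlobalSetup]
        public void Setup() => _raw = File.ReadAllBytes("map.dat"); // hypothetical input file

        [Benchmark]
        public byte[] ReadFile() => File.ReadAllBytes("map.dat");   // isolates the raw I/O cost

        [Benchmark]
        public int Deserialize()                                    // isolates the parsing pass
        {
            int records = 0;
            foreach (byte b in _raw)
                if (b == (byte)'\n') records++;                     // placeholder for the real format
            return records;
        }
    }

    public static class Program
    {
        public static void Main() => BenchmarkRunner.Run<MapLoadBenchmarks>();
    }

Running something like that once under Mono and once under CoreCLR would immediately show which phase the gap actually comes from.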

Edited to add:

Reading your article again, I wonder whether your compiler is just not doing the right things to take advantage of the performance boosts available via CoreCLR?

E.x. can you do things like stackalloc temp buffers to avoid allocation, and does the stdlib do those things where it is advantageous?

Also, I know I vaguely hit on this above, but I'm also wondering whether the generated IL is just 'not hitting the pattern': a lot of CoreCLR's best magic only kicks in if things are arranged a specific way in the IL, based on how Roslyn outputs it, and even for the 'expected' C# case, deviations can break the optimization.


The goal of my compiler is not to get maximum performance out of either CoreCLR or Mono. Just look at it as a random compiler which is not C#, and especially not MS's C#, which is highly in sync with and optimized for specific features of the CoreCLR engine (features which might appear in a future ECMA-335 standard). So the test essentially was to see what both CoreCLR and Mono do with non-optimized CIL generated by a compiler that is not their own. This is a legitimate test case, because ECMA-335 and its compatible CLRs were built exactly for this use case. Yes, the CIL output of my compiler could be improved much further, and I could even get more performance out of e.g. CoreCLR by using engine-specific knowledge (which you cannot find in the standard) that the MS C# compiler also uses. But that was not my goal. Both engines got the same CIL code, and I just measured how fast it ran on each engine on the same machine.

> Reading your article again, I wonder whether your compiler is just not doing the right things to take advantage of the performance boosts available via CoreCLR?

> E.g., can you do things like stackalloc temp buffers to avoid allocation, and does the stdlib do those things where it is advantageous?

The C# standard lib (often called the base class library or BCL) has seen a ton of Span<T>/Memory<T>/stackalloc internal usage adoption in .NET 6+, with each release adding more of it. Things like file I/O and serialization/deserialization in particular see a lot of notable performance improvements just from upgrading each .NET version; .NET 10 is faster than .NET 9 with a lot of the same code, and so forth.
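
A minimal sketch of the pattern meant here (my own illustrative code, not actual BCL internals): formatting into a stack buffer instead of allocating intermediate strings or arrays:

    using System;

    static class SpanSketch
    {
        // Builds "x,y" with no heap allocation except the final string;
        // the kind of trick the BCL now applies internally in many hot paths.
        public static string FormatPoint(int x, int y)
        {
            Span<char> buffer = stackalloc char[24];     // small temp buffer on the stack
            int pos = 0;

            x.TryFormat(buffer.Slice(pos), out int written);
            pos += written;
            buffer[pos++] = ',';
            y.TryFormat(buffer.Slice(pos), out written);
            pos += written;

            return new string(buffer.Slice(0, pos));     // single allocation for the result
        }

        static void Main() => Console.WriteLine(FormatPoint(12, -7));
    }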

Mono still benefits from some of these BCL improvements (as more of the BCL is shared than not these days, and Blazor WASM for the moment is still more Mono than CoreCLR so some investment has continued), but not all of them and not always in the same ways.


> The C# standard lib (often called the base class library or BCL) has seen a ton of Span<T>/Memory<T>/stackalloc internal usage adoption in .NET 6+, with each release adding more of it. Things like file I/O and serialization/deserialization in particular see a lot of notable performance improvements just from upgrading each .NET version; .NET 10 is faster than .NET 9 with a lot of the same code, and so forth.

I worded my reply poorly; mostly I meant 'if Oberon has its own stdlib, is it doing the modern, performant practice?'


What's going on with the Mandelbrot result in that post?

I don't believe such a large regression from .NET Framework to CoreCLR.


NGL, it would be nice if there were a clear link to the benchmark cases used, both for OP and for the person you're replying to... Kinda get it in OP's case tho.

I measured the raw horsepower of the JIT engine itself, not the speed of the standard library (BCL). My results show that the Mono engine is surprisingly capable when executing pure IL code, and that much of the 'slowness' people attribute to Mono actually comes from the libraries, not the runtime itself.

In contrast, the posted article uses a very specific, non-standard, and "apples-to-oranges" benchmark. It is essentially comparing a complete game-engine initialization against a minimal console app (as far as I understand), which explains the massive 3x-15x differences reported. The author is actually measuring "Unity engine overhead + Mono vs. raw .NET", not "Mono vs. .NET" as advertised. The "15x" figure very likely comes from the specific microbenchmark (a struct-heavy loop) where Mono's optimizer fails, extrapolated to imply that the whole runtime is that much slower.
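
For reference, the kind of struct-heavy loop meant here looks roughly like the sketch below (my own code, not the article's); it is exactly the shape of code where results depend on how well the JIT keeps short-lived structs in registers, so the two engines can diverge a lot:

    using System;
    using System.Diagnostics;

    struct Vec3
    {
        public double X, Y, Z;
        public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }

        public static Vec3 operator +(Vec3 a, Vec3 b) => new Vec3(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
        public static Vec3 operator *(Vec3 a, double s) => new Vec3(a.X * s, a.Y * s, a.Z * s);
    }

    static class StructLoop
    {
        static void Main()
        {
            var sw = Stopwatch.StartNew();

            // Tight loop that creates and combines many short-lived structs;
            // whether the JIT keeps them in registers dominates the result.
            Vec3 acc = default;
            for (int i = 0; i < 50_000_000; i++)
                acc = acc + new Vec3(i, i * 0.5, i * 0.25) * 0.001;

            sw.Stop();
            Console.WriteLine($"{sw.ElapsedMilliseconds} ms, checksum {acc.X:F2}");
        }
    }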


Can we reproduce your results for Mandelbrot?

You can find all necessary information/data in the article (see the references). Finding the same hardware that I used might be an issue though. Concerning Mandelbrot, I wouldn't spend too much time on it, because the runtime was so short for some targets that it likely has a big error margin compared to the other results. For my purpose this is not critical because of the geometric mean over all factors.
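
To illustrate why a single unreliable factor matters little for the aggregate (the numbers below are made up, just to show the effect of the geometric mean):

    using System;
    using System.Linq;

    static class GeoMean
    {
        static void Main()
        {
            // Made-up per-benchmark speedup factors; the single outlier (7.0)
            // barely moves the geometric mean, unlike the arithmetic mean (2.3).
            double[] factors = { 1.1, 1.3, 0.9, 1.2, 7.0 };

            double geo = Math.Exp(factors.Average(f => Math.Log(f)));
            Console.WriteLine($"geometric mean = {geo:F2}");   // prints ~1.61
        }
    }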

I think we are trying to find something like 'can we pull this branch/commit/etc and build it to reproduce'.

The Mono and .NET 4 times were too short; the true time is unknown. I only left the Mandelbrot result in because I got a decent-looking figure for CoreCLR, but the actual factor relative to Mono is unreliable. Even if the Mono result were 1, the factor would still be seven. I have no idea why it is that much faster.

I think the “some years ago” is pretty relevant.

.NET has heavily invested in performance. If I understand your article correctly, you tested .NET 5 which will be much slower at this point than .NET 10 is.

I also think it matters what you mean by "Mono". Mono, the original stand-alone project, has not seen meaningful updates in many years. Mono is also one of the two runtimes in the currently shipping .NET, though, and I suspect this runtime has received a lot of love that may not have flowed back to the original Mono project.


(2007)

> I hate this language with the intensity of a thousand suns

Interesting. What's the main reason? Do you already have representative experience with Typst? Does it really solve the issues you perceived with LaTeX? Are you satisfied with the typographic quality of Typst? (Why I'm interested: https://github.com/rochus-keller/typos/)



Thanks for the hints. The projects have different goals. My goal is to use a stand-alone luatex engine and integrate it with my new typesetting language, completely replacing TeX. MMTeX still uses TeX, but makes an easier-to-install package using the OpTeX format; that's a very good approach for people who want to use OpTeX and don't care about the whole TeX Live machinery. Speedata is closest to what I intend, but their language is XML-based and optimized for catalogues, and they depend on a pre-existing luatex installation (so they could use mine instead). LuaMetaTeX is the successor of LuaTeX, which essentially moves more of the implementation to Lua. To my surprise, I actually haven't found a pre-existing project so far which shares my goals.

Have you considered using ConTeXt's CLD [0]? It still uses TeX underneath, but that's pretty well insulated from the end user. Here are some random examples that I've written [1] [2] [3].

[0]: https://www.pragma-ade.nl/general/manuals/cld-mkiv.pdf

[1]: https://github.com/gucci-on-fleek/unnamed-emoji/blob/master/...

[2]: https://github.com/TikZlings/Extravaganza2025/blob/main/max/...

[3]: https://tex.stackexchange.com/a/715598/270600


I've been successfully using ConTeXt in a radar data evaluation and reporting project for more than ten years, where a C++-based generator reads data from binary Matlab files and combines/formats them into perfectly laid-out reports automatically. It's an impressive technology, and if the users don't mind having an extra ConTeXt installation on their system (which grows pretty large), the solution is perfect.

For my present project, I would like to start "from first principles" and not just add layers on top of things I don't fully understand. Thanks for the CLD document; I read it when I started the mentioned project. I have no doubt from my own experience that the typesetting quality is excellent. But I consider using Lua for such large-scale achievements a mistake for architectural, maintainability, and eventually performance reasons. The "flexibility" of this approach comes at a high price.


> I'm successfully using ConTeXt […]

Ah, nice! ConTeXt certainly isn't the solution to every problem, but it solves lots of the common complaints about LaTeX, and most people have never heard about it, so I always try and advertise it when I see a problem that seems like a good fit.

> if the users don't mind having an extra ConTeXt installation on their system (which grows pretty large)

> For my present project, I would like to start "from first principles"

> But I consider using Lua for such large-scale achievements a mistake for architectural, maintainability, and eventually performance reasons.

Well I personally disagree regarding Lua :), but all your points are very fair. Most people who say "I'm going to write my own typesetting system" severely underestimate the difficulty, but it sounds like you have a pretty good handle on the situation, so I'm rather hopeful that you'll accomplish your goal.

Once you're able to typeset something, we'd gladly accept a progress report at TUGboat [0], since this is a topic that lots of people are interested in but very few have accomplished.

[0]: https://tug.org/TUGboat/


Concerning Lua: there have been approaches for years to add static typing to Lua, but from my point of view (as a language developer) none was really good when compatibility with Lua at the language level (as with TypeScript vs. JavaScript) is a goal. With my own languages I only reused the engine, without Lua compatibility, and the resulting language is much better suited for large-scale software engineering. But the approach based on LuaJIT turned out to be too brittle, so I switched to the CLI Mono engine, which is much faster (by a factor of two) and much more stable.

> Once you're able to typeset something, we'd gladly accept a progress report at TUGboat

Currently I'm trying to find out whether a Pascal-based typesetting language with static typing would be worthwhile or not. Otherwise I will likely implement something like Typst (the latter is actually the reason I started this journey: on the one hand it is a much better language than TeX, but on the other hand the typesetting quality is much worse, and judging by the roadmap that doesn't seem likely to change for years).


(Author of speedata Publisher here)

Just a small note: the speedata Publisher ships with a LuaTeX, so you don't need it pre-installed.


Cool, good to know. But I assume you use a standard version, part of TeX Live or a subset thereof? What libraries in addition to the luatex executable do you need? I'm currently integrating the fontloader into the executable binary so that I don't have to deploy a directory structure.

> so that I don't have to deploy a directory structure

LuaTeX itself doesn't depend on any particular directory structure. kpathsea does expect a certain directory structure, but you don't need to use kpathsea—LuaTeX was primarily developed for ConTeXt, which doesn't use kpathsea at all. Without kpathsea, you'll need to supply the appropriate callbacks in a Lua initialization script, but this is fairly simple.


Thanks, I've seen that. So far kpathsea hasn't gotten in my way and has caused less trouble than the Lua code I had to use to (hopefully) achieve optimal typesetting quality. But kpathsea is indeed on my list of disposable parts.

And for a system like speedata, at least the Lua-implemented luaotfload machinery had to be accessible somewhere in a known directory. Now that they have switched to LuaHBTeX (as I likely will as well), this dependency can be avoided without losing typesetting quality (as far as I've understood so far).


I use the standard binary from TeX-live and use the integrated harfbuzz loader, which works like a charm.

The downloadable ZIP is ready to run and does not need any additional libraries. (That said, I include some helpers, for example for https access. )


Thanks for the info. I now have a pretty lean stand-alone build of luatex 1.10.0 without all the autotools fuss. Currently I'm testing on Linux.

> stand-alone build of luatex 1.10.0

Why such an old version? The latest version is 1.24.0, which should have lots of new features and fixes relative to v1.10. And if you're starting from scratch and not planning on using any of the TeX stuff, I'd really recommend using LuaMetaTeX since it cleans lots of stuff up, and doesn't use autotools at all.


The idea was to use the version which was officially declared as "complete" and also the last before the development of the HB integration started. My focus is on stability. The plan was to go with a minimal system and even integrate luatex-fonts-merged.lua as part of the binary, so there are no external dependencies.

The problem seems rather to be that luatex-fonts-merged.lua is not really usable outside of the ConTeXt or TexLive tree, because it makes a lot of assumptions which are not met in my environment, and finding such dependencies in ~40 kLOC of a dynamic language is nearly impossible. There was e.g. the assumption that the global utf variable pointed to unicode.utf8 (at least that's what I assume). So I had to set that before loading luatex-fonts-merged.lua. I assume there are many more such "tricks". After two days of experimenting, not even a "Hello World" is properly typeset. First the two words were half a page apart, and now they are set on two lines and I can't find out why.

These experiments brought me to the conclusion that implementing such fundamental and complex logic in Lua is a very bad idea. So I'm currently evaluating LuaHBTeX in the stable version 1.18.0. I like that it is still in C99 and C++ <= 11 (at least that's what I've understood so far). I also had an intense look at LuaMetaTeX but don't like the architectural decision to do even more stuff in Lua. I have been using and integrating this language for 25 years and think it is good for a bit of glue code (max. 1-2 kLOC), but larger code bases exceed the limits of practicality of a dynamic language. I even implemented a few statically typed languages which generate Lua source and bytecode to reuse the technology without this disadvantage.


> The idea was to use the version which was officially declared as "complete" and also the last before the development of the HB integration started.

HB is implemented as just another library, and it's pretty easy to exclude with the build script (ConTeXt does this).

> The problem seems rather to be, that luatex-fonts-merged.lua is not really usable outside of the ConTeXt or TexLive tree

So option 1 is to base things off of `luatex-plain` [0], which I believe is fully self-contained, but mostly undocumented. Option 2 is to base things off of luaotfload, which only depends on lualibs and luatexbase.

> There was e.g. the assumption that the global utf variable pointed to unicode.utf8 (at least that's what I assume).

It needs to be in the environment where you load the file, but that doesn't necessarily need to be the global environment. This is the fourth argument of the Lua "load" function.

> First the two words were half a page apart

I would randomly guess that your issue is that \parfillskip is initialized to 0pt from iniTeX, so the problem might go away if you typeset the text in an \hbox or node.hpack, or if you set \parfillskip/tex.parfillskip to "0pt plus 1fil".

> I like that it is still in C99 and C++ <= 11

I think that the latest versions of LuaTeX still only use C99 features, but they're also C23 compatible. The only thing that should use newer versions of C++ is HB I think.

> I also had an intense look at LuaMetaTeX but don't like the architectural decision to do even more stuff in Lua.

Fair enough :). The main thing that was moved from C to Lua was the PDF backend, because it's seriously unfun to write a PDF parser/writer in C.

[0]: $TEXMFDIST/tex/generic/context/luatex/


> base things off of `luatex-plain`

I have that on the radar indeed, but as far as I understand it still depends on plain.tex, which in turn depends on tons of 1986-era stuff (like old fonts) that I want to avoid. But due to my present experiments with the 40 kLOC Lua machinery, I'm scared of integrating this thing and would rather opt for avoiding it.

> that \parfillskip is initialized to 0pt from iniTeX

I did a lot of instrumented runs where values were printed and looked "unsuspicious". My best guess at the moment is that the Lua-based machinery has an issue with the space in DejaVuSans.ttf, but that's just an assumption. I have already stopped going down this path due to its apparent fragility and my reservations about the architectural approach (I initially thought that everything required for high-quality typesetting was already part of the luatex executable, and only realized this fragile dependency when running larger test cases on my downsized engine version).


LuaHBTeX support has improved in the current version (next year's TeX-live), for example support for subsetting fonts.

But this requires C23 and C++17 now, doesn't it?

The dependencies might have a minimum C++ version, but I'd be pretty surprised if the core LuaTeX files couldn't be compiled with C99/C11. The recent C23 commits are just about adding support for compiling in C23 mode, so the main change is no longer using the implicit "everything is an int" function prototypes.

Thanks. The HB C++17 dependency is already a show-stopper for me. I want to be able to build everything on GCC 4.8, or even better 4.7. I also did some research into what actually changed since 1.18.0 and didn't find any features my project would depend on.
