Yes, the blue and orange dots are from the water and parks Nodes and Ways in the OSM data.
It doesn't look like the orange and blue colors are part of the theme definitions, so the rendering library may be using some default values. That would explain why they're rendered in the same colors on images that use different theme files.
The March 2025 blog post by Anthropic titled "Tracing the thoughts of a large language model"[1] is a great introduction to this research, showing how their language model activates features representing concepts that only get connected later on, as the output tokens are produced.
The associated paper[2] goes into a lot more detail, and includes interactive figures that help illustrate how the model "thinks" ahead of time.
Is this only about remote MCP servers? The instructions all seem to contain a URL, but personally almost all the MCP servers I'm running locally are stdio-based and not networked. Are you planning to support those in some way?
There's also this new effort by Anthropic to provide a packaging system for MCP servers, called MCPB or MCP Bundles[1]. A bundle is a zip file with a manifest inside it, a bit like how Chrome extensions are structured (maybe VSCode extensions too?).
Is this something you're looking to integrate with? I can't say I have seen any MCPB files anywhere just yet, but with a focus on simple installs, and given that Anthropic introduced MCP in the first place, I wouldn't be surprised if this new format got some traction too. These archives could contain a lot more data than the small amount you're currently encoding in the URL, though[2].
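For anyone curious what handling one might look like: a minimal sketch in Python, assuming the bundle is a plain zip with a manifest.json at its root. I haven't checked the actual layout, so the file name and fields here are guesses.

import json
import zipfile

# Open the bundle like any zip archive and pull out its manifest.
# "manifest.json", "name" and "version" are assumptions based on
# how Chrome/VSCode extension packages are structured.
with zipfile.ZipFile("example.mcpb") as bundle:
    manifest = json.loads(bundle.read("manifest.json"))

print(manifest.get("name"), manifest.get("version"))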
That's a good point. We really think that the future of MCP servers is remote servers, as running "random" software with little to no boundaries and no verification shouldn't be a thing. Is there a specific reason you prefer stdio servers over HTTP servers? Which servers are you using?
> Is there a specific reason you prefer stdio servers over HTTP servers?
Yes: the main reason is that I control which applications are configured with the command/args/environment to run the MCP server, instead of exposing a service on my localhost that any process on my computer can connect to (or worse, on my network if it listens on all interfaces).
I mostly run MCP servers that I've written, but otherwise most of the third-party ones I use are related to software development and AI providers (e.g. context7, Replicate, ElevenLabs…). The last two cost me money when their tools are invoked, so I'm not about to expose them on a port given that auth doesn't happen at the protocol level.
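To make the stdio point concrete: the client spawns the server itself and speaks newline-delimited JSON-RPC 2.0 over the child's pipes, so nothing ever listens on a socket. A rough sketch in Python; the command line is a placeholder and the initialize fields are from memory, so treat them as illustrative.

import json
import subprocess

# Spawn the MCP server as a child process. Only this client holds
# the stdin/stdout pipes, so no other local process can reach it.
proc = subprocess.Popen(
    ["npx", "-y", "some-mcp-server"],  # hypothetical command/args
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

# The stdio transport exchanges newline-delimited JSON-RPC 2.0.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "sketch", "version": "0.0.1"},
    },
}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
print(proc.stdout.readline())  # the server's initialize response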
Most software we install locally is at least distributed via a trusted party (App Store, Play Store, Linux package repos, etc.) and has a valid signature (desktop & mobile), or is contained in some way (containers, browser extensions, etc.).
In the case of MCP, remote servers at least protect you from local file leaks.
They're not just from AI-generated text. Some of us humans use en dashes and em dashes in the right context, since they're easy to type on macOS: alt+hyphen and alt+shift+hyphen respectively.
On both iOS and modern Android I believe you can access them with a long press on hyphen.
Does Dia support configuring voices now? I looked at it when it was first released, and you could only specify [S1] [S2] for the speakers, but not how they would sound.
There was also a very prominent issue where the voices would be sped up if the text was over a few sentences long; the longer the text, the faster it was spoken. One suggestion was to split the conversation into chunks with only one or two "turns" per speaker, but then you'd hear two voices, then two more, then two more… with no way to configure any of it.
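The chunking workaround looked roughly like this; a sketch in Python where the [S1]/[S2] tag format is Dia's, but the generate() call at the end is hypothetical:

import re

def chunk_dialogue(script: str, turns_per_chunk: int = 2) -> list[str]:
    # Split before each [S1]/[S2] tag so turns stay intact, then
    # regroup a couple of turns per chunk to keep each generation
    # short enough to avoid the speed-up problem.
    turns = [t.strip() for t in re.split(r"(?=\[S[12]\])", script) if t.strip()]
    return [
        " ".join(turns[i : i + turns_per_chunk])
        for i in range(0, len(turns), turns_per_chunk)
    ]

script = "[S1] Hi there. [S2] Hello! [S1] How are you? [S2] Great, thanks."
for chunk in chunk_dialogue(script):
    print(chunk)
    # audio = model.generate(chunk)  # hypothetical call, chunk by chunk

Each chunk comes out at a normal speed on its own, but the voices are drawn fresh for every call, which is exactly the problem described above.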
Dia looked cool on the surface when it was released, but at that point it was only a demo, not at all usable for any real use case, even for a personal app. I'm sure they'll get to these issues eventually, but most comments I've seen so far recommending it are from people who can't have actually used it, or they would know about these major limitations.
The following CSS equivalent worked for me, using the "Custom CSS by Denis" Chrome extension[1]:
ytd-rich-grid-renderer div#contents {
  /* number of video thumbnails per row */
  --ytd-rich-grid-items-per-row: 5 !important;
  /* number of Shorts per row in its dedicated section */
  --ytd-rich-grid-slim-items-per-row: 6 !important;
}
I first tried it with the "User JavaScript and CSS" extension, but somehow it didn't seem able to inject CSS on YouTube. Even a simple `html { border: 5px solid red; }` would not show anything, while I could see it being applied immediately with the "Denis" CSS extension.
If someone can recommend a better alternative for custom CSS, I'd be interested to hear it. I guess Tampermonkey could work, if you have that.
The main alternative to LVGL seems to be TouchGFX[1], at least that's the one I've seen mentioned the most in conversations around UI libraries for microcontrollers.
As you wrote, these aren't made for desktop apps, but you can use desktop apps to help with UI development for these libraries.
For LVGL there's SquareLine Studio[2]; I used it a few years ago and it was helpful. For TouchGFX there's TouchGFXDesigner[3]; I haven't used it myself, and it seems to run only on Windows.
It's probably not slower than words; spoken English only runs at something like 150-200 words per minute.
That said, the "gibberlink" demo is definitely much slower than even a 28.8k modem (that's kilobits). It sounds cool because we can't understand it and it seems kinda fast, but this is a terribly inefficient way for machines to communicate. It's hard to say how fast they're exchanging data just from listening, but it can't be much more than ~100 bits/sec if I had to guess.
Even in the audible range you could absolutely go hundreds of times faster, but it's much easier to train an LLM with some audio input capability if you keep this low rate and these very distinct symbols, rather than implementing a proper modem.
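Some rough numbers behind those guesses; the speech figures below are common back-of-the-envelope estimates, not measurements:

# ~150-200 words/min of spoken English, ~5 characters per word,
# and roughly 1 bit of entropy per character (a common
# Shannon-style estimate) gives the effective information rate.
words_per_sec = 175 / 60
speech_bps = words_per_sec * 5 * 1.0   # ~15 bits/sec

gibberlink_bps = 100    # the ~100 bits/sec guess above
modem_bps = 28_800      # a 28.8k modem

print(f"speech     ~{speech_bps:.0f} bps")
print(f"gibberlink ~{gibberlink_bps} bps, ~{gibberlink_bps / speech_bps:.0f}x speech")
print(f"modem      {modem_bps} bps, {modem_bps // gibberlink_bps}x the chirps")

So even a generous 100 bits/sec beats speech handily while still being a few hundred times slower than a 90s modem.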
But why use a modem at all? Limiting communication to audio only is a severe restriction. When AIs are going to "call" other AIs, they will use APIs… not ancient phone lines.