Echopraxia is great. I never understood those who thought it was disappointing. Blindsight is wonderful, but Echopraxia is possibly the more inventive one. It certainly pulls the narrative in a different direction.
I also really, really recommend The Freeze-Frame Revolution. It's about the crew of a starship trying to stop the rogue (sort of) AI that runs everything, the twist being that the crew is constantly under surveillance and must periodically hibernate in shifts for months or years at a time. It's a novella plus a handful of short stories set before and after it (all available for free on Peter Watts' website). Be warned, it's one bleak, dark universe.
Also, don't miss out on "The Colonel" (also on his website), a standalone short story that also happens to be a direct sequel to Blindsight.
Well, it worked for Amazon — Berkeley DB was used extensively there as the main database, right from the beginning. I remember talking to an ex-Amazon engineer in 2006 who said BDB was still the main database used for inventory, and complained that everything was a mess, with different teams using different tech for everything. Around that time Amazon built Dynamo (the internal predecessor of DynamoDB) to solve some of that mess — and it sat on top of BDB.
It worked well for Amazon because they kept it within a tight operating envelope. They used it to persist bytes on disk in multiple, smaller BDBs per node. This kept it out of trouble. They also sidestepped the concurrency and locking problems by taking care of that in the layers above. It was used more like SSTables in BigTable.
They phased out BDB before DynamoDB was launched, some time between 2007 and 2010. By the time DynamoDB launched as a product in 2012(?), BDB was gone.
Can verify. When I started in the catalog department in '97, "the catalog" was essentially a giant Berkeley DB keyed on ISBN/ASIN that was built/updated and pushed out (via a mountain of Perl tools) to every web server in the fleet on a regular cadence. There were a bunch of other DBs too, like for indexes, product reviews, and other site features. Once the files landed, the deploy tooling would "flip the symlinks" to make them live.
Berkeley DBs were the go-to online databases for a long time at Amazon, at least until I left at the turn of the century. We had Oracle databases too, but they weren't used in production, they were just another source of truth for the BDBs.
That was a very intentional strategy. In hindsight, not a good one, of course, but Plus and its integration across the whole company was blessed by Page and Brin, who were quietly panicking that Facebook could eat Google's lunch by becoming the "start page of the Internet" the moment they integrated search. Which they never did and never appear to have wanted.
It's reliable except when it's not. I'm using Mojave, and currently fighting a bug where a local snapshot gets stuck. When I list the local snapshots, I see the old one, then a gap of several days, and then additional snapshots.
From what I can tell, this snapshot is preventing space reclamation. For the last month or so, I've constantly run out of disk space even when not doing anything special. As in actually run out of disk space — apps start to become unresponsive or crash, and I get warning boxes about low disk space. When you run low, the OS is supposed to reclaim the space used by snapshots, but I guess that doesn't happen.
The stuck snapshot can't be deleted with tmutil. I get a generic "failed to delete" error. The snapshot is actually mounted by the backup daemon, but unmount also fails. The only solution I've found is to reboot. Then I get 200-300GB back and the cycle starts again, with snapshots getting stuck again.
I'm considering updating to Tahoe just because there's a chance they fixed it in that release.
It's probably going to be acquired. The last effort to commercialize the TUM (Technical University of Munich) database group's work was acquired by Snowflake and disappeared into that stack.
CedarDB is the commercialization of Umbra, the TUM group's in-memory database led by Professor Thomas Neumann. Umbra is a successor to HyPer, so this is the third generation of the system Neumann came up with.
Umbra/CedarDB isn't a completely new way of doing database stuff, but basically a combination of several things that rearchitect the query engine from the ground up for modern systems: a query compiler that generates native code, a buffer pool manager optimized for multi-core, push-based DAG execution that divides work into batches ("morsels"), and in-memory Adaptive Radix Trees (never used in a database before, I think).
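For intuition, the morsel idea can be sketched in a few lines (illustrative Python of my own, not Umbra's actual engine, which compiles queries to native code): split the input into small batches and let worker threads pull them from a shared queue until it runs dry.

```python
import queue
import threading

def morsel_scan(rows, work, morsel_size=1024, n_workers=2):
    """Toy morsel-driven scan: chop `rows` into morsels and let
    workers pull whole batches from a shared queue."""
    q = queue.Queue()
    for i in range(0, len(rows), morsel_size):
        q.put(rows[i:i + morsel_size])      # one "morsel" per queue entry

    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                morsel = q.get_nowait()
            except queue.Empty:
                return                       # no work left for this thread
            out = [work(r) for r in morsel]  # process a whole batch at once
            with lock:
                results.extend(out)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The point of batching is amortization: per-task scheduling overhead is paid once per morsel instead of once per row, and workers stay busy regardless of how evenly the data splits.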
It also has an advanced query planner that embraces the latest theoretical advances in query optimization, notably techniques for unnesting complex queries and for optimizing plans with a ton of joins. The TUM group has published some great papers on this.
> It also has an advanced query planner that embraces the latest theoretical advances in query optimization, notably techniques for unnesting complex queries and for optimizing plans with a ton of joins. The TUM group has published some great papers on this.
I always wondered how good these planners are in practice. The Neumann/Moerkotte papers are top notch (I've implemented several of them myself), but a planner is much more than its theoretical capabilities; you need so much tweaking and tuning to make anything work well, especially in the cost model. Does anyone have any Umbra experience and can say how well it works for things that are not DBT-3?
Umbra is not an in-memory database (HyPer was). TUM gave up on the feasibility of in-memory databases several years ago (when the price of RAM relative to storage stopped falling).
Thanks for the correction. My understanding was that it was still in-memory but "fell back on" disk. ART indexes were touted as one of the novel aspects of Umbra, and my understanding is that ART doesn't work well as an on-disk data structure, so I guess I need to read up on the architecture now.
No, again, ART was HyPer's specialty. You're right that ART specializes in in-memory workloads; it is not amenable to paging.
I believe Umbra is heavily B-tree based, just like its cousin LeanStore.
One of its specific innovations is its buffer pool which uses virtual memory overcommit and multiple possible buffer sizes to squeeze better performance out of page management.
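The overcommit half of that can be shown in miniature (a toy Python sketch of the general idea, not Umbra's actual buffer manager): reserve far more virtual address space than you expect to need, and rely on the OS to back a page with physical memory only when it is first touched.

```python
import mmap

# Reserve a large anonymous region. On typical systems this consumes
# address space only; physical pages are allocated lazily on first touch.
VIRT_SIZE = 1 << 28   # 256 MiB of reserved address space
PAGE = 4096

region = mmap.mmap(-1, VIRT_SIZE)   # anonymous mapping, no backing file

def write_page(page_no, payload):
    off = page_no * PAGE
    region[off:off + len(payload)] = payload   # first write faults the page in

def read_page(page_no, n):
    off = page_no * PAGE
    return region[off:off + n]
```

With one big contiguous reservation, a page's "pointer" is just an offset, and unused parts of the reservation cost (almost) nothing; that is the property a buffer manager can exploit to support multiple page sizes in one address range.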
My understanding is that the research projects LeanStore and Umbra (and now, I assume, the product CedarDB, based on the people involved) are systems built on two observations: a) existing on-disk systems aren't built well with the characteristics of NVMe/SSD drives in mind, and b) RAM prices up to this year were not dropping at the rate they did in the early 2010s, meaning that pure in-memory databases were not so competitive, so it's important to look at how we can squeeze performance out of systems that perform paging. And of course, in the last 6 months this has become extremely relevant with the massive spike in RAM prices.
That and the query compilation stuff, I guess, which I know less about.
In-memory DBs were always a dead-end waste of time. There will always be bigger slower cheaper secondary storage, no matter how cheap main memory gets. And workloads always grow to exceed the available main memory (unless a system is dying). And secondary storage will always be paged, because it's too inefficient to address it at smaller granularity.
That's just reality, and anyone who ignores reality isn't going to get very far.
Thanks for the corrections and info. Will check out that video. I could have sworn I had read these things about Umbra, but I suppose it was HyPer. Both interesting designs!
I am currently writing an on-disk B+tree implementation inspired by LMDB's blazing fast memory-mapped CoW approach, so I'm quite keen to learn what makes TUM's stuff fast.
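For anyone else going down this road: the core CoW move is to copy only the nodes on the root-to-leaf path, so readers of the old root keep a consistent snapshot for free. A sketch (my own, using a plain binary tree for brevity; LMDB applies the same path-copying to B+tree pages):

```python
class Node:
    __slots__ = ("key", "val", "left", "right")
    def __init__(self, key, val, left=None, right=None):
        self.key, self.val, self.left, self.right = key, val, left, right

def cow_insert(root, key, val):
    """Insert without mutating the existing tree: only the nodes on the
    root-to-leaf path are copied; untouched subtrees are shared."""
    if root is None:
        return Node(key, val)
    if key < root.key:
        return Node(root.key, root.val,
                    cow_insert(root.left, key, val), root.right)
    if key > root.key:
        return Node(root.key, root.val,
                    root.left, cow_insert(root.right, key, val))
    return Node(key, val, root.left, root.right)   # overwrite in a copy

def lookup(root, key):
    while root is not None:
        if key == root.key:
            return root.val
        root = root.left if key < root.key else root.right
    return None
```

Each insert returns a new root; holding on to an old root is what gives LMDB-style readers a stable view with no locks.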
Yeah I think the way Umbra was pitched when I watched the talks and read the paper was as more as "hybrid" in the sense that it aimed for something close to in-memory performance while optimizing the page-in/page-out performance profile.
The part of Umbra I found interesting was the buffer pool, so that's where I focused most of my attention when reading, though.
I've made a tiny SwiftUI app. It was really difficult to figure out the memory leaks. In fact, I have leaks that I still haven't been able to find. For some reason the heap is fine, but the app continues to allocate virtual memory.
I've thrown Claude and Gemini at the app to try to analyze the code, had them use vmmap and Instruments, asked them to run all of the code in a loop to reproduce the leakage — and still it leaks, slowly, tens of megabytes per day.
I'm sure it's something simple staring me in the face. But the fact remains that Swift's sort-of-automatic-but-not-quite memory model still makes it much harder to reason about memory than Rust or even Go.
I agree, but I think it's difficult to spot memory leaks in SwiftUI because it's such a high-level abstraction framework. When working with the Cocoa and Cocoa Touch libraries, leaks are so much easier to find.
And of course, Apple's UI frameworks != Swift the language itself.
Hunting dangling references in a reference-counted system is like that... that's all I can guess is going on here. Good hunting! I wonder if there's a resource debugger? So far, when I have really had to look, Xcode was sufficient... but there's likely better out there for finding this kind of thing.
I experienced a similar problem, and upon further investigation, I discovered that it occurred in the parts of the code where Swift and Objective-C are bridged. The problem seemed to stem from either the bridging or an issue within the Objective-C code itself.
Perhaps try again and analyze for reference cycles between objects. I agree that ARC has painful subtleties. It existed prior to Swift, being introduced in Objective-C, and is definitely not a perfect memory model.
Personally I avoid using SwiftUI except in bite size chunks like collection view cells. It’s great for that kind of use case but doesn’t scale up well.
I wasn’t of the mind that AppKit/UIKit really needed replacing in the first place, though…
I'm sorry but what exactly are you doing? This is the first time I've ever heard any of this type of reasoning, and well, the fact that you're using AI makes me think you have no clue what you're actually talking about.
If it's a reference cycle, Instruments will find it. If it's a leak, Instruments will find it. However, you seem to be worried about an implementation detail of how you get memory from your CPU, which the Mach kernel handles for you, and is something you don't quite grasp.
Please don't reply with "I asked the stupid generator". Seriously, what is the actual issue you have?
- it adds locking to almost every line of code using classes
- anytime you start holding complex references to classes you get cycles which will not be released
- it’s extremely hard to debug. You have to capture memgraphs at runtime and track dependencies
- it coexists with old systems like autorelease pools, so resources are still not released when all references go away
Try to implement a linked list in Swift and you’ll get a sense of how absurd this is. And remember linked lists are a special case of graphs.
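For what it's worth, CPython is also reference-counted (with a cycle collector bolted on), so the same shape of problem can be sketched there: a strong back-pointer in a doubly linked node creates a cycle that pure refcounting can never free, and the fix, as in Swift, is to make the back-pointer weak. (Names here are my own, for illustration.)

```python
import weakref

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None        # strong reference forward
        self._prev = None       # weak reference backward (like Swift's `weak`)

    @property
    def prev(self):
        # A weakref returns None once its target has been deallocated.
        return self._prev() if self._prev is not None else None

def link(a, b):
    """Doubly link a <-> b without creating a strong reference cycle:
    only the forward pointer keeps its target alive."""
    a.next = b
    b._prev = weakref.ref(a)
```

With two strong pointers the pair would keep each other alive after the last outside reference disappears; with the weak back-pointer, dropping the head frees the whole chain immediately under refcounting.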
It does not count under private memory, so I assume mapped but unused. The last time I asked Claude, it said confidently it was a bug in Swift's networking stack, which I doubt.
That’s the great thing about indiscriminately scraping the internet for knowledge.
I’ll bet Claude was channeling some Reddit guru dripping with swagger born from knowing their understanding of coding is far more advanced than most big-shots in the field— especially impressive because they only wandered into /r/LearnProgramming for the first time several months prior.
Awesome, can you also grab a memgraph or something when it’s leaked a bunch of memory (and maybe put it in a GitHub issue)? I can try it but I don’t really use CircleCI so I’m not confident I can reproduce the issue.
I mean, even if Valgrind ran on macOS, it may still not give anything meaningful, because the debug symbols are probably not going to be the same as those generated by GCC, and even if they were the same, there's still gonna be a bunch of symbols "missing" because of internal Swift name-mangling, and even if that wasn't the case, the emitted code might just not be ABI-compatible with C anyway.
I'm working on Valgrind on macOS, integrating Louis Brunner's work and trying to add a few more fixes. In 2025, support for Intel macOS 10.14, 10.15, 11, and 12 was added. Intel macOS 13 is a bit harder of a nut to crack. And I have lots of issues with ARM, particularly building and testing on anything older than macOS 15.
Swift name-mangling will be an issue. Valgrind's name demangler comes from GNU binutils' libiberty, which does not support Swift AFAIK.
Anyone who liked this article and likes (or is curious about) the poem should check out the great Fiona Shaw's reading of it [1].
You can find recordings by many fine actors, such as John Gielgud and Alec Guinness, and they tend to be dull, monotonous affairs. Shaw is very different. She's an incredible actress, and since the 1990s she's been perfecting the poem as a kind of one-woman show where she reads it as the voices of many characters, which is what the poem (as I understand it) is.
Have you ever listened to Eliot reading it? Just the worst. "Apreel is the crewellest month..."
My thing here though is: this is awesome, Shaw's reading, but is it right? I feel like she's trying to make a coherent character reading at times out of passages deliberately written not to have a clear narrator.
(I write this in the spirit of every thread needing a certain titration of not knowing what the hell they're talking about, as an invitation to those who do, and that inviting cluelessness is the purpose I serve here.)
One of the most annoying things I ever learned about T S Eliot is that he was born in Missouri and didn't move to the UK until his late 20s and just entirely made up that accent.
Stipulating that he did change accents, "just entirely made up" is a strong accusation, considering that linguistic accommodation is a thing. Compare Calpurnia's theory of code switching from ch.12 of "...Mockingbird": https://archive.org/details/dli.bengal.10689.12863/page/n134...
> “That doesn’t mean you hafta talk [AAVE] when you know better,” said Jem.
> Calpurnia tilted her hat and scratched her head, then pressed her hat down carefully over her ears. “It’s right hard to say,” she said. “Suppose you and Scout talked colored-folks’ talk at home—it’d be out of place, wouldn’t it? Now what if I talked white-folks’ talk at church, and with my neighbors? They’d think I was puttin’ on airs to beat Moses.”
Trolling*-wise, I wonder if there's a cousin of the "Almost Politically Correct Redneck", an "Accidentally Woke Reactionary" perhaps?
(No, I don't get any memes via RSS. It'd be grand, though. Imgur post-acquisition [and for all I know, even pre? I had devtools turned on one day when I visited, and never went back...] is a tracking hellhole)
* one way I can tell that I was born and bred in US of A: I'm the only person nervously laughing at the did-they-really-just-say-that during a VO Tarantino film in the local theatre...
It looks like you concluded they were being confrontational when they were only being expressive? Whereas the rest of the audience thought the movie was only being expressive? To what extent did that expose your erstwhile localization (to FR) while the old country had shifted in place? Do I miss anything else?
I'm pretty sure Tarantino (sharing the culture in which I was steeped) meant to be confrontational, but probably the rest of the audience, being blissfully unaware of things "everyone knows" you're not supposed to actually say, thought expressive more than transgressive.
FWIW it was "Hateful 8" (2015), so I don't think the old country has shifted out from under me — unless it's either all been in the last decade, or younger audiences there are also not nervously laughing? Guess I'll have to make some inquiries...
>younger audiences there are also not nervously laughing
This, probably. Maybe feeling a bit of "cringe" (trendy word), but not the same reaction as yours
Note Tarantino has a knack for expressivity over confrontation:
"And I was shocked when I wrote it, because that’s not how I feel. But I was just doing what a writer does: I was being the character and that came out of Chris. This isn’t my philosophy, of course, but that is Chris’s philosophy [but] I didn’t judge Chris."
He's a delightfully arch character, really. His penchant for camouflage is why Pound nicknamed him Old Possum.
I can't recommend Hugh Kenner enough on the modernists. Eliot is one of the main characters of The Pound Era, and the star of The Invisible Poet.
This is from The Pound Era:
But Eliot was a great joker. After jugged hare at the Club ("Now there is jugged hare. That is a very English dish. Do you want to be English; or do you want to be safe?"); after the jugged hare and the evasions, he addressed his mind to the next theme. "Now; will you have a sweet; or ... cheese?" Even one not conversant with his letter to the Times on the declining estate of Stilton [Nov. 29, 1935, p. 15] would have understood that the countersign was cheese. "Why, cheese," said his guest; too lightly; one does not crash in upon the mysteries. There was a touch of reproof in his solicitude: "Are you sure? You can have ice cream, you know." (At the Garrick!)
No, cheese. To which, "Very well. I fancy ... a fine Stilton." And as the waiter left for the Stilton, Eliot imparted the day's most momentous confidence: "Never commit yourself to a cheese without having first ... examined it."
The Stilton stood encumbered with a swaddling band, girded about with a cincture, scooped out on top like a crater of the moon. It was placed in front of the Critic. ("Analysis and comparison," he had written some 40 years earlier, "Analysis and comparison, methodically, with sensitiveness, intelligence, curiosity, intensity of passion and infinite knowledge: all these are necessary to the great critic.") With the side of his knife blade he commenced tapping the circumference of the cheese, rotating it, his head cocked in a listening posture. It is not possible to swear that he was listening. He then tapped the inner walls of the crater. He then dug about with the point of his knife amid the fragments contained by the crater. He then said, "Rather past its prime. I am afraid I cannot recommend it."
He was not always so. That was one of his Garrick personae. An acquaintance reports that at dinner in Eliot's home "an ordinary Cheddar" was "served without ceremony."
The Stilton vanished. After awing silence the cheese board arrived, an assortment of some half-dozen, a few of them identifiably cheeses only in context. One resembled sponge cake spattered with chocolate sauce. Another, a pockmarked toadstool-yellow, exuded green flecks. Analysis and comparison: he took up again his knife, and each of these candidates he tapped, he prodded, he sounded. At length he segregated a ruddy specimen. "That is a rather fine Red Cheshire ... which you might enjoy." It was accepted; the decision was not enquired into, nor the intonation of you assessed.
His attention was now bent on the toadstool-yellow specimen. This he tapped. This he prodded. This he poked. This he scraped. He then summoned the waiter.
"What is that?"
Apologetic ignorance of the waiter.
"Could we find out?"
Disappearance of the waiter. Two other waiters appear.
"?"
"--------."
He assumed, at this silence, a mask of Holmesian exaltation:
"Aha! An Anonymous Cheese!"
He then took the Anonymous Cheese beneath his left hand, and the knife in his right hand, the thumb along the back of the blade as though to pare an apple. He then achieved with aplomb the impossible feat of peeling off a long slice. He ate this, attentively. He then transferred the Anonymous Cheese to the plate before him, and with no further memorable words proceeded without assistance to consume the entire Anonymous Cheese.
That was November 19, 1956. Joyce was dead, Lewis blind, Pound imprisoned; the author of The Waste Land not really changed, unless in the intensity of his preference for the anonymous.
Hugh Kenner is good on a surprisingly wide range of things. This is a publisher's description of a book called The Counterfeiters, first published around 1968:
"Wide-ranging enough to encompass Buster Keaton, Charles Babbage, horses, and a man riding a bicycle while wearing a gas mask, The Counterfeiters is one of Hugh Kenner's greatest achievements. In this fascinating work of literary and cultural criticism, Kenner seeks the causes and outcomes of man's ability to simulate himself (a computer that can calculate quicker than we can) and his world (a mechanical duck that acts the same as a living one)."
Kenner also co-authored a relatively early text generator, called Travesty, that would analyze a source text in terms of n-grams (e.g., 4-letter combinations) and then generate something new to match it. This was published in Byte magazine in 1984.
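The algorithm is easy to reconstruct. A sketch in Python (my own, not the published Byte listing): record which character follows each (n-1)-character window in the source, then repeatedly extend the output by sampling a recorded follower, so every n-gram of the output also appears somewhere in the source.

```python
import random
from collections import defaultdict

def travesty(source, n=4, length=200, seed=None):
    """Order-n character 'travesty' of `source`: every n-character
    window of the output occurs somewhere in the source text."""
    rng = random.Random(seed)
    follow = defaultdict(list)
    for i in range(len(source) - n + 1):
        follow[source[i:i + n - 1]].append(source[i + n - 1])

    state = source[:n - 1]          # seed with the source's opening window
    out = list(state)
    for _ in range(length - len(out)):
        choices = follow.get(state)
        if not choices:             # dead end: restart from the opening window
            state = source[:n - 1]
            choices = follow[state]
        ch = rng.choice(choices)
        out.append(ch)
        state = state[1:] + ch      # slide the window forward one character
    return "".join(out)
```

Small n gives near-gibberish with the right letter statistics; by n = 7 or so the output starts quoting whole phrases of the source, which is exactly the effect Kenner and O'Rourke were playing with.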
"A Travesty Generator for Micros" doesn't ring a bell, so thanks for the pointer. If it wasn't collected in Mazes or Historical Fictions it'll be one of the few things of his I haven't read yet.
Eliot's reading is fascinatingly horrible. I had the same traumatic experience hearing William Gibson reading Neuromancer, which comes across as a kind of parody.
As for "is it right?" — well, it's obviously one person's interpretation, and I would say Eliot's own performance should count as Exhibit 1 in the age-old debate about whether the author is the best interpreter of their own work!
I'd say, though it's certainly a debatable point, that it's precisely because the passages are deliberately written not to have a clear narrator that there is no "right" reading, but rather a multitude of interpretations of which Shaw's is as valid as many others. That's the attitude I'd bring to it, anyway.
It's a myth that Coca-Cola is a closely held secret, though. Any food flavoring specialist can reconstruct the flavor of Coke almost exactly.
A few years ago I (not a specialist!) made lots of batches of OpenCola, which is based partly on the original Pemberton recipe, and it comes so close that nobody could realistically tell the difference. If anything, it tastes better, because I imagine Coke doesn't use fresh, expensive essential oils (like neroli) for everything.
The tricky piece that nobody else can do is the caffeine (edit: de-cocainized coca leaf extract) derived from coca leaves. Only Coke has the license to do this, and from what I gather, a tiny, tiny bit of the flavour does come from that.
> If anything, it tastes better, because I imagine Coke doesn't use fresh, expensive essential oils (like neroli) for everything.
I've not participated in Cola tasting, but assuming fresher tastes better isn't really a safe assumption. Lots of ingredients taste better or are better suited for recipes when they're aged. I've got pet chickens and their eggs are great, but you have to let them sit for many days if you want to hard boil them, and I'd guess baking with them may be tricky for sensitive recipes.
Anyway, even if it does taste better for whatever that means, that's not meeting the goal of tasting consistently the same as Coke, in whichever form. If you can't tell me if it's supposed to taste like Coke from a can, glass bottle, plastic bottle, or fountain, then you've told me all I need to know about how close you've replicated it.
I think my point flew past you: If I can make a 99% clone of Coke in my kitchen, any professional flavorist will do it 100%. The supposed secret recipe isn't why Coke is still around, it's the brand.
And by fresh I do mean: The OpenCola is full of natural essential oils (orange, neroli, cinnamon, lime, lavender, lemon, nutmeg), and real natural flavor oils have a certain potent freshness you don't get in a mass-produced product.
I'm merely making the point that there's nothing magical about the recipe. Anyone wanting to truly replicate it for mass production can simply use commodity flavor compounds.
Coca leaves contain various alkaloids, but not caffeine. Coca Cola gets its caffeine from (traditionally) kola nuts, and (today, presumably) the usual industrial sources.
You had better luck than I did. I tried my hand at making OpenCola and put around $300 into it (primarily between the carbonation rig and essential oils), and while I'd say it was "leaning towards Coke", I would also definitely say that nobody would mistake it for Coke.
I noticed it was incredibly important to get the recipe mixture exactly right, because even a slight measurement error resulted in weirdly wrong flavors.
I did my OpenCola experiment in the company office together with a colleague, and we ended up hooking it up to a beer tap, with a canister of CO2. I'm proud to say the whole office really got into it.
I've tried the native tab support several times, and my impression is that it's good for very little.
It may be OK for certain types of document-oriented apps, but there's a reason most apps don't use it (Chrome and iTerm roll their own; even Safari uses its own tab implementation, I believe). It's underbaked and awkward to fit into a model where your "tab data model" doesn't neatly fit the document data model that the framework wants.
I recently made an app where I wanted tabs, and I just ended up abandoning tab support for this reason, and adding a todo item to use an off-the-shelf tab UI library in the future.
Aren't you confusing two things? When Nixon suspended Bretton Woods, he wasn't refusing to repatriate gold, he was reneging on the promise that dollars could be converted into gold (one of the reasons being that it no longer had enough physical gold to redeem).
So countries like France held dollars and could no longer get them converted to gold. From my reading, before 1971, all the requests by France, Switzerland, and the UK to convert dollars were honoured, contributing to the crisis. I don't believe anyone was refused conversion until 1971. And even then, everyone still had their dollars. All they lost was the ability to redeem them for gold at the fixed Bretton Woods price. Dollars could still be used to buy gold at market value.
But this article is talking about repatriating gold owned by other governments but physically held in vaults by the US. A bunch of countries (including Germany) have already repatriated vast amounts that were housed by the US, and those requests have never been refused. The Federal Reserve is said to even keep the bars owned by other countries physically separate, rather than commingled.
That's a refusal to repatriate, since other countries had sent their gold to the US to exchange for dollars with the understanding they could exchange it back at any time.
Did that happen? I'm not an expert on the history of Bretton Woods, but my understanding is that countries that sent gold to be stored in the US retained ownership of it, and repatriation was never refused.
If they had converted gold to dollars, this only suspended their ability to "rebalance" between gold and dollars, and no value was lost.
The dollars were worth a small fraction of the gold. It's true that no value was lost — because that value was transferred from the other countries to the US.
That doesn't sound right. Money is fungible. Anyone who missed the "window" to convert in 1971 (which major countries like the UK, Germany, and France didn't, as far as I can see) still had their dollars. The US-domiciled gold that backed the dollars had always belonged to the US; so the dollars being an "IOU" makes sense in a colloquial sense, I guess, but it's a gross simplification that misses the broader point about how Bretton Woods affected monetary policy. It shifted the US into being able to maintain a permanent trade deficit, and the US was free to devalue the dollar (effectively the world's default currency) without repercussion, which is good for the US but not for everyone else.
The dollars were worth much less than the gold. As soon as you couldn't exchange dollars for gold any more, the value of a dollar fell by a factor of several to reflect that.