Hacker News | cmdrk's comments

> too expensive for our internal server needs; not the right fit for our datacenter partners/customers

You and me both. They're doing neat stuff, but I wonder how many other potential customers feel that way too.

What is Oxide's market? It feels a bit like advanced alien technology that is ultimately a little too weird and expensive for most enterprises to adopt.


I always thought a company like Railway would be an Oxide customer. But Railway is building their own servers in their own datacenters. So I am really curious who is small enough to buy Oxide, but large enough to need Oxide?

The same sorts of customers that SGI used to sell to in the pre-cloud era. DoD. Oil and gas. Finance.

People with deep pockets and good reasons to want to keep certain parts of their infra very close to home. Also the kind of people that expect very highly skilled people to show up and get their in-house app running.

(I was an SGI HPC customer once. I still miss the old SGI. Sigh.)


Maybe DigitalOcean?

how does it compare to Nutanix?

Off topic, but where the hell did Nutanix come from? I'd never heard of them until recently, and all of a sudden they're being marketed as a serious competitor to VMware etc.

They have been around for a long time and were one of the first to have a hyper-converged solution where all storage in the nodes is pooled and usable by any node. They also have their own hypervisor. You can get 4 nodes per 2U, so it's pretty dense. In the datacenter my company uses, another company had dozens of Nutanix boxes sitting in the hallway for months before they were finally installed. They are pretty notoriously expensive, so they're really only used by companies with big IT budgets.

OK, must be one of those weird things where I just never noticed them. Super strange because I've been very aware of this space for decades.

They have been around for nearly 20 years. I viewed them as an also-ran until Broadcom decided they didn't need any of us as VMware customers anymore. Now Nutanix seems like a viable path for on-prem VM workloads that need a new home, for those who don't want to pay an arm and a leg for licensing but can't move to public cloud either. I'm not sure how much of that market Oxide can capture. Not sure Nutanix is still doing the hyperconverged hardware themselves anymore.

Nutanix / Oxide have a VERY different market / customer base.

I've been curious about Oxide for a year or two without fully understanding their product. People talking about the "hyperconverged" market in this thread gave me an understanding for the first time.

Given this, can you help me understand in what ways they are different?

When I went to the Nutanix website yesterday, the link showed that I'd previously visited them (not a surprise; I look up lots of things I see mentioned in discussions), but their website does an extremely poor job of explaining their business to someone who lacks foundational understanding, even after I'd just started reading about "hyperconverged."


If you want to KNOW the chain of custody for all of your OS and software, from the bootloader to the switch chip, and you want to run the virtualization platform airgapped, buying at rack scale, you want Oxide. They make basically everything in-house. That's government, energy, finance, etc.: customers that need discretion, security, and something that works very reliably in a high-trust environment, with a pretty high level of performance.

Also check this out: https://www.linkedin.com/posts/bryan-cantrill-b6a1_unbeknown...

If you need a basic "vm platform", VMware, Proxmox, Nutanix, etc. all fit the bill with varying levels of features and cost. Nutanix has also been making some fairly solid Kubernetes plays, which is nice on hyperconverged infrastructure.

Then if you need a container platform, you go the opposite direction - Kubernetes/OpenShift and run your VMs from your container platform instead of running your containers from your VM platform.

As far as "hyperconverged"...

"Traditionally," with something like VMware, you ran a 3-tier infrastructure: compute, a storage array, and network switching. If you needed to expand compute, you just threw another 1U-4U box on the shelf, wired it up to the switch, provisioned the network to it, provisioned the storage, added it to the cluster, etc. This model has some limitations, but it scales fairly well with mid-level performance. Those storage arrays can be expensive, though!

With "hyperconverged", you get bigger boxes with better integration. One-click firmware upgrades for the hardware fleet, if desired. Add a node, it gets discovered and automatically provisioned according to the configuration options you've set. The network switching fabric is built into the box, as is the storage. This model brings everything local (with a certain amount of local redundancy in the hardware itself), which makes many workloads blazing fast. You may still occasionally need to connect to massive storage arrays somewhere if you have very large datasets, but it really depends on the application workloads your organization runs. Hyperconverged doesn't scale compute as cheaply, but in return you get much faster performance.


https://news.ycombinator.com/item?id=30688865

Here is an answer by steveklabnik about this topic.


they've been marketed as a serious competitor to vmware for 15 years. their sales reps might've just not found you until recently. but we did a POC with them 10 years ago and i don't believe much has changed since

Is that what DO is using at the moment? Never heard about it

bcantrill gave a great talk many years ago about compute-data locality. would be nice to know if those ideas panned out for some customers, but it seems the world has by and large continued to schlep data back and forth.

it's too bad too. The concepts behind Manta were such a great idea. I still want tools that combine traditional unix pipes with services that can map-reduce over a big farm of hyperconverged compute/storage. I'm somewhat surprised that the kubernetes/cncf-adjacent world didn't reinvent it.
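For anyone unfamiliar, Manta's pitch was roughly "run your shell one-liners where the objects live." A toy sketch of the idea in Python (illustrative only, not Manta's actual API; in Manta the map phase was scheduled on the node that already held each object):

```python
def map_phase(text):
    # the "map": an ordinary per-object computation, e.g. grep -c ERROR
    return sum(1 for line in text.splitlines() if "ERROR" in line)

def reduce_phase(counts):
    # the "reduce": aggregate the per-object results
    return sum(counts)

# local strings stand in for stored objects; the point of Manta was that
# map_phase ran next to the data instead of the data moving to compute
objects = {
    "logs/2024-01-01.log": "ok\nERROR disk\nok",
    "logs/2024-01-02.log": "ERROR net\nERROR disk",
}

total = reduce_phase(map_phase(body) for body in objects.values())
print(total)  # 3
```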


Random assortment of projects as time allows with the $JOB.

- Prototyping a cute little SSH-based sorta-BBS, inspired by the Spring '83 protocol, but terminal-centric rather than web-based. It's called Winter '78, and if we get another Great Blizzard this year, I'll be able to make some progress on it!

- Another prototype, for an experimental HPC-ish batch system. Using distributed Erlang for the control plane, and doing a lot of the heavy lifting with systemd transient units. Very much inspired by HTCondor as well as Joyent's (RIP to a real one) Manta.


CephFS implements a (fully?) POSIX filesystem, while TernFS seems to make tradeoffs, losing permissions and mutability for further scale.

Their docs mention they have a custom kernel module, which I suppose is (today) shipped out of tree. Ceph's is in-tree, and Ceph also has a FUSE implementation.

The docs mention that TernFS also has its own S3 gateway, while RADOSGW is fully separate from CephFS.


My (limited) understanding is that cephfs, RGW (S3), RBD (block device) are all different things using the same underlying RADOS storage.

You can't mount and access RGW S3 objects as cephfs or anything, they are completely separate (not counting things like goofys, s3fs etc.), even if both are on the same rados cluster.

Not sure if TernFS differs there, would be kind of nice to have the option of both kinds of access to the same data.


Does their training corpus respect copyrights or do you have to follow their opt out procedure to keep them from consuming your data? Assuming it’s the latter, it’s open-er but still not quite there.


Your question is addressed in opening abstract: https://github.com/swiss-ai/apertus-tech-report/raw/refs/hea...

> Unlike many prior models that release weights without reproducible data pipelines or regard for content-owner rights, Apertus models are pretrained exclusively on openly available data, retroactively respecting robots.txt exclusions and filtering for copyrighted, non-permissive, toxic, and personally identifiable content.


Afaik they respect robots.txt at crawl time, and later, when using the data, they re-check robots.txt and exclude the data if it was updated to deny access. They do further data filtering too, but for that you'd better check the technical report.
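A rough picture of what such a re-check step looks like, using Python's stdlib parser (this is illustrative only; the bot name and paths are made up, and the Apertus pipeline's actual tooling is described in their tech report):

```python
from urllib.robotparser import RobotFileParser

# stand-in for a freshly re-fetched robots.txt that now denies a path
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# data under a now-disallowed path would be dropped from the corpus
print(rp.can_fetch("SomeBot", "https://example.com/private/page"))  # False
print(rp.can_fetch("SomeBot", "https://example.com/public/page"))   # True
```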


I feel that the article draws a false equivalence between skepticism and doomsaying. If anything, thinking AI is as dangerous as a nuclear weapon signals a true believer.


TFA doesn't even draw an "equivalence" between those two positions; it merely misuses the word "skeptic" to mean "true believer in the Singularity."

TFA mourns the disappearance of true believers — those pundits saying LLMs would quickly achieve AGI and then go on to basically destroy the world. As that prediction became more obviously false, the pundits quietly stopped repeating it.

"Skeptics" is not, and never was, the label for those unbridled believers/evangelists; the label was "AI doomers." But an essay titled "Where have all the AI doomers gone?" wouldn't get clicks because the title question pretty much answers itself.


Exactly. “AI will take over the world because it’s dangerously smart” is the exact opposite of skepticism!

There are different arguments as to why AI is bad, and they’re not all coming from the same people! There’s the resource argument (it’s expensive and bad for the environment), the quality argument (hallucinations, etc.), the ethical argument (stealing copyrighted material), the moral argument (displacing millions of jobs is bad), and probably more I’m forgetting.

Sam Altman talking about the dangers of AI in front of Congress accomplishes two things: It’s great publicity for AI’s capabilities (what CEO doesn’t want to possess the technology that could take over the world?), and it sets the stage for regulatory capture, protecting the big players from upstarts by making it too difficult/expensive to compete.

That’s not skepticism, that’s capitalism.


I am also tired of this whole "hallucination" nonsense

These LLMs are buggy as hell. They say they can do certain things - reasoning, coding, summarizing, research, etc - but they can't. They mangle those jobs. They are full of bugs, and the teams behind them have proved they can't debug them. They thought scaling laws would get them out of it, but that proved as unfruitful as it was illogical.

What class of software can work this badly and still have people convinced the only solution is to double the amount of compute and data they need, again?


> What class of software can work this badly and still have people convinced the only solution is to double the amount of compute and data they need, again?

Cloud providers :)))


And chip producers


My biggest worry (and I still have some of those other concerns) is for school-age children using it instead of having to learn how to read for information and to write in their own words.

For everyone who argues, "naysayers said that letting schoolchildren use calculators would ruin their minds, but it didn't," how many people do you know who can make a good estimate of their share of a restaurant bill without pulling out their phones? Think about how that translates to how well they grasp at a glance what they're getting themselves into with Klarna, car loans, etc.
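The kind of estimate that comment has in mind, as a quick sketch (the numbers are made up for illustration):

```python
# Estimating your share of a restaurant bill without a calculator:
# round to friendly numbers instead of computing exactly.
bill = 87.40          # actual total
people = 4
tip_rate = 0.20

exact = bill * (1 + tip_rate) / people
# Mental version: call the bill "about 90", add ~20% tip: 108 / 4 = 27.
estimate = round(90 * (1 + tip_rate) / people)

print(round(exact, 2))  # 26.22
print(estimate)         # 27
```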


It also only seems to be interested in what tech CEOs have to say - people who were as disingenuous about their doom mongering as they were about their gold rush mentality.


just wait for the AI summary


The free version of Gemini 2.5 mini is great for this; it doesn't need a transcript and apparently can analyse the video as well.


Not to pick on you, but there are always posts like this in every Erlang thread. One is not strictly superior to the other, and the BEAM community benefits from the variety IMO.


I wouldn't mind the Erlang-dominated front page coming back :)


Seconded :)


Containerization is amazingly great for scientific computing. I don’t ever want to go back to doing the make && make install dance and praying I’ve got my dependency ducks in a row.


The only real feature of Docker is the ability to keep unmaintained software running as the world around it moves forward. Academics could do the same thing by just distributing read-only VMs as well.


Surely containers make it far easier to deploy/distribute maintained software, since it's so much easier for people to switch to a newer version without worrying that incompatible versions of libraries etc. will break something else. They can be used to pin a specific older version of software for "reasons", but I think that's less common.

Consider people using a containerised nginx webserver as a reverse proxy: it's so much easier to keep it up to date compared to using a distribution's version of nginx.
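For instance, a minimal sketch of that setup (image tag, ports, and paths here are illustrative assumptions, not from the comment above) — upgrading the proxy is just bumping one pinned tag and re-pulling, with no risk to the host's packages:

```yaml
# docker-compose.yml (sketch)
services:
  proxy:
    image: nginx:1.27-alpine     # pin a specific upstream release
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    restart: unless-stopped
```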


Containerization is great. Docker != containerization. Most people don't even know it runs qemu under the hood.


Do you have info on this?

I only found that it can use qemu to build or run images for a different CPU architecture than your computer's.

Why would it use qemu? Docker does containerization, not virtualization.

