
I don't know. Oracle was written to run on a VAXcluster with a shared disk whose seek time was in the tens of milliseconds, and systems like Postgres are still largely architected for that world. The world has changed a lot: anything you could fit on disk in 2000 fits in RAM today; most of our programmable computing power is in GPUs rather than CPUs; website workloads are insanely read-heavy (favoring materialized views and read-replica farms); SSDs can commit transactions durably in 0.1 ms and support fast random reads while imposing heavy penalties on non-sequential writes; and spinning up ten thousand AWS Lambda functions to process a query in parallel is now a reasonable thing to do.
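To get a concrete sense of that 0.1 ms durable-commit claim, here's a minimal benchmark sketch using Python's stdlib sqlite3 with synchronous=FULL, so every commit has to reach stable storage. The file name bench.db and table t are invented for illustration, and the number you get depends entirely on the drive and filesystem: a datacenter NVMe SSD with power-loss protection can land near 0.1 ms, while consumer hardware is usually much slower.

    import sqlite3, time

    conn = sqlite3.connect("bench.db")
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA synchronous=FULL")  # fsync the WAL on every commit
    conn.execute("CREATE TABLE IF NOT EXISTS t (k INTEGER, v TEXT)")

    n = 100
    start = time.perf_counter()
    for i in range(n):
        conn.execute("INSERT INTO t VALUES (?, ?)", (i, "x"))
        conn.commit()  # durability point: must survive power loss
    elapsed = time.perf_counter() - start
    print(f"{1000 * elapsed / n:.3f} ms per durable commit")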

I think you could make reasonable arguments for SQLite, Postgres, MariaDB, Impala, Hive, HSQLDB, Spark, Drill, or even NumPy, TensorFlow, or Unity's ECS, though those last few lack the "internal representation independence" ("data independence") so central to Codd's conception.
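To make the data-independence point concrete, here's a small sketch (assuming NumPy is installed; the table and column names are invented for illustration). With a NumPy array the caller addresses the physical layout directly, so reorganizing storage silently breaks callers; a SQL engine can change its physical layout behind the same declarative query:

    import sqlite3
    import numpy as np

    # NumPy: "salary" is whichever column you remember putting it in.
    people = np.array([[30, 55000], [40, 72000]])  # rows of [age, salary]
    high = people[people[:, 1] > 60000]  # wrong answers if columns move

    # SQL: the query names logical attributes; the engine owns the layout.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE people (age INTEGER, salary INTEGER)")
    db.executemany("INSERT INTO people VALUES (?, ?)",
                   [(30, 55000), (40, 72000)])
    print(db.execute("SELECT age FROM people WHERE salary > 60000").fetchall())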

What's your opinion?





