The funny thing is that textbook economics has all of the answers about why laissez-faire market economics doesn't work as a foundation for economic policy. It's almost as if it's never been about making good policy and always about doing whatever is best for big businesses and the small number of wealthy people who stand to gain the most from minimizing consumer surplus.
The periscope-style vector CRT used in the arcade Battlezone was a claustrophobia- and panic-inducing experience. Glowy, unpixellated 3D, narrow field of vision. Unforgettably cool.
With a new agentic lash-up tearing across the internet every week, pointing the way to "gradient descent" software development, any purchasing manager worth their salt is going to ask some serious questions about their enormous SaaS bill before committing to another expensive long-term contract. It follows that valuations must decline, if only because risks to moats have increased, but also because it makes sense to negotiate hard on pricing when there's fear in your counterparty.
Preposterous. Have you ever worked for a company as a programmer, or for that matter as a manager? They don't just replace products ad hoc. There's an enormous amount of due diligence that goes into any new software product - making sure it fits the company, that it's secure, that it works properly... I recently worked at a small startup that spent about $100,000 a year on Salesforce. I said, you know, we could build this ourselves, and they said what every company I've ever said that to says: "We don't want to support it. We need to cover our ass. Everybody knows how to use this."
Personally I find it extremely rare that I need to do this, given that Polars expressions are so comprehensive, including when.then.otherwise when all else fails.
That one has a bit more friction than pandas because of the return schema requirement -- pandas lets you get away without one, which is bad practice.
It also does batches when you declare scalar outputs, but you can't control the batch size, which usually isn't an issue, though I've run into situations where it is.
because method chaining in Polars is much more composable and ergonomic than SQL once the pipeline gets complex, which makes it superior in an exploratory "data wrangling" environment.
"revolutionary"? It just copied and pasted the decades-old R (previously S) dataframe into Python, including all the paradigms (with worse ergonomics, since it's not baked into the language).
No other modern language will compete with R on ergonomics because of how it allows functions to read the context they’re called in, and S expressions are incredibly flexible. The R manual is great.
To say pandas just copied it but worse is overly dismissive. The core of pandas has always been indexing/reindexing, split-apply-combine, and slicing views.
It’s a different approach than R’s data tables or frames.
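For concreteness, the three things named above look roughly like this in pandas (toy data, invented names):

```python
import pandas as pd

df = pd.DataFrame({"group": ["a", "a", "b"], "value": [1, 2, 3]})

# split-apply-combine: split rows by key, apply an aggregate,
# combine the pieces back into one result
totals = df.groupby("group")["value"].sum()

# reindexing: align the result against an explicit label set;
# labels with no data come back as NaN instead of raising
aligned = totals.reindex(["a", "b", "c"])
```

Label-based alignment like this is the part with no direct counterpart in base-R data frames, which is the distinction being drawn.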
> allows functions to read the context they’re called in
Can you show an example? Seems interesting, considering that code knowing about external context is not generally a good pattern for maintainability (security, readability).
I’ve lived through some horrific 10M-line ColdFusion codebases that embraced this paradigm to death - they were a whole other extreme, where you could _write_ variables in the scope of where you were called from!
I can write code like:
penguin_sizes <- select(penguins, weight, height)
Here, weight and height are columns inside the dataframe, but I can refer to them as if they were objects in the environment (i.e., without quotes) because the select function looks for them inside the penguins dataframe (its first argument).
This is a very simple example, but it's used extensively in some R paradigms.
Dataframes first appeared in S-PLUS in 1991-1992. R then copied S, and from roughly 1995-1997 onwards R started to grow in popularity in statistics. As free and open-source software, R started to take over the market among statisticians and other people who had been using other statistical software, mainly SAS, SPSS and Stata.
Given that S and R existed, why were they mostly not picked up by data analysts and programmers in 1995-2008, and only Python and Pandas made dataframes popular from 2008 onwards?
Exactly. I was programming in R in 2004 and Pandas didn't exist. I remember trying Pandas once and it felt unergonomic for data analysis, and it lacked R's vast library of statistical analysis packages.
cool, but it doesn't sound that great when you close your eyes and just listen. Other synths beat this hands down, especially at > $1000, and can easily bring in the physical world already, including live workflows. The thing is, once we get into the physical analogue world, craftsmanship, materials, shape, often age, and of course the varied kinetic interactions with whatever excites the sound, bring a depth and richness which no little electrically-excited xylophone will ever get anywhere close to.
Ultimately what's happening here is AI is undermining trust in remote contributions, and in new code. If you don't know somebody personally, and know how they work, the trust barrier is getting higher. I personally am already ultra vigilant for any github repo that is not already well established, and am even concerned about existing projects' code quality into the future. Not against AI per se (which I use), but it's just going to get harder to fight the slop.