"Why do we use scientific languages?" is very true.
Applause to the Julia contributors for their work on this innovative language, with great out-of-the-box support for modern chip architectures. However, they have fibbed to build momentum, particularly with their performance benchmarks. The tests pit Julia compiled with OpenBLAS against out-of-the-box versions of the other languages. Moreover, the benchmark code for the other languages is written in a style that is among the slowest possible implementations in each language; no seasoned programmer in any of those languages would write code that way.
It also seems that there is co-ordination to get Julia posts the most attention, through timing and upvoting.
Nonetheless, credit where it's due. It should become an awesome language, marketing hacks notwithstanding.
We use whatever BLAS is linked to in a commonly available official distribution. Julia was one of the first to take this seriously and bundle a high performance BLAS as the default - and I think more projects are following our lead and doing the same. Also, only one benchmark actually uses BLAS.
As for co-ordination on Julia posts - there is none. We submit all our blog posts to HN, and while some do reach the front page, many others do not.
It would be interesting to see what happens if each language were compiled for the benchmark server, linked against the same BLAS, and an expert implementation of the tests in each language were allowed; that would be a scientifically valuable experiment. Just to remove all doubt about the relative performance :-)
You're right, using a doubly-recursive algorithm [1] for `fib` is a terribly naive and uncharacteristic way to write it in any language, including Julia. But it's a wonderful proxy for the cost of a function call. It's also quite scientific: there's an absolute truth about the correctness of an implementation, and all languages must use the same doubly-recursive scheme.
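For reference, here is a minimal sketch of the doubly-recursive scheme in Julia (the code in the official benchmark suite may differ in its details):

```julia
# Doubly-recursive Fibonacci: each call spawns two more calls,
# so the runtime is dominated by function-call overhead.
fib(n) = n < 2 ? n : fib(n - 1) + fib(n - 2)

fib(20)  # 6765
```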
It all depends on what you want to measure. The whole point of the micro-benchmark suite is to test very specific language primitives. I'd argue that the current set of benchmarks is more valuable for that than an "expert" implementation would be; the latter may end up simply testing the cleverness or resourcefulness of the expert.
The performance of the primitives, though, seems to get lost in the background behind how the algorithms are implemented, which BLAS is running, whether the code is compiled for the server's architecture, and so on. One could measure the performance of "very specific language primitives" by benchmarking those primitives directly. Stripping out the confounding factors seems fundamental.
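As a minimal sketch of what measuring a primitive directly might look like in Julia, using the BenchmarkTools.jl package (this is not part of the official suite; the function name here is purely illustrative):

```julia
using BenchmarkTools  # assumes BenchmarkTools.jl is installed

# Prevent inlining so the call overhead itself is what gets measured.
@noinline callee(x) = x + 1

@btime callee(1)            # approximate cost of a single function call
@btime parse(Int, "1234")   # approximate cost of parsing an integer
```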
Of course, such a benchmark might not have the same marketing hue as claiming that Julia is 553 times faster than Matlab at parsing an integer, for example.