The point that the trinity of logs, metrics and traces wastes a lot of engineering effort on pre-selecting the right metrics (and labels) and on storage (by keeping much of the same information in triplicate) is a good one.
> We believe raw data based approach will transform how we use observability data and extract value from it.
Yep. We have built quuxLogging on the same premise, but with more emphasis on "raw": Instead of parsing events (wide or not), we treat it fundamentally as a very large set of (usually text) lines and optimized hard on the querying-lots-of-text part. Basically a horizontally scaled (extremely fast) regex engine with data aggregation support.
Having a decent way to get metrics from logs ad-hoc completely solves the metric cardinality explosion.
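As a concrete sketch of what "metrics from logs ad-hoc" means (hypothetical log lines and regex, not quuxLogging's actual engine): any field in a raw line can become a label at query time, so nothing has to be pre-selected.

```python
import re
from collections import Counter

# Hypothetical raw access-log lines; in practice these would be scanned
# from cheap storage rather than pre-aggregated into metrics.
logs = [
    '2024-05-01T12:00:01Z GET /api/users 200 12ms',
    '2024-05-01T12:00:02Z GET /api/users 500 3ms',
    '2024-05-01T12:00:03Z POST /api/orders 200 40ms',
    '2024-05-01T12:00:04Z GET /api/users 200 9ms',
]

# Ad-hoc "metric": 5xx count per path, derived at query time.
pat = re.compile(r'\S+ (\w+) (\S+) (\d{3}) (\d+)ms')
errors = Counter()
for line in logs:
    m = pat.match(line)
    if m and m.group(3).startswith('5'):
        errors[m.group(2)] += 1

print(dict(errors))  # {'/api/users': 1}
```

Since the grouping key is chosen per query, there is no label set to explode in cardinality ahead of time.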
Many companies have trouble even keeping Prometheus running without it getting OOM-killed, though.
I understand and agree with the problem this is trying to solve; but the solution will rival the actual business software it is observing in cost and resource usage. And hence, just like in quantum mechanics, observing it will drastically impact the event.
Nah, it's fine. Storage of raw logs is pretty cheap (and I think this is widely assumed). For querying, two problems arise:
1. Query latency, i.e. we need enough CPUs to quickly return a result. This is solved by horizontal scaling. All the idle time can be amortized across customers in the SaaS setting (not everyone is looking at the same time).
2. Query cost, i.e. the total amount of CPU time (and other resources) spent per data scanned must be reasonable. This ultimately depends on the speed of the regex engine. We're currently at $0.05/TB scanned. And metric queries on multi-TB datasets can usually be sampled without impacting result quality much.
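A toy illustration of why sampling works for such metric queries (made-up dataset and sampling rate, not our production numbers): scanning 1 in 1000 lines and scaling the count up recovers the answer within a few percent, at a thousandth of the scan cost.

```python
import random

random.seed(42)
n_total = 10_000_000
# Hypothetical dataset in which 1% of lines are errors.
is_error = lambda i: i % 100 == 0

# Scan only 1 in `rate` lines, then scale the count by the sampling factor.
rate = 1000
sample = random.sample(range(n_total), n_total // rate)
estimate = sum(1 for i in sample if is_error(i)) * rate
exact = n_total // 100
print(estimate, exact)
```

With 10,000 sampled lines and a 1% hit rate, the relative error of the estimate is on the order of 10%, which is usually irrelevant when eyeballing a dashboard.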
It's not the storage cost; it's the computational load (memory, CPU, sometimes network) of gathering thousands and thousands of metrics by default, most of which go unused.
Wide events seem like they would require more memory and CPU to combine, and more bandwidth due to their size.
I've implemented services with loggers that gather data and statistics and write out just one combined log line at the end. It's certainly more economical with regard to dev time; I'm not sure how "one large" compares to "many small" resource-wise in practice.
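A minimal sketch of that pattern (illustrative names, not the actual service code): a logger object accumulates fields and counters over the lifetime of a request and emits a single wide JSON line at the end.

```python
import json
import time

class RequestLogger:
    """Accumulates fields during a request, emits one wide event at the end."""

    def __init__(self, **base):
        self.fields = dict(base)
        self.start = time.monotonic()

    def add(self, **kv):
        # Attach arbitrary key/value context to the request.
        self.fields.update(kv)

    def count(self, key, n=1):
        # Increment a per-request counter (e.g. DB queries issued).
        self.fields[key] = self.fields.get(key, 0) + n

    def emit(self):
        # One combined line instead of many scattered log statements.
        self.fields['duration_ms'] = round((time.monotonic() - self.start) * 1000, 1)
        line = json.dumps(self.fields, sort_keys=True)
        print(line)
        return line

log = RequestLogger(route='/api/users', method='GET')
log.count('db_queries')
log.count('db_queries')
log.add(status=200, user_id=42)
line = log.emit()
```

The dev-time win is that any handler can drop context into the logger without worrying about when or how it gets written out.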
I think the license has a loophole where it forgot to include the blood signature on my computer screen, extracted with a carbon-fiber blade made from Stallman's mane.
I had basically the same problem as OP some years back. Then I wrote down, for each and every task, how much I'd value having already completed it (including how much I'd value having experienced doing it), and how long I estimated it'd take. This immediately gives value / hour. Add some categorization (e.g. to only work on work tasks during the week), and voila: Auto-prioritization. Helped a lot with getting the actually important things done and get reminded to do useful long term stuff when idle time arose.
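Sketched in code (the task names and numbers are made up), the scheme is just value divided by estimated hours, plus a category filter for auto-prioritization:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    value: float      # how much I'd value having it done (incl. enjoying doing it)
    est_hours: float  # estimated time to complete
    category: str     # e.g. only surface 'work' tasks during the week

    @property
    def value_per_hour(self) -> float:
        return self.value / self.est_hours

tasks = [
    Task('fix flaky CI job', value=8, est_hours=1, category='work'),
    Task('rewrite billing module', value=50, est_hours=40, category='work'),
    Task('learn some woodworking', value=30, est_hours=10, category='leisure'),
]

def prioritized(tasks, category=None):
    # Highest value per hour first, optionally restricted to one category.
    pool = [t for t in tasks if category is None or t.category == category]
    return sorted(pool, key=lambda t: t.value_per_hour, reverse=True)

for t in prioritized(tasks, category='work'):
    print(f'{t.value_per_hour:5.2f} {t.name}')
```

The interesting effect is that big "important" projects with poor value/hour sink below quick high-leverage tasks, which matches the "getting the actually important things done" experience.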
FWIW, I'm currently rebuilding it as a web-app, if you want to try: https://quuxtodo.com/
I think it's a good thing (tm) that { x sin } could be valid code, with x pushing the value of x whereas sin executes the value of sin.
Theoretically speaking, I need at least the deff and defq distinction, otherwise { } could not be invoked during "parse" time. The defv case could in principle be removed, but as * needs to execute (otherwise nothing ever will), execution would need to be the default or I would need to decide based on dynamic type. The first option would mean a lot of superfluous {} (or | actually) around normal data variables, the second one would destroy the nice similarities between arrays, strings and functions with integer domains.
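To illustrate the trade-off (a toy Python stack machine, not Elymas itself): if each name records whether it was defined as a plain value or as a function, lookup can push or execute accordingly, and { x sin } works without wrapping every data variable in braces.

```python
import math

stack = []
env = {}

def defv(name, value):
    # Value definition: looking the name up pushes the value.
    env[name] = ('value', value)

def deff(name, fn):
    # Function definition: looking the name up executes it.
    env[name] = ('func', fn)

def run(words):
    for w in words:
        kind, obj = env[w]
        if kind == 'value':
            stack.append(obj)
        else:
            obj()

defv('x', math.pi / 2)
deff('sin', lambda: stack.append(math.sin(stack.pop())))
run(['x', 'sin'])
print(stack)  # [1.0]
```

The alternative (execute-by-default) would force the { } wrapping around x here, and deciding by dynamic type would blur the array/string/function correspondence mentioned above.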
It is self-targeted snark all right. Not just not doing it in LISP, but also ignoring pretty much everything I learned about compiler construction in university.
I may only be a few years out of university, but over the last couple years I've learned that what I studied in university falls into three categories:
1) Incredibly out of date.
2) Far newer than what my job requires me to use.
3) Existing in some ideal world that matches nothing anyone actually created or uses in industry.
And yes, it's possible (and not rare at all) for both (1) AND (2) to be true at the same time.
All of the above, at the same time, ends up in the middle of Gartner's hype cycle: a solution was found 20 years ago and people have been searching for the problem it solves ever since.
Define "inspired". I became aware of Forth while implementing Elymas. But the decision to make it stack based was actually because I was too lazy (after some attempts) to implement a correct LALR-parser-generator.