OP here. I built Comrade after years of managing clusters with Kopf, Cerebro, and ElasticHQ. I've always felt like those tools were built for smaller clusters with low latency to the host. Comrade takes a lot of inspiration from Cerebro.
Why do we need to choose between a monolith and microservices? What about simply "services"? A monolith doesn't have to be split into 50 microservices; it can be split into 3 services.
I think there is a lot of grey area in this debate of microservice vs monolith. There are more aspects to this than how many executables you need to start before production is up.
By most of the intent behind the definition of "microservice", we do have many of these within our monolith. There is a nice big folder in our core project called "Services", and within it lurk such things as UserService, SettingService, TracingService, etc., each with its own isolated implementation stack and tests, but common models and development policies. All of these services are simply injected into the DI container of the hosting application. We are injecting approximately 120 dependencies into our core project and it is working flawlessly for us in terms of scalability and ease of implementation. Microsoft DI in AspNetCore is awesome. For the trickier cases we will usually just pass IServiceCollection to whatever needs to arbitrarily reference the bucket of injected services (e.g. rules engines).
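In case a sketch helps: here's a rough TypeScript analogue of the pattern (our real stack is C#/ASP.NET Core with Microsoft DI, and the names and toy container below are made up purely for illustration). The idea is just many small service classes wired up in one composition root, with the container itself handed to the rare consumer that needs the whole bucket.

    // Hypothetical service contracts, named after the examples above.
    interface UserService { getUserName(id: number): string }
    interface SettingService { get(key: string): string | undefined }

    class InMemoryUserService implements UserService {
      getUserName(id: number): string { return `user-${id}`; }
    }
    class InMemorySettingService implements SettingService {
      private values = new Map<string, string>([["theme", "dark"]]);
      get(key: string): string | undefined { return this.values.get(key); }
    }

    // A minimal hand-rolled container standing in for IServiceCollection/IServiceProvider.
    class Container {
      private registrations = new Map<string, () => unknown>();
      register<T>(name: string, factory: () => T): void { this.registrations.set(name, factory); }
      resolve<T>(name: string): T {
        const factory = this.registrations.get(name);
        if (!factory) throw new Error(`No registration for ${name}`);
        return factory() as T;
      }
    }

    // Composition root: the hosting application registers everything in one place...
    const services = new Container();
    services.register<UserService>("UserService", () => new InMemoryUserService());
    services.register<SettingService>("SettingService", () => new InMemorySettingService());

    // ...and a trickier consumer (e.g. a rules engine) can be given the whole container,
    // much like passing IServiceCollection around as described above.
    const users = services.resolve<UserService>("UserService");
    console.log(users.getUserName(42));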
I think you can have the best of both microservices and monoliths at the same time if you are clever with your architecture and code contracts.
Yeah, I don't have much experience with microservices but the one time I worked with them they were a nightmare and I think the major cause was there were too bloody many of them.
I feel like if you end up talking about eventual consistency or needing IDs to be passed around, you've surely built it wrong: you've split what should be a single service into a mess, just to feel cool about having more services and needing the hyped new tools to manage them.
The touted benefit that you can scale the bottleneck separately also suggests you need, at most, 1 monolith plus 1 or 2 services. If you have more bottlenecks than that, the code is doomed anyway, presumably?
Yeah, seems like people like extremes. I've seen a website split into multiple services that are largely independent, and to make things manageable between teams, the monoliths were split into smaller libraries.
The big thing that people often underestimate is the complexity of facilitating communication between microservices. Not only do you need to plan the API well, you also have to figure out the routing, and you get overhead on top of that.
No, tiers mean slicing the application horizontally by technology, basically, i.e. "all database stuff here" and "all frontend stuff here". To me that's like "let's put all the house's sinks in one room to ease repairs for the craftsman".
Services are more like feature folders? Where you define everything you need related to a VERTICALLY sliced part of your application, i.e. Products, or whatever.
That looks great, thank you! I'll try it out and send you any feedback I have. Also, you might want to link to Sonic in the README, for people unfamiliar with it.
So all you need is:
Follow the first 3 steps on the landing page (install the app on your account), then add --log-junit unit-tests.xml to your test step, and finally send the file: python <(curl https://raw.githubusercontent.com/tomato-bot/tomato/master/t...) unit-tests.xml
If you click on other cells, the values keep increasing. Otherwise, it eventually gives NaN. If you change the order of the steps and initialize A2 first, it converges to NaN immediately.
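My guess at the mechanics, as a rough TypeScript sketch (not the demo's actual code): each recalculation evaluates the formulas against the previous values, so two mutually-referencing cells keep growing as long as both hold numbers, but as soon as one of them is missing the substituted expression evaluates to NaN, and NaN never recovers.

    // Two cells that reference each other, re-evaluated naively with eval.
    const formulas: Record<string, string> = { A1: "A2+1", A2: "A1+1" };
    const cells: Record<string, number> = { A1: 0, A2: 0 }; // drop A2 here: NaN right away

    function recalc(): void {
      for (const [name, formula] of Object.entries(formulas)) {
        // Substitute cell references with their current values, then eval the arithmetic.
        const expr = formula.replace(/[A-Z][0-9]+/g, (ref) => String(cells[ref] ?? NaN));
        cells[name] = eval(expr) as number;
      }
    }

    for (let i = 0; i < 5; i++) { recalc(); console.log({ ...cells }); }
    // With both cells initialized, the values keep increasing on every pass;
    // with one of them uninitialized, everything converges to NaN immediately.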
> I guess you would have to sanitize when you save and/or load the spreadsheet
Sanitizing? No chance. Either you have a dedicated expression parser, or you run it directly through eval. There is no reliable middle ground. Decades of security failures of so-called "sanitizers" show this pretty clearly.
(Even if you manage to create a perfect sanitizer today, wait a few months, new features are added to the browser, and new loopholes will appear out of nothing.)
But that may be missing the point, because if you want more code quality, more safety and more features, of course you need more code. This demo illustrates the other way around: If you allow for dirty hacks, you can get away with a surprisingly small amount of code.
Blacklisting (checking that the input doesn’t contain any of a fixed set of known troublemakers) is asking for trouble, but whitelisting (checking that the input doesn’t contain anything but a fixed set of known safe constructs) should be fine.
If your whitelist allows a wide range of constructs, it isn't much easier to check that an input is in the allowed set than to write an evaluator that is limited to that set, so it may not be much of an advantage to have a more powerful "eval" lying around.
Is there really no middle ground? Sanitizers fail because they try to salvage the clean part, only blacklisting some possible inputs. But what if you turn it around? Only send to eval what fits through a matcher for a very small subset of the language. The matcher can even allow invalid inputs if you know that eval will safely reject them (think unbalanced brackets). That matcher will be much easier and safer to implement than a full parser/interpreter for the same subset.
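Something like this, say (a TypeScript sketch of the idea, with a token set I just made up, and certainly not audited):

    // Matcher for a tiny subset: cell references (A1, B12, ...), numeric literals,
    // + - * / ( ) and whitespace. Anything else is rejected up front.
    const SAFE = /^(?:[A-Z][0-9]+|[0-9]+(?:\.[0-9]+)?|[+\-*/()\s])*$/;

    function evalFormula(src: string, cells: Record<string, number>): number {
      if (!SAFE.test(src)) throw new Error("rejected by matcher");
      // Replace cell references with their numeric values before handing off to eval.
      const expr = src.replace(/[A-Z][0-9]+/g, (ref) => String(cells[ref] ?? NaN));
      // Unbalanced brackets and similar garbage still fit through the matcher;
      // eval itself rejects them with a SyntaxError, which is fine.
      return eval(expr) as number;
    }

    console.log(evalFormula("(A1 + 2) * 3", { A1: 4 })); // 18
    // evalFormula("alert(1)", {})  -> throws "rejected by matcher"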
> Only send to eval what fits through a matcher for a very small subset of the language
That's exactly what I meant by "dedicated expression parser".
(Not sure why you name it "matcher", though. Please be aware that a regex-based matcher will almost certainly fail for that task. You usually want a grammar, i.e. parser, which is more powerful, and shorter, and easier to read and to verify.)
EDIT: To those who downvoted my clarification, do you care to elaborate?
There is a difference between a recogniser, which answers the question "does this belong to the language", and a parser, which outputs a data structure. All you need here is a recogniser, and then pass the string through to eval, which will do its own parsing. Recognisers are smaller than parsers.
If you relax the rules, as the gp said, you can get away with something like a regex to do the job. While regexes are bad at context-free grammars [0], if you forgo balancing brackets etc. a regex will do just fine.
All that said, with the crazy things JS lets you do [1], a recogniser for a relaxed language is likely to still let potentially dangerous code through.
[0] Yes, with most regex engines you can parse CFGs, but it's not nice, and at that point you _do_ want a grammar-based parser.
Please note that the term "recogniser" is very fuzzy; it could mean a regex matcher, a parser, or even a Turing-complete thing. Not very helpful for this discussion.
> a parser, which outputs a data structure
Please note that a parser is not required to output a data structure. In classic computer science, the parser of a context-free grammar usually has a minimal (boolean) output: it just either accepts or rejects the input.
If your "recognizer" is too weak (e.g. regexes), you risk not properly checking the language (see below).
If your "recognizer" is too powerful (e.g. turing complete), you risk tons of loopholes which are hard to find and hard to analyze. You probably won't be able to prove the security, and even if you do, it will probably be hard work, and even harder for others to follow and to verify.
But if your "recognizer" is a parser, you have a good chance to succeed in a safe way with minimal effort. Proving security is as simple as comparing your grammar with the ECMAScript standard.
> you can get away with something like a regex to do the job [...] with the crazy things JS lets you do a recogniser for a relaxed language is likely to still let potentially dangerous code though
That's exactly my point: Sure, you can try to build a protection wall based on regexes, but there's no reason to do that. Use a proper parser right away and don't waste your time with repeating well-known anti-patterns.
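To make that concrete, here's a toy grammar-based recogniser in TypeScript for roughly the expression subset under discussion (my own sketch, not code from the demo): it only answers accept/reject, builds no tree, and unlike a plain regex it actually checks bracket nesting.

    // Grammar (cell refs, numbers, + - * /, parentheses):
    //   Expr   := Term (("+" | "-") Term)*
    //   Term   := Factor (("*" | "/") Factor)*
    //   Factor := NUMBER | CELLREF | "(" Expr ")"
    function recognise(src: string): boolean {
      const tokens = src.match(/[A-Z][0-9]+|[0-9]+(?:\.[0-9]+)?|[-+*/()]/g) ?? [];
      // Reject if tokenisation skipped anything other than whitespace.
      if (tokens.join("") !== src.replace(/\s+/g, "")) return false;

      let pos = 0;
      const peek = () => tokens[pos];
      const eat = (t: string) => (tokens[pos] === t ? (pos++, true) : false);

      function factor(): boolean {
        const t = peek();
        if (t !== undefined && /^(?:[A-Z][0-9]+|[0-9]+(?:\.[0-9]+)?)$/.test(t)) { pos++; return true; }
        return eat("(") && expr() && eat(")");
      }
      function term(): boolean {
        if (!factor()) return false;
        while (peek() === "*" || peek() === "/") { pos++; if (!factor()) return false; }
        return true;
      }
      function expr(): boolean {
        if (!term()) return false;
        while (peek() === "+" || peek() === "-") { pos++; if (!term()) return false; }
        return true;
      }

      return expr() && pos === tokens.length;
    }

    console.log(recognise("(A1 + 2) * B3"));   // true  -> safe to hand to eval
    console.log(recognise("alert(1)"));        // false
    console.log(recognise("(A1 + 2) * B3)"));  // false (unbalanced)

Since this follows the grammar almost line for line, checking it against the spec is mostly a matter of comparing the two, which is the provability point made above.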
> In classic computer science, the parser of a context-free grammar usually has a minimal (boolean) output: it just either accepts or rejects the input.
All of the literature I remember from my uni days on formal grammars had a recogniser defined as something that accepts/rejects, and a parser as something that builds a data structure.
It's difficult to retrospectively find the literature, because outside of formal grammars recogniser _is_ used more loosely. But a few Wikipedia articles [1] [2] [3] and their referenced literature [4] [5] do agree with me.
> A recognizer is an algorithm which takes as input a string and
> either accepts or rejects it depending on whether or not the string
> is a sentence of the grammar. A parser is a recognizer which also
> outputs the set of all legal derivation trees for the string.
> Either you have a dedicated expression parser, or you run it directly through eval. There is no reliable middle ground.
While there is no safe middle ground, using eval directly is the worst case; it's not that both extremes are reliable and the greater danger lies in between.
That being said, rejecting everything that fails an expression parser is a form of sanitization.