> Seeing comments here saying “this problem is already solved”, “he is just bad at this” etc. feels bad. He has given a long time to this problem by now. He is trying to solve this to advance the field. And needless to say, he is a legend in computer engineering or w/e you call it.
This comment, with the exception of the random claim of "he is just bad at this", reads like a thinly veiled appeal to authority. I mean, you're complaining about people pointing out prior work, reviewing the approach, and benchmarking the output.
I'm not sure you are aware, but those items (bibliographical review, problem statement, proposal, comparison/benchmarks) form the very basic structure of an academic paper, a structure that each and every academic paper on any technical subject is required to present in order to be publishable.
I get that there must be a positive feedback element to it, but pay attention to your own claim: "He is trying to solve this to advance the field." How can you tell whether this really advances the field if you want to shield it from any review or comparison? Otherwise what's the point? To go on and claim that ${RANDOM_CELEB} parachuted into a field and succeeded at first try where all so-called researchers and experts failed?
Lastly, "he is just bad at this". You know who else is bad at a research topic? The researchers specializing in it. Their job is literally to figure out something they don't know. Why should someone who just started be any different?
And indeed, most "nations" have their own homeland. That's exactly what happened in the 19th and 20th centuries: a new nationalism was taking hold in the world, and many nations created a national homeland, hence the creation of so many countries in the 20th century.
The Zionist movement started because early Jewish leaders saw this phenomenon gaining traction, and understood that as these national identities formed and states started being created for them, many wouldn't consider Jews part of their "nationality"; therefore Jews also needed a national homeland. This was, in retrospect, exactly the correct analysis, given the pogroms of the 19th century and given the Holocaust.
Despite so many people claiming otherwise, there's not much about Israel that differs from many other European nations.
Been a freelance dev for years, now beginning a "sabbatical" (love that word).
Planning to do a lot of learning, self-improvement, and projects. Tech-related and not. Preparing for the next volume (not chapter) of life. Refactoring, if you like, among other things.
That seems to fly over a lot of heads.
Anyone who has actually meditated will tell you that fixing yourself through meditation is painstakingly slow: you mostly become aware of how your mind fails to do what it's supposed to, and if you stop meditating you quickly lose all progress.
What the post describes is essentially some form of micro journaling to build a cached hashmap of the thought patterns you want your mind to have.
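The "cached hashmap" metaphor can be sketched in a few lines. This is a toy illustration of the metaphor only, not anything from the post: each micro-journaling entry writes a trigger/response pair, and with enough repetition a lookup replaces the default habitual reaction.

```python
# Toy model of the "cached hashmap" idea: journaling writes entries,
# reacting checks the cache before falling back to the habitual default.
thought_cache = {}

def journal(trigger, intended_response):
    # Each micro-journaling session reinforces (overwrites) an entry.
    thought_cache[trigger] = intended_response

def react(trigger, default="habitual reaction"):
    # A cache hit returns the rehearsed pattern instead of the default.
    return thought_cache.get(trigger, default)

journal("criticism", "pause, then ask a clarifying question")
```

The post's point about losing progress when you stop maps loosely onto cache eviction: without reinforcement, entries decay and lookups fall back to the default.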
Things would actually change: developers would choose to distribute via the alternate means rather than via the App Store.
So, you see, it doesn’t matter whether Apple has the walled garden or the third-party devs have the walled garden. Either way, users will be forced to accept someone’s distribution policy. The real difference lies in trust in Apple and its security and privacy practices, a choice that will be taken away from people who buy iPhones precisely for that purpose.
Not sure why you’re being downvoted. I also don’t like oversized motor vehicles, but I think the analogy is sound:
If the effort of switching when you need the last 1% is higher than whatever premium you pay (compilation time, fuel cost), especially as a small ongoing cost, people will likely choose it.
I’m not saying this as if it’s wisdom about the future, only that we can observe it today with at least a handful of examples.
My configured primary language is English, but I regularly watch content in Chinese and Japanese, which I know well enough not to need YouTube's subpar translation. YouTube's insistence on displaying video titles in English, which started a few months ago, and now on auto-dubbing in English as well, is incredibly annoying.
What's up with the absolutely tiny number of tests per day? 200 tests per day is tiny. My FE team has only two devs and the product is fairly small, but we still managed to amass a couple hundred E2E tests over several years. With tests run in CI on every PR or commit, this won't scale at all.
Presumably, this is backed by some sort of LLM-to-browser MCP integration. If true, how do you ensure tests don't randomly fail because of the inherent unpredictability?
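One common mitigation for nondeterministic steps (a sketch of the general technique, not anything this product necessarily does) is to re-run a flaky check a bounded number of times and only report failure if every attempt fails:

```python
def retry_flaky(attempts=3):
    """Re-run a test-like callable up to `attempts` times; a common
    band-aid for nondeterministic (e.g. LLM- or timing-driven) steps."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:  # real runners usually narrow this
                    last_exc = exc
            raise last_exc
        return wrapper
    return decorator
```

Retries hide flakiness rather than remove it, which is exactly why the question matters; E2E runners like Playwright expose this as a configurable `retries` setting.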
I'm far from an expert, but I would say there will always be people who hate the current enshittified solutions and are looking for the new cool kid in town.
That's the point, isn't it? The missing link. AIs can't yet truly comprehend, or internalize, or whatever you want to call it. That's probably equivalent to AGI or singularity. We're not there yet. Feeding copious amounts of data into existing architecture won't get us there either.
A human with all that data, if it could fit in their brain, would likely come up with something interesting. Even then... I'm not entirely sure it's so simple. I'd wager most of us have enough knowledge in our brains today to come up with something if we applied ourselves, but ideas don't spontaneously appear just because the knowledge is there.
What if we take our AI models and force them to continuously try making connections between unlikely things? The novel stuff likely sits in the parts that don't already have strong connections, not because connections are impossible but because the research there is lacking. But how would it evaluate what's interesting?
When there is no more profit and only loss, I reckon. Shareholders would not be happy if the company pulled out of a region just because it now makes $1 where it used to make $10. If that $1 is profit, they'll want it.
Any argument in this vein must also apply to the Mac. The hardware is mostly the same. The OS is mostly the same. The native apps and services are identical, as is the security story. So why should the Mac not be locked down if the iPhone is? Put another way, what prevents Apple from using the exact same reasoning to lock down my Mac in the future, perhaps under pressure from authoritarian regimes? The technology (notarization) is already in place and actively being abused for iOS app review in the EU.
> The lesson I remember was that conflict in the Cold War was not zero-sum. One side would win and one side would lose. There were (in this game) no win-win outcomes. But - and this is the key point - the value of each win or loss was unequally felt. For the US to back down in Indonesia was disappointing. To back down in West Germany was fatal.
Maybe I'm misunderstanding, but it's not clear to me how this describes something obviously non-zero-sum. Independent contests can carry different stakes in a zero-sum game; what matters is whether each win is proportional to the corresponding loss. If the USSR winning in West Germany had been only a small win for them, that would demonstrate the game was non-zero-sum, given the size of the loss there for the US; but I don't think the magnitude of the outcome in Indonesia relates to that at all.
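The point about unequal stakes can be made concrete with made-up numbers (the payoffs below are purely hypothetical): a game stays zero-sum as long as each win mirrors the corresponding loss, even when the stakes differ wildly between contests.

```python
# Hypothetical (US, USSR) payoffs for two independent Cold War contests.
# The stakes differ by an order of magnitude, yet every cell sums to zero,
# so the game as a whole is still zero-sum.
payoffs = {
    ("Indonesia", "USSR wins"): (-1, +1),       # "disappointing" for the US
    ("West Germany", "USSR wins"): (-10, +10),  # "fatal" for the US
}

for (theater, outcome), (us, ussr) in payoffs.items():
    assert us + ussr == 0  # unequal stakes, still zero-sum
```

It would only become non-zero-sum if some cell stopped summing to zero, e.g. if the USSR's gain in West Germany were smaller than the US's loss there.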
YouTube translation is such a dumb feature. I watch in German and English and have my language set to English. The English translations of German titles are garbage most of the time, because they translate names and fixed expressions, which we keep in English, into German. The result is utter garbage, and completely unwanted. Especially since the underlying Google account does support multiple languages, and I have set both languages I speak there.
Specialist skills. AI is trained on data. Data pulls it towards the average. It does a good job, but only good. If great work is valuable, there's still a need for specialists.
Like right now, native mobile jobs are mostly unaffected by AI. Gemini, despite all the data in the community, doesn't do a decent job at it. If you ask it to build an app from scratch, the architecture will be off. It'll use an outdated tech stack from 2022. It will 'correct' perfectly good code to an older form, and if you ask it to hunt for bugs in cutting-edge tech, it might rip out the new stuff and replace it with old stuff. It often confuses methods whose names are shared across languages, like .contains().
But where very high-quality data is easily accessible (writing, digital art, voice acting, etc.), a field becomes viable for AI to clone. There's little animation data and even less oil-painting data, so something like oil painting will be far more resistant than digital art. It's top tier on Python and yet it struggles with Ren'Py.
This is a fairly simple task for a human, and Claudius has plenty of reasoning and financial data. But it can't reason its way into running a vending machine because it doesn't have data on how to run vending machines.
> No (real) customer has ever, or will ever, care about this. Discord and Slack are pretty much cases in point:
This is just flat-out false. Even my girlfriend, the least tech-interested person I know, complained to me about how it's possible that a damn chat app (Teams) is bad enough to make her entire computer feel slow.
So yeah, average users maybe don't hate Electron or React, but many people hate the bad user experiences these solutions often entail.
Sure, but “JavaScript [is] utterly broken, incapable of executing the simplest programs without errors” is a bit much. Even if I were completely out of touch, I find it hard to believe I’d say that about a language that people are obviously productive in (as much as I hate JS myself).