Macs fundamentally can't be personal computers, since they're entirely controlled by Apple. Any computer running nonfree software can't be a personal one.
So? You can replace the ROM chip (or flash it, if it's an EEPROM). The whole point of free software is that you don't have to limit yourself to what the manufacturer says you can do.
I think the core of the issue is that Mozilla is thinking big. They're not happy to serve a niche well (which is what the majority of comments on Mozilla-related posts generally ask them to do); they want to get back to their glory days and capture the mainstream.
And that is tough. Chrome won because it was, at the time, a superior product, AND because it had an insane marketing push. I remember how it was just everywhere. Every other installer pushed Chrome on you, as did all the Google properties; it was all over the (tech) news, aggressively shaping new standards, etc. Not something Mozilla can match.
But they just won't give up. I don't know if I should applaud that or not, but I think it's probably the core of the disconnect between Mozilla and the tech community. They desperately want to break into the mainstream again, their most vocal supporters want them to get a reality check on their ambitions.
If I were running Mozilla, I'd probably go for the niche. It's less glamorous, but serving a niche is relatively easy: all you have to do is listen to users and focus on the stuff they want and/or need. You generally get enough loyalty to be able to move a bit behind the curve, see what works for others first, then do your own version once it's clear how it'll be valuable to the user base. I'd give this strategy the highest chance of long-term survival and impact.
Mainstream is way tougher. You kinda need to make all kinds of people with different needs happy enough, and get ahead of where those wants and needs are going.
One could argue they could do both: Serve a niche well with Firefox and try to reach the mainstream with other products. I think to some degree they've tried it, with mixed results.
The mainstream players didn't get mainstream by striving to be mainstream. They got there by serving a niche well and then expanding that niche. Trying to go mainstream without a niche moat will leave you lagging behind the establishment endlessly.
I'm not an Apple fan (rather an Apple hater, if you will), but they are a perfect example of this. First, have a top-quality niche product, then go into the big waters with the vision you got from the niche - and then people will actually be willing to give up bells and whistles for a product that is good enough.
Mozilla have a well-established niche with a vision, but they can't monetize it without giving up that vision (and apparently consider opening up to small direct donations, or maybe even direct bug/feature crowdsourcing, not worth it). So they keep jumping on every sidetrack. And keep losing even the niche they have.
Quite a fascinating adventure, even if it's not continuous.
Good teaching moment for why estimates of big endeavours tend to be off, too. He appears to have slightly overestimated his average walking speed and greatly underestimated breaks (only some of which were by choice from what I gather).
The total journey appears to be 58,000 km (36,000 miles).
Expectation: 8 years, which translates to a daily average of almost 20 km (~12.5 miles). That's about 4-6 hours of walking time at my speed. Every. Single. Day. In sickness or in health, on country roads or through frozen wastelands. Seems optimistic even without anticipating any delays?
Reality: After 8 years, he had actually finished about half the distance, which I already find impressive. As of October, he has 2,213 km (1,375 miles) left. That means he traveled 55,787 km (34,664 miles) in around 27 years. That puts him at a daily average of almost 6 km (~3.7 miles), so probably 1-2 hours of daily walking time. That's actually not bad considering all the delays, but quite a bit less than anticipated.
New estimate: He expects to be home "by 2026", let's say January. Based on that premise, his new estimate is that he will walk 2,213 km in ~4 months. That's a bit more than 17 km (~10.5 miles) per day. Relatively close to his original, comparatively uninformed estimate, funnily enough.
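For anyone who wants to sanity-check the arithmetic, here's a quick back-of-the-envelope sketch in Python, using the figures above (the 8 years, ~27 years and "~4 months" are the rough timeline assumptions from the post):

```python
# Back-of-the-envelope check of the walking averages discussed above.
total_km = 58_000
remaining_km = 2_213
walked_km = total_km - remaining_km          # 55,787 km covered so far

planned_daily = total_km / (8 * 365.25)      # original plan: ~19.8 km/day
actual_daily = walked_km / (27 * 365.25)     # reality so far: ~5.7 km/day
final_daily = remaining_km / (4 * 30.4)      # new estimate: ~18 km/day

print(f"planned: {planned_daily:.1f} km/day, "
      f"actual: {actual_daily:.1f} km/day, "
      f"final stretch: {final_daily:.1f} km/day")
```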
All that said, I don't think I'd have the willpower to see this through, especially considering all the setbacks. Mighty impressive.
In my experience, "font" is the colloquial term referring to either. Programmers get to demand precision, for journalists it's a bit tougher. The de facto meaning of terms does, unfortunately, evolve in sometimes arbitrary ways. And it's tough to fight.
That would be considered a derivative work of the C code, therefore copyright protected, I believe.
Can you replay all of your prompts exactly the way you wrote them and get the same behaviour out of the LLM-generated code? In that case, the situation might be similar. If you're prodding an LLM to give you a variety of results and picking what you like, probably not.
But significantly editing LLM generated code _should_ make it your copyright again, I believe. Hard to say when this hasn't really been tested in the courts yet, to my knowledge.
The most interesting question, to me, is who cares? If we reach a point where highly valuable software is largely vibe coded, what do I get out of a lack of copyright protection? I could likely write down the behaviour of the system and generate a fairly similar one. And how would I even be able to tell, without insider knowledge, what percentage of a code base is generated?
There are some interesting abuses of copyright law that would become more vulnerable. I was once involved in a case where the court decided that hiding a website's "disable your ad blocker or leave" popup was actually a case of "circumventing effective copyright protection". In this day and age, they might have had to produce proof that it was, indeed, copyright protected.
"Can you replay all of your prompts exactly the way you wrote them and get the same behaviour out of the LLM generated code? In that case, the situation might be similar. If that's not the case, probably not." Yes and no. It's possible in theory, but in practice it requires control over the seed, which you typically don't have in the AI coding tools. At least if you're using local models, you can control the seed and have it be deterministic.
That said, you don't necessarily always get a 100% deterministic build when compiling code either.
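For what it's worth, here's a minimal sketch of what I mean by controlling the seed with a local model, using llama-cpp-python (the model path is just a placeholder):

```python
from llama_cpp import Llama

# Placeholder path; any local GGUF model is loaded the same way.
llm = Llama(model_path="./models/example-7b.gguf", seed=42)

# With a fixed seed and temperature 0 (greedy sampling), repeated runs of the
# same prompt should produce the same completion on the same hardware/build.
out = llm("Write a function that reverses a string in Python.",
          max_tokens=256, temperature=0.0)
print(out["choices"][0]["text"])
```

Even then, bit-identical output is only something you can expect on the same hardware and build, which is basically the same caveat as with reproducible builds above.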
That would be interesting. I don't believe getting 100% the same bytes every time a derivative work is created the same way is legally relevant, though. Take filters applied to copyright-protected photos - the output might not be the exact same bytes every time you run it, but it looks the same and it's clearly a derivative work.
So in my understanding (not as a lawyer, but someone who's had to deal with legal issues around software a lot), if you _save_ all the inputs that will lead to the LLM creating pretty much the same system with the same behaviour, you could probably argue that it's a derivative work of your input (which is creative work done by a human), and therefore copyright protected.
If you don't keep your input, it's harder to argue because you can't prove your authorship.
It probably comes down to the details. If your prompt is "make me some kind of blog", that's probably too trivial and unspecific to benefit from copyright protection. If you specify requirements to the degree where they resemble code in natural language (minus boilerplate), that's a different story, I think.
(I meant to include more concrete logic in my post above, but it appears I'm not too good with the edit function, I garbled it :P)
Google, Meta and Microsoft would have to compete on demand, i.e. users of the chat product. Not saying they won't manage, but I don't think the competition is about ad tech infrastructure as much as it is about eyeballs.
It might take Microsoft's Bing share, but Google and Meta pioneered the application of slot-machine variable-reward mechanics to Facebook, Instagram and YouTube, so it would take a lot more than competing on demand to challenge them.
Agreed. If you "create demand", it usually just means people are spending on the thing you provide, and consequently less on something else. Ultimately it goes back to a few basic needs, something like Maslow's hierarchy of needs.
And then there's followup needs, such as "if I need to get somewhere to have a social life, I have a need for transportation following from that". A long chain of such follow-up needs gives us agile consultants and what not, but one can usually follow it back to the source need by following the money.
Startup folks like to highlight how they "create value", they added something to the world that wasn't there before and they get to collect the cash for it.
But assuming that population growth will eventually stagnate, I find it hard not to ultimately see it all as a zero-sum game. A limited number of people with limited time and money means limited demand. What companies ultimately do is fight each other for that. And when the winners emerge and the dust settles, supply can go down to meet the demand.
It's not a zero-sum game. Think of an agronomist who visits a farm and instructs the farmer to cut a certain plant for the animals to eat at a certain height instead of whenever; the plant then provides more food for the animals purely because of that, with no other input into the system. Now the animals are cheaper to feed, so the farmer makes more profit and people get cheaper food.
It would be if demand were limited. Let's assume the people already have enough food and the population is not growing - that was my premise. Through innovation, one farmer can grow more than all the others.
Since there already was enough food, the market is saturated, so it would effectively reduce the price of all food. This would change the ratio so that the farmer who grows more gets more money in total, and every other farmer gets a bit less.
As long as there is any sort of growth involved - more people, more appetite, whatever, it would be value creation. But without growth, it's not.
At least not in the economic sense. Saving resources and effort that go into producing things is great for society, on paper. But with the economic system that got us this far, we have no real mechanism for distributing the gains. So we get oversupplying producers fighting over limited demand.
The world is several orders of magnitude more complex than that example, of course. But that's the basic idea.
That said, I'm not exactly an economist, and considering it's a bleak opinion to hold, I'd like to learn something based on which I could change it.
Late comment, but if technology brought down the price of food, then people could spend less on food and more on other goods and services. Or the same amount on higher-quality food. You don't need an increasing population for that. The improvement in agriculture could mean some farmers would have to find other work. So you can have economic growth with a stagnant or falling population. And you can rather easily have economic growth on a per-capita basis with no overall GDP growth, as is common in Japan today.
About the farmer needing to change jobs, in the interview that is the subject of this thread Ilya Sutskever speaks with wonder about humans' ability to generalize their intelligence across different domains with very little training. Cheaper food prices could mean people eat out or order-in more and then some ex-farmers might enter restaurant or food preparation businesses. People would still be getting wealthier, even without the tailwind of a growing population.
What's wrong with Minecraft? I'm not exactly a Microsoft fan, but am pretty impressed with how little control they have so far exerted over Java Edition, they even made modding easier recently by removing obfuscation. You can run your own server as much as you want with no fees, obligations or anything. And unless the kids know a server address, they can't easily join some third party server with weird stuff going on. Not that I ever heard of one of those, but I'm sure they must exist.
As others have said there's a big difference between the Minecraft "we" (the tech community growing up on Beta versions of Minecraft) know and the Minecraft of today.
The subsequent versions also developed the game mechanics a lot to turn it into something closer to an RPG than the early, bare-bones sandbox game with minimal, well-understood mechanics and the rest purely up to the players' creativity.
There's nowadays an abundance of pay-to-win servers with custom mechanics to enable that, and I'm sure a lot of unsavory people preying on children. The social media (YouTube etc.) community around it has exploded too, in a way I don't recall from before (I used to be into Minecraft videos back in the ~2012 era, and what I see nowadays grosses me out in comparison).
Java Minecraft (the old version, moddable, self-hostable) and Bedrock Minecraft (Windows/console only, no self-hosted servers) are quite different.
The sad thing is that Bedrock is super simple. You can get it on any tablet, pay like 5-6€ a month to Microsoft for a "realm" and you can play with 3-4 friends online. They can join even if they don't pay, any mobile device, console or windows PC will work.
With Java you need someone to host the server or pay for hosting, then you need to give the address to your friends and have them connect. Then you need to worry about whitelists, because there are multiple services scanning for open Minecraft servers and people WILL come in and fuck your world up (happened to me).
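In case it saves someone the same grief: the fix is to turn on the whitelist before the world goes public, and then add players with the /whitelist add <name> console command. A rough sketch below flips the relevant flags in a stock Java Edition server.properties (the property names are from memory, so double-check them):

```python
# Rough sketch: enable the whitelist in an existing server.properties file.
# Assumes a standard Java Edition server directory; restart the server after.
from pathlib import Path

props_path = Path("server.properties")
lines = props_path.read_text().splitlines()

settings = {"white-list": "true", "enforce-whitelist": "true"}
out = []
for line in lines:
    key = line.split("=", 1)[0]
    if key in settings:
        out.append(f"{key}={settings.pop(key)}")  # overwrite existing value
    else:
        out.append(line)
out += [f"{k}={v}" for k, v in settings.items()]  # append any missing keys

props_path.write_text("\n".join(out) + "\n")
```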
It really does, by any definition I've ever heard. I suppose the authoritative one would be [1].
A common "trick" for commercial open source software is to use a copyleft license, which restricts redistribution as part of commercial products, and to offer a paid license to get around that.
> Many people believe that the spirit of the GNU Project is that you should not charge money for distributing copies of software, or that you should charge as little as possible—just enough to cover the cost. This is a misunderstanding.
> Actually, we encourage people who redistribute free software to charge as much as they wish or can. If a license does not permit users to make copies and sell them, it is a nonfree license.
Fascinating. From skimming that, it does indeed appear to be within the GNU philosophy to distribute source code solely in exchange for payment. It doesn't cover the case where the source code has _already_ been distributed, though; then it's free to run.
And even if the source code was only distributed to paying customers, that'd likely be a temporary situation. A relevant quote:
"With free software, users don't have to pay the distribution fee in order to use the software. They can copy the program from a friend who has a copy, or with the help of a friend who has network access."
I do read the GPLv3 such that if someone _does_ buy the code in any fashion, you must provide the Corresponding Source to them for no more than the cost of delivery (or at no charge over a network). Relevant excerpt from section 6:
"[...] give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge."
But yeah, no obligation to provide the source code for free to non-customers, fair point. Just no ability to stop customers from sharing it with non-customers. Does make sense.
> > Actually, we encourage people who redistribute free software to charge as much as they wish or can. If a license does not permit users to make copies and sell them, it is a nonfree license.
This is interesting. If it had a limitation on reselling or a non-commercial / non-compete clause, it'd be almost perfect.
Today lots of companies come in, take open source software and "steal" the profits. (You could argue that calling it theft is invalid, since the license allows for this.) This makes it hard for the authors to build a durable business, and certainly difficult to build into a large-scale company.
Open source needs a better mechanism for authors to make money with what they create while still enabling user freedom to do what they want with the software - modify, reuse, publish changes, etc.
"Open core" is one strategy, but it feels like stepping around limitations in the license. Just spelling out "we want to make money in a defensible way" and giving user freedoms seems like a step in the right direction. More companies would probably opt to share their code if this happened.
Nothing in that "authoritative" definition says you cannot charge for binaries, for example. It's talking mainly about the source code itself. Something where you just publish the source but charge for everything else would be fair game and still "open source" by that definition.
I was responding to parent's question though: "Can you call it open source if you need a subscription license to run / edit the code?"
I'd say no. If you have the code in front of you, it shouldn't require a license to run. Even if the whole point of the open source software is to interact with a proprietary piece of software or service, you could still run it for free, it probably just wouldn't have much utility.