
Same. I've coded professionally my whole life. I've never enjoyed it as much as I'm doing now and I'm the most productive I've ever been.

I'm with you. I've used Windows 11 as my primary work OS since release and it is absolutely quicker than Windows 10 and nicer to use. I do, however, debloat it and remove all the cruft when I install it.

That's weird, I looked it up earlier and found the P6 (Pentium Pro) was the first to actually make the xor optimization into a zero clock operation.

https://fanael.github.io/archives/topic-microarchitecture-ar...


A few paragraphs down from that:

“I assume that the ability to recognize that the exclusive-or zeroing idiom doesn't really depend on the previous value of a register, so that it can be dispatched immediately without waiting for the old value — thus breaking the dependency chain — met the same fate; the Pentium Pro shipped without it.

Some of the cut features were introduced in later models: segment register renaming, for example, was added back in the Pentium II. Maybe dependency-breaking zeroing XOR was added in later P6 models too? After all, it seems such a simple yet important thing, and indeed, I remember seeing people claim that's the case in some old forum posts and mailing list messages. On the other hand, some sources, such as Agner Fog's optimization manuals say that not only it was never present in any of the P6 processors, it was also missing in Pentium M.”
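For readers unfamiliar with the idiom being discussed: XORing a register with itself always yields zero, which is why compilers emit the short `xor eax, eax` instead of the longer `mov eax, 0`. A minimal sketch of the underlying identity (the instruction names are x86; the dependency-breaking question above is whether the CPU recognizes that the result never depends on the register's old value):

```python
# The zeroing idiom rests on the identity x XOR x == 0 for every x.
# Compilers exploit it by emitting "xor eax, eax" (2 bytes) instead of
# "mov eax, 0" (5 bytes). A core that *recognizes* the idiom also knows
# the result is independent of the register's old value, so it can
# dispatch the instruction without waiting on the previous write --
# breaking the dependency chain, which is the feature the article says
# the Pentium Pro shipped without.
for value in (0, 1, 0xDEADBEEF, 2**32 - 1):
    assert value ^ value == 0
print("x ^ x == 0 held for all tested 32-bit values")
```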


Amazing demos!

What does it mean for an IP range to be "known" to be a VPN? Where are web site owners supposed to get this data?

(also it only affects web sites, so gopher is still good my friends)


I am throwing out a wild guess here, but some services already block ranges of IP addresses on the basis that they have VPN servers running.

Netflix, Hulu and Amazon Prime Video all do something along these lines to protect geographically restricted licenses.

Presumably, the authors of the bills were aware that these services were already doing it and want that same behavior to be legally required for obscene material.
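For a concrete picture of what "blocking known VPN ranges" amounts to in practice, here is a minimal sketch using Python's standard `ipaddress` module. The CIDR blocks below are made-up documentation ranges (RFC 5737), not a real VPN list; real services buy or compile such lists from commercial providers:

```python
import ipaddress

# Hypothetical list of CIDR blocks believed to host VPN exit servers.
# These are reserved documentation ranges, not real VPN data.
VPN_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/25"),
]

def is_vpn(client_ip: str) -> bool:
    """Return True if the client IP falls inside any listed VPN range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in VPN_RANGES)

print(is_vpn("203.0.113.7"))   # True: inside 203.0.113.0/24
print(is_vpn("192.0.2.1"))     # False: outside every listed range
```

The whole "knowing" question then reduces to who maintains `VPN_RANGES` and how stale it is.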


Serve it over FTP under the user imover18 with no password.

Don't know if that counts as age verification, but it's not a website!


It's not the first time that word appears in the bill; it specifies that the publisher needs to "knowingly and intentionally..." do the deed.

So there is no responsibility created to actively discern traffic (IANAL), but if you already have that info, you would be compelled to act upon it.

Again IANAL, but this could create responsibilities to avoid estoppel: for example, if you filter users from VPNs for purposes like reducing spam, then you receive the benefit of identifying VPNs, and you might be compelled to bear the responsibility of it.

Another nuance is that the VPN identification might be provided by a third party (see Cloudflare), in which case the provider would not KNOW; but then again, CF might bear the responsibility, considering that they would be the distributors with knowledge of the VPN lists.

I think the law plays out very sensibly.


Jyrki is highly talented. He's also the author of the incredible Jpegli (which seemed to be a reaction to Google deep-sixing JPEG XL), as well as Brotli, WebP lossless and WOFF2, among other things.

In some areas M is mille as in the Latin/French/Italian word for thousand, e.g.

https://en.wikipedia.org/wiki/Cost_per_mille
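For anyone unfamiliar with the unit, cost per mille is just price divided by impressions, scaled to blocks of 1,000. A quick illustrative calculation (the numbers are made up):

```python
def cost_per_mille(total_cost: float, impressions: int) -> float:
    """CPM: the price paid per 1,000 impressions."""
    return total_cost / impressions * 1000

# Paying $250 for 100,000 impressions works out to a $2.50 CPM.
print(cost_per_mille(250.0, 100_000))  # 2.5
```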


I thought it was an excellent article myself. Very thorough.

Thanks! It's hard writing into the void; any feedback is highly appreciated ;)

Sorry, Wojciech. My thoughtless internet mind took over.

Andy Bell is absolute top tier when it comes to CSS + HTML, so when even the best are struggling you know it's starting to get hard out there.

I don’t doubt it at all, but CSS and HTML are also about as commodity as it gets when it comes to development. I’ve never encountered a situation where a company is stuck for months on a difficult CSS problem and felt like we needed to call in a CSS expert, unlike most other specialty niches where top tier consulting services can provide a huge helpful push.

HTML + CSS is also one area where LLMs do surprisingly well. Maybe there’s a market for artisanal, hand-crafted, LLM-free CSS and HTML out there only from the finest experts in all the land, but it has to be small.


This isn't a bootcamp course. I don't think Andy's audience is one trying to convert an HTML course into a career wholesale. It's for students or even industry people who want a deeper understanding of the tech.

Not everyone values that, but anyone who will say "just use an LLM instead" was never his audience to begin with.


I think it's more likely that software training as an industry is dead.

I suspect young people are going to flee the industry in droves. Everyone knows corporations are doing everything in their power to replace entry level programmers with AI.


I'm afraid of what the future will look like 10+ years down the line after we've gutted humans from the workforce and replaced them with AI. Companies are going to be more faceless than they've ever been. Nobody will be accountable, you won't be able to talk to anyone with a pulse to figure out a problem (that's already hard enough). And we'll be living in a vibe coded nightmare governed by executives who were sold on the promise of a better bottom line due to nixing salaries/benefits/etc.

I don't think it will get that bleak, but it still is a good time to build human community regardless. This future only works for a broken society who can't trust their neighbor. You have the power to reverse that if you wish.

How do you measure „absolute top tier“ in CSS and HTML? Honest question. Can he create code for difficult-to-code designs? Can he solve technical problems few can solve in, say, CSS build pipelines or rendering performance issues in complex animations? I never had an HTML/CSS issue that couldn’t be addressed by just reading the MDN docs or Can I Use, so maybe I’ve missed some complexity along the way.

Look at his work? I had a look at the studio portfolio and it's damn solid.

If one asks you "Why do you consider Pablo Picasso's work to be outstanding", then "Look at his work?" is not a helpful answer. I've been asking about parent's way to judge the outstandingness of HTML/CSS work. Just writing "damn solid" websites isn't distinguishing.

To be frank, someone who needs to be told why to appreciate art probably isn't going to appreciate Picasso. You can learn art theory, but you can't just "learn" someone's life, culture, and expression. All the latter is needed to appreciate Picasso.

But I digress.

Anyways, I can't speak for the content itself, but from the trailer and description of the JavaScript course I can definitely tell that they understand the industry and emphasize that this is focused towards those wanting a deep dive on the heart of the web, not just another "tutorial on how to use the newest framework". Very few tech courses really feel like "low level" fundamentals these days.


Thank you for returning back to the original question. Being a good educator is something that can actually make someone "top tier", I agree.

On the other topic, I do not agree, as you have just proven: you explain very well why you appreciate Picasso. You thought I (or anybody) needed to be told why I/they should appreciate Picasso/OP. I don't care about that. But I'm very much interested in other people's reasoning behind their appreciation, especially when I consider something – like HTML and CSS – to be neither very complicated nor complex. On the other hand: that's what we love about Lumpito: simplicity. Right?


Being absolute top tier at what has become a commodity skillset that can be done “good enough” by AI for pennies for 99.9999% of customers is not a good place to be…

Which describes a gigantic swath of the labor market.

When 99.99% of the customers have garbage as a website, the 0.01% will grow much faster and topple the incumbents; nothing has changed.

Hmm. This is hand made clothes and furniture vs factory mass production.

Nobody doubts the prior is better and some people make money doing it, but that market is a niche because most people prioritize price and 80/20 tradeoffs.


> Nobody doubts the prior is better

Average mass produced clothes are better than average hand made clothing. When we think of hand made clothing now, we think of the boutique hand made clothing of only the finest clothing makers who have survived in the new market by selling to the few who can afford their niche high-end products.


> we think of the boutique hand made clothing of only the finest clothing makers

This one. Inferred from context about this individual’s high quality above LLMs.


Quality also varied over time, if I recall correctly. Machine made generally starts worse, but with refinement ends up better from superhuman specialization of machines to provide fine detail with tighter tolerances than even artisans can manage.

The only perk artisans enjoy then is uniqueness of the product as opposed to one-size fits all of mass manufacturing. But the end result is that while we still have tailors for when we want to get fancy, our clothes are nearly entirely machine made.


As we see with tech, mass production isn't an instant advantage in this market. In fact, something bespoke has a higher chance to stand out here than most other industries.

And no, I don't think people are seeking demand for AI website slop the way they do for textiles. Standing out is a good way to get your product out there compared to being yet another bloated website that takes 10 seconds to load with autoplay video generic landing text.

I'd liken it to Persona 5 in the gaming market. No one is playing a game for its UI. But a bespoke UI will make the game all the more memorable, and someone taking the time for that probably put care into the rest of the game as well (which you see in its gameplay, music, characters, and overall presentation).


I agree with all those points. But I also think there is a huge number of small business sites where AI CSS is good enough and sometimes might actually be better.

And that market may be a good chunk of existing contract work.


A lesson many developers have to learn is that code quality / purity of engineering is not a thing that really moves the needle for 90% of companies.

Having the most well tested backend and beautiful frontend that works across all browsers and devices and not just on the main 3 browsers your customers use isn't paying the bills.


If you're telling a craftsman to ignore their craft, your words will fall on deaf ears. I'm a programmer, not a businessman. If everyone took the advice of 'I don't need a good website', then many devs would be out of business.

Fact is, there are just fewer businesses forming, so there's less demand for landing sites or anything else. I don't see this as a sign that 'good websites don't matter'.


I think there's a difference between seeing yourself as a craftsman / programmer / engineer as a way to solve problems and deliver value, and seeing yourself as an HTML/CSS programmer. To me the latter is pretty risky, because technologies, tastes, and markets are constantly changing.

It's like equating being a craftsman with being someone who makes a very particular kind of shoe. If the market for that kind of shoe dries up, what then?


I sure hope no web dev sees themselves only as an HTML/CSS programmer. But I also hope any web dev who sees themselves as a craftsman can profess mastery over HTML/CSS. Your fundamentals are absolutely key.

It's why I'm still constantly looking at and practicing linear algebra as an aspiring "graphics programmer". I'm no mathematician, but I should be able to breathe matrix operations as a graphics programmer. Someone who dismisses their role as "just optimizing GPU stacks" isn't approaching the problem as a craftsman.

And I'll just say that's also a valid approach and even an optimal one for career. But courses like that aren't tailored towards people who want to focus on "optimizing value" to companies.
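To make the "breathe matrix operations" point above concrete, here is the kind of fundamental a graphics programmer leans on daily: applying a 2D rotation matrix to a point. A small self-contained sketch (the function name is my own illustration, not from any real engine):

```python
import math

def rotate2d(point, theta):
    """Apply the 2D rotation matrix [[cos t, -sin t], [sin t, cos t]]."""
    x, y = point
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)

# Rotating the point (1, 0) by 90 degrees lands, up to floating-point
# error, on (0, 1).
x, y = rotate2d((1.0, 0.0), math.pi / 2)
assert abs(x) < 1e-9 and abs(y - 1.0) < 1e-9
```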


Amazon has "garbage as a website" and they seem to be doing just fine.

> When 99.99% of the customers have garbage as a website

When you think 99.99% of company websites are garbage, it might be your rating scale that is broken.

This reminds me of all the people who rage at Amazon’s web design without realizing that it’s been obsessively optimized by armies of people for years to be exactly what converts well and works well for their customers.


>it’s been obsessively optimized by armies of people for years to be exactly what converts well and works well for their customers.

which can easily be garbage. it only has to be not garbage enough to not cause enough customers to shift enough spending elsewhere


>it might be your rating scale that is broken.

Or it could mean that most websites are trash.

>it’s been obsessively optimized by armies of people for years to be exactly what converts well and works well for their customers.

Yeah, sorry. I will praise plenty of Amazon's scale, but not their deception, psychological manipulation, and engagement traps. That goes squarely in "trash website".

I put up with a lot, but the price jumps were finally the trigger I needed to cancel Prime this year. I don't miss it.


Lots of successful companies have garbage as a website (successful in whatever sense, from Fortune 500 to neighbourhood stores).

Are they successful companies despite a bad website, or successful because they knew which corners could safely be cut?

I suspect it's the former.


Struggling because they're deliberately shooting themselves in the foot by not taking on the work their clients want them to take. If you don't listen to the market, eventually the market will let you fall by the wayside.

How many models are only trained on legal[0] data? Adobe's Firefly model is one commercial model I can think of.

[0] I think the data can be licensed, and not just public domain; e.g. if the creators are suitably compensated for their data to be ingested


> How many models are only trained on legal[0] data?

None, since 'legal' for AI training is not yet defined, but OLMo is trained on the Dolma dataset, which includes:

1. Common Crawl

2. Github

3. Wikipedia, Wikibooks

4. Reddit (pre-2023)

5. Semantic Scholar

6. Project Gutenberg

* https://arxiv.org/pdf/2402.00159


Nice, I hadn't heard of this. For convenience, here are the Dolma dataset and the HuggingFace models trained on it:

https://huggingface.co/datasets/allenai/dolma

https://huggingface.co/models?dataset=dataset:allenai/dolma

