I don't think the medium of short form video is redeemable. Its occasional positives are the delivery vehicle for the many negatives, the sugar coating around the poison. It's rotten by just about any measure: properties of the medium, what types of business it attracts, messages that thrive, how well it reflects reality, aggregate effects on people across a variety of outcomes, the aftertaste of using it, etc.
I think the best question to assess it is: does this make us better people or not, and to what degree? From what I have seen, it seems to be pretty significantly de-skilling us in attention, agency, nuance, psychological wellbeing, etc. It makes us more vulnerable to influence and manipulation. The businesses that benefit most from deploying it are advertising-based, which naturally leads to surveillance, algorithms, and a pace of consumption that maximizes addiction. The messages that perform best are emotional and attention-seeking. There is no information quality control. The consumption pattern it encourages leaves no room for thinking, processing emotion, or nuance.
The personal crusade I'm on is to build a competing product at the quality level of TikTok/Insta that diverts interest & attention toward books, a medium that is much more of a known quantity and whose consumption naturally results in longer attention spans, greater literacy, and all the nth-order consequences of written culture. It's great that things like BookTok exist, but ultimately that energy & activity needs to find its way over to a healthier home.
Normally the one-sentence-per-paragraph, LinkedIn-post-for-dummies writing style bugs me to no end, but for a technical article that's continually hopping between questions, results, code, and explanations, it fits really well and made for a very easy article to skim and understand.
It's action-thriller writing for something that in reality is super dull (my question is loaded with outdated cliches, but would you be telling a girl you're trying to impress at a party about this problem you faced of trying to push some data over the network?). I had to skim over it, like watching a YouTube video at 2x so I don't start evaluating how obnoxious the narrator is.
I'm not so sure; that may not have been what you meant, but that doesn't mean it's not what others read into it. The broader context is that HN is a startup forum, and one of the most common discussion patterns is 'I don't like it' as a stand-in for 'I don't think it's viable as-is'. Startups are default dead, after all.
With that context, if someone were to read your comment and be asked 'does this person think the product's model is viable in the long run' I think a lot of people would respond 'no'.
And this is a perfect example of how some people respond to LLMs, bending over backwards to justify the output like we are some kids around a Ouija board.
"The LLM isn't misinterpreting the text, it's just representing people who misinterpreted the text" isn't the defense you seem to think it is.
And your response here is a perfect example of confidently jumping to conclusions on what someone's intent is... which is exactly what you're saying the LLM did to you.
I scoped my comment specifically around what a reasonable human answer would be if one were asked the particular question it was asked with the available information it had. That's all.
Btw I agree with your comment that it hallucinated/assumed your intent! Sorry I did not specify that. This was a bit of a 'play stupid games, win stupid prizes' prompt by the OP. If one asks an imprecise question, one should not expect a precise answer. The negative externality here is that readers' takeaways are based on false precision. So is it the fault of the question asker, the readers, the tool, or some mix? The tool is the easiest to change, so it probably deserves the most blame.
I think we'd both agree LLMs are notoriously overly helpful and provide low-confidence responses to things they should just not comment on. That to me is the underlying issue - at the very least they should respond like humans do, not only in content but in confidence. It should have said it wasn't confident about its response to your post, and OP should have thus thrown its response out.
Rarely do we have perfect info; in regular communication we're always making assumptions that affect our confidence in our answers. The question is: what confidence threshold should we use? That's the question to ask before the question of 'is it actually right?', which is also important, but one I think LLMs are a lot better at than the former.
Fwiw you can tell most LLMs to update their memory to always give you a confidence score from 0.0 to 1.0. This helps tremendously, it's pretty darn accurate, it's something you can program thresholds around, and I think it should be built into every LLM response.
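To make that concrete, here's a minimal sketch of programming a threshold around such a score, assuming you've instructed the model to end every reply with a line like 'confidence: 0.85' (the instruction format, the regex, and the 0.7 cutoff are all illustrative choices of mine, not anything built in):

    import re

    CONFIDENCE_THRESHOLD = 0.7  # illustrative; pick per question/domain

    def extract_confidence(response: str) -> float | None:
        """Pull a trailing 'confidence: <0.0-1.0>' line out of a reply.

        Assumes the model was told (via memory/system prompt) to always
        append one; returns None if it didn't comply.
        """
        match = re.search(r"confidence:\s*([01](?:\.\d+)?)\s*$",
                          response, re.IGNORECASE)
        return float(match.group(1)) if match else None

    def accept(response: str, threshold: float = CONFIDENCE_THRESHOLD) -> bool:
        """Gate on the threshold: below it, throw the answer out."""
        score = extract_confidence(response)
        return score is not None and score >= threshold

    reply = "The capital of Australia is Canberra.\nconfidence: 0.95"
    print(reply if accept(reply) else "Low confidence; discarding response.")

Treating a missing score the same as a low score matters here: a model that stops complying with the instruction should fail closed, not slip through.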
The way I see it, LLMs have lots and lots of negative externalities that we shouldn't bring into this world (I'm particularly sensitive to the effects on creative industries), and I detest how they're being used so haphazardly, but they do have some uses we shouldn't discount, and we should figure out how to improve on those. The question is: where are we today in that process?
The framework I use to think about how LLMs are evolving is that of transitioning mediums. Movies started as a copy/paste of stage plays before filmmakers settled into the medium and understood how to work along the grain of its strengths & weaknesses to create new conventions. Speech & text are now transitioning into LLMs. What is the grain we need to go along?
My best answer is that the convention LLMs need to settle into is explicit confidence, and that every question asked of them should first prompt the question of what confidence threshold is acceptable. I think every question and domain will have different answers for that, and we should debate and discuss those alongside any particular answer.
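A toy sketch of what that per-domain convention could look like (the domains and numbers are placeholders for the debate I'm describing, not recommendations):

    # Illustrative only: the domains and numbers are placeholders for
    # the per-domain debate I'm describing, not recommendations.
    ACCEPTABLE_CONFIDENCE = {
        "casual trivia": 0.5,   # cheap to be wrong, low stakes
        "code review": 0.8,     # wrong answers cost real time
        "medical/legal": 0.95,  # should mostly refuse instead
    }

    def should_surface(domain: str, model_confidence: float) -> bool:
        """Only show the model's answer if it clears its domain's bar."""
        return model_confidence >= ACCEPTABLE_CONFIDENCE.get(domain, 0.9)

    print(should_surface("code review", 0.85))    # True
    print(should_surface("medical/legal", 0.85))  # False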
One of the few use cases for LLMs that I have high hopes for, and that I feel is still underappreciated, is grading qualitative things. LLMs are the first tech (afaik) that can do top-down analysis of phenomena in a manner similar to humans, which means a lot of important human use cases that are judgement-oriented can become more standardized, faster, and more readily available.
For instance, one of the unfortunate aspects of social media that has become so unsustainable and destructive to modern society is how it exposes us to so many more people and hot takes than we have the ability to adequately judge. We're overwhelmed. This has led to conversation being dominated by really shitty takes and really shitty people, who rarely if ever suffer reputational consequences.
If we build our mediums of discourse with more reputational awareness using approaches like this, we can better explore the frontier of sustainable positive-sum conversation at scale.
Implementation-wise, the key question is: how do we grade the grader and ensure it is predictable and accurate?
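One hedged sketch of an answer: keep a small human-labeled calibration set and only trust the grader while it keeps agreeing with it. llm_grade below is a hypothetical stand-in for a real model call, and the labels and the 0.9 bar are mine:

    def llm_grade(text: str) -> str:
        """Hypothetical stand-in for an LLM judge prompted with a rubric;
        a trivial keyword heuristic here so the sketch actually runs."""
        return "shitty" if "insult" in text.lower() else "good"

    # Small human-labeled calibration set (illustrative examples).
    calibration_set = [
        ("Thoughtful critique with sources", "good"),
        ("Drive-by insult, zero substance", "shitty"),
    ]

    def grader_agreement(examples) -> float:
        """Fraction of human labels the grader reproduces."""
        hits = sum(llm_grade(text) == label for text, label in examples)
        return hits / len(examples)

    MIN_AGREEMENT = 0.9  # illustrative bar for "predictable and accurate"
    if grader_agreement(calibration_set) < MIN_AGREEMENT:
        print("Grader drifted; re-evaluate before trusting its grades.")

Re-running this check periodically also catches drift when the underlying model or prompt changes out from under you.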
I grew up a few blocks from his funky Santa Monica house [1], passed by it all the time. When you’re a kid you typically see wild new things like that as just normal because you have no context for how unusual they are. His house defied that perspective; even as a kid you understand that being wrapped in oddly angled chain link fences and corrugated metal is just... different. It's an unanswered question, a loose thread, a thing you can't unknow.
I don't particularly like the house - it's meant to be challenging, not beautiful - but with perspective I now see there aren't many creations out there that achieve the state of eternal confusion it does for me. I see his other works like Bilbao [2] and Disney Hall as refinements on the concept with the added dimension of beauty. They're not quite as memorable, but I think they do a great job exploring the frontier of beauty and befuddlement.
The Santa Monica spot was, personally, a bit of an eyesore after about 8 years. I kept wishing someone else would rise to the flamboyance, but nobody ever really did. Well, I'm wrong of course, but I never did see such a striking spot until I got to Europe, or whatever...
I'm a Hacker News reader and relate to the community. At the same time, it's not like I have the same interest and energy in every niche. For example, I might have interest in custom-building my keyboards, but not in restoring an old router. It's not like HN users exclusively use Linux desktops; many of us prefer simplicity.
The point I'm trying to make is that there is more nuance than a simple HN user stereotype allows.
Very much not true. I'm a seasoned network engineer, and very comfortable deep in the Linux network stack. I still have no interest in doing that for 7-10 hours during work and then having to do it for another 1-3 hours after work to get things working properly. I use UniFi products because they're dead simple and work well.
I would challenge you to try supporting your family on a home network that does esoteric things; the juice is not worth the squeeze. I can give my wife the UniFi login and she can figure things out well enough on her own, and it lets us easily integrate networked devices that don't themselves serve network connectivity (e.g. IP cameras) into our day-to-day as well.
Do I think UniFi is bar none the best gear? Absolutely not. Do I think it's a "good deal"? Maybe. Do I think it's better than the alternative uses of my time, that I'd rather spend doing other stuff? Abso-fucking-lutely, which is why I have been buying it for years.
Discoverability is quite literally the textbook problem with CLIs, in that many textbooks on UI & human factors research over the last 50 years discuss the problem.
With 1B+ users, Apple isn't in a position to do the typical startup fast & loose order of operations. Apple has (rightly) given themselves the responsibility to protect people's privacy, and a lot of people rely on that. It'd be a really bad look if it turned out they made Siri really, really useful but then hostile govts all got access to the data and cracked down on a bunch of vulnerable people.