What you're describing is the field completely changing over 3 years; that's no time at all for everyone to change their minds.
LLMs were not productized in a meaningful way before ChatGPT in 2022 (companies had sufficiently strong LLMs, but RLHF didn't exist yet to make them "PR-safe"). Then we basically just had to wait for LLM companies to copy Perplexity and bolt search engines onto everything (RAG already existed, but I guess it wasn't realistic to RAG the whole internet), and they became useful enough to replace StackOverflow.
Absolutely, and I try to keep that in mind. I tried to explicitly indicate that it's not a homogeneous group in terms of longevity, ethnic background, common ground, age, location, or frequency of contact, although my closest friends are mostly the same sex. I try to just start small, open myself up to new experiences and people, and then identify when we have good chemistry, which takes a bit of personal honesty. For every good friend, I probably have 3 acquaintances I also see regularly in group settings but am not super close with, and some of those might have become close friends but dropped off for whatever reason.
They're people I've typically known for at least 2 years, whom I trust, and whom I could DM for a hangout, drink with, or share a variety of common interests with; we can roast each other, be toxic, and give each other a pass as long as it's all in good humor.
The skills and traits that make me reasonably good at that don't help me with the kinds of activities that make people boring and successful in a nostalgic middle-class-suburbia sense, so it has tradeoffs, and it's not always easy, but on average it is. I used to be very shy, and probably wouldn't be this way if I'd stayed in my hometown, where everyone just keeps to themselves and sticks with their high school mates. I live in a relatively expensive city; I don't have a car, a house, kids, or many physical assets. I'm employed, but probably haven't been for more than half of my adult life, and I rent.
Spreading myself too thin beyond that isn't really sustainable until there's a good foundation of trust and it's no longer necessary to see each other every week. Some older friends I see once a year; others I've seen 4 times this week due to holiday parties; and some are people I know from my community. Events and heavy socializing come in bursts and can be tiring, so I'll just dip out for a while and refresh when necessary. I'm currently looking forward to that, but I have a wedding and a birthday coming up in the next 3 days lol
Incidentally, I've also never once known my immediate neighbors beyond a brief few conversations.
It's not a crime, but applying post-processing so generously that it goes well beyond replicating what a human sees does take away from what makes pictures interesting vs other mediums, imho: that they're a genuine representation of something that actually happened.
If you take that away, a picture is not very interesting: it's hyperrealistic, so not super creative a lot of the time (compared to e.g. paintings), and it doesn't even require the mastery other mediums demand to achieve hyperrealism.
Perhaps interestingly, many/most digital cameras are sensitive to IR and can record, for example, the LEDs of an infrared TV remote.
But they don't see it as IR. Instead, this infrared information just kind of irrevocably leaks into the RGB channels that we do perceive. With the unmodified camera on my Samsung phone, IR shows up kind of purple-ish. Which is... well... it's fake. Making invisible IR into visible purple is an artificially-produced artifact of the process that results in me being able to see things that are normally ~impossible for me to observe with my eyeballs.
When you generate your own "genuine" images using your digital camera(s), do you use an external IR filter? Or are you satisfied with knowing that the results are fake?
Silicon sensors (which is what you'll get in all visible-light cameras as far as I know) are all very sensitive to near-IR. Their peak sensitivity is around 900nm. The difference between cameras that can see or not see IR is the quality of their anti-IR filter.
In your Samsung phone, the green filter of the Bayer matrix probably blocks IR better than the blue and red ones, which would explain the purple-ish cast.
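A minimal numeric sketch of why that filter asymmetry would read as purple; the per-filter IR transmission fractions below are invented for illustration, not measured values:

```python
# Toy model: fraction of near-IR light passing each Bayer filter.
# These numbers are made up for illustration; real curves vary by sensor.
ir_transmission = {"r": 0.8, "g": 0.2, "b": 0.7}

def rgb_from_ir(ir_intensity):
    """RGB values the sensor would record for a pure near-IR source."""
    return tuple(round(ir_intensity * ir_transmission[c], 2) for c in "rgb")

r, g, b = rgb_from_ir(1.0)
# Red and blue dominate over green, which the eye reads as magenta/purple.
assert r > g and b > g
```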
Here's a random spectral sensitivity for a silicon sensor:
But the camera is trying to emulate how the scene would look to your eyes. For it to be "genuine", you would need not only the camera to be genuine, but also the OS, the video driver, the viewing app, the display, and the image format/compression. They all do things to the image that are not genuine.
It's just the "cognitive load" UX idea, with extremely non-technical people having extremely low limits before they decide to never try again, or just feel intimidated and never try to begin with.
People on HN are not regular users in any way, shape or form.
The advantages of having a single big memory pool per GPU are not as big in a data center, where you can just shard things between machines over the very fast interconnect and saturate the much faster compute cores of a non-Apple GPU from Nvidia or AMD.
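A rough sketch of what sharding buys you, with numpy arrays standing in for per-GPU memory (nothing here is a real multi-GPU API):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 4096))  # one big weight matrix
x = rng.standard_normal(1024)

# Column-shard W across 4 "devices"; each computes its slice independently,
# so no single device needs to hold the full matrix.
shards = np.split(W, 4, axis=1)
partials = [x @ s for s in shards]

# An all-gather over the interconnect stitches the result back together.
y = np.concatenate(partials)
assert np.allclose(y, x @ W)
```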
AI is threatening to remove humans from the whole equation except at the very top. It's an existential threat (not in the Terminator sense).
Especially for art. I'm an AI researcher myself (in bio, for health), but I think it's completely understandable that people want to help artists make a living and want to consume something that someone cared about.
This interpretation would have been fine for older-generation models without search tools enabled and without reliable tool use and reasoning. Modern LLMs can actually look up whether papers exist via web search, and with reasoning you can definitely get reasonable results by requiring the model to double-check that everything it cites actually exists.
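A hedged sketch of that double-check step; `web_search` is a hypothetical stand-in for whatever search tool the model is given, not a real API:

```python
# Sketch: split cited papers into ones the search tool can find and ones
# it can't, so the unfindable ones can be flagged as likely hallucinations.
def verify_citations(citations, web_search):
    verified, suspect = [], []
    for c in citations:
        hits = web_search(f'"{c["title"]}" {c["authors"]}')
        (verified if hits else suspect).append(c)
    return verified, suspect

# Toy stand-in for the real search tool, keyed on known titles only:
known_titles = {"Attention Is All You Need"}
fake_search = lambda q: [q] if any(t in q for t in known_titles) else []

ok, bad = verify_citations(
    [{"title": "Attention Is All You Need", "authors": "Vaswani et al."},
     {"title": "A Paper That Does Not Exist", "authors": "Nobody"}],
    fake_search,
)
assert len(ok) == 1 and len(bad) == 1
```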