Inside The Decline of Stack Exchange (thediff.co)
144 points by gadders on Aug 14, 2023 | 166 comments


I have never understood why people have so much contempt for the SO/SE network of websites. To me this network is on par with Wikipedia in terms of its contribution to mankind. This is the internet itself. Granted, I have only ever asked a handful of questions there, so all I have is anecdotes to share, but the community was very welcoming and straight to the point. Maybe it has changed now? I haven't asked any questions in a couple of years. A few anecdotes: I am not a native English speaker, and the English Language Stack Exchange was a godsend for me, because sometimes my teacher could not satisfy the kinds of questions I had, and I don't even know how I could ever explain the problem to a machine (LLMs). I am not a biology student, but at one point I was just curious about the functioning of the human eye, so I tried my shot at Biology SE and had a wonderful discussion. I have asked a few philosophical questions on Physics SE as well, with a similar experience.

In my programming job I consult SO multiple times, and I have never once felt the kind of hostile discussion environment that people try to portray. SO is very valuable to me, especially for questions that have less to do with documentation and more with the essence of programming itself. It disgusts me that in the future I will have to communicate with a brainless LLM instead, where I have no recourse if its answer doesn't satisfy me, nor do I have any confidence in the correctness of its word vomit.


It's a hard balance. New users are probably already at the end of their rope when they decide they need to actually ask a question. They often aren't aware that their question was already asked because they don't have the words or experience to sufficiently describe their problem. Then someone points that out which they need to do if there's going to be any order or cleanliness on SO. I think the original askers don't receive that very well because it's often pointed out by some automated looking message like "your question is a duplicate and is closed".

Everybody is trying to do their best, but the new askers don't have the experience to know not to ask their question, and the experienced SO users have seen that question asked a dozen times already.


I mostly agree, but also, the moderation has gone from great to pedantic, and further to poor enforcement of nonsensical rules.

I have had questions closed as duplicates where the "original" asked something else. Questions about choice of framework are apparently off topic on the Software Engineering Stack Exchange -> closed. Weirdo moderators / keen users butting in on active, sane questions with "I'm voting to close because [random nonsensical rule]".

I feel bad writing it, as I suspect most moderators on SO are great, but this is nonetheless my overwhelming experience of the past 12 months.

EDIT: I forgot about the patronizing users who didn't read the question, but don't let that stop them from lecturing you. On SO rules, life mistakes...


I had a guy vote to close one of my 5-year-old questions that had an answer and plenty of upvotes at that point. It’s clearly helping people out but yeah let’s delete it because you want a badge (I’m not sure if you get a badge for that)


Who shows contempt for SO? I don't see much of that, personally. Whenever it goes down, I see a lot of jokes about how nobody can do their job. The linked article agrees with you that it's "one of the greatest compendia of human knowledge ever produced".


Almost every time I had a coding-type question for SO, I would diligently search through previous SO answers and 98% of the time I could find the answer to my question by doing so - but it would always take a long time. LLMs are just a lot faster, and while double-checking their output is necessary, that's a much quicker process than trying to hunt down answers starting from little knowledge.

There's also the issue of people trying to improve their SO score because that can impact things like job interviews, at least on the programming side, and as with any such system gaming it for high reputation (points for questions, answers, upvotes) has some side effects on people's behavior that aren't always pleasant. There's also a saturation effect, e.g. many common questions have already been asked, and the 'archive of truth' mentality means that you're supposed to search past answers before asking anything, which again tends to take a lot longer than ChatGPT does.


I've used SO to look up problems a lot, but I've never gotten a good answer out of it myself. Either it's marked as a duplicate of a similar but different question, or I don't get any replies and a few downvotes. To me it's an entirely read-only site, partially because of its relatively high barrier to commenting on solutions.


That’s the beauty of using ChatGPT for programming questions. You don’t have to have confidence in the answer. You can easily verify correctness by running the code it outputs and testing it.

That being said, I realize you do have to be careful about subtle bugs.

My canonical example is “write a Python script that returns all of the AWS IAM roles that contain a given list of policies”.

The code it usually generates works correctly as long as you don’t have more than 50 roles in your account. It won’t add pagination support unless you spot the bug and tell it.
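The pagination trap described above can be sketched as follows. This is a minimal illustration, not the thread author's actual script: the function names (`roles_with_policies`, `has_all_policies`) are mine, and it assumes boto3 with working AWS credentials. The key point is that `list_roles` returns only the first page of results unless you follow the pagination token, which boto3's paginators do for you.

```python
def has_all_policies(attached, required):
    """Pure check: does the role's attached-policy set cover the required set?"""
    return set(required) <= set(attached)

def roles_with_policies(required_policies):
    """Return IAM role names that have all of the given managed policies attached."""
    import boto3  # imported here so the pure helper above works without AWS installed

    iam = boto3.client("iam")
    matching = []
    # The naive version calls iam.list_roles() once and silently misses every
    # role past the first page. The paginator follows the Marker token so all
    # roles in the account are examined.
    for page in iam.get_paginator("list_roles").paginate():
        for role in page["Roles"]:
            attached = []
            for ppage in iam.get_paginator("list_attached_role_policies").paginate(
                RoleName=role["RoleName"]
            ):
                attached += [p["PolicyName"] for p in ppage["AttachedPolicies"]]
            if has_all_policies(attached, required_policies):
                matching.append(role["RoleName"])
    return matching
```

This is exactly the kind of subtle bug mentioned: the code "works" on small accounts, so nothing in testing reveals that results are being truncated.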

The second question I have is: what types of questions do you find are better suited for Stack Overflow than ChatGPT? Are they questions based on post-2021 knowledge?


> You can easily verify correctness by running the code it outputs and testing it.

This is not always as easy for all types of questions. It's hard to come up with an example on the spot, but I tried a few queries I searched on SO in the last week, and found one to demonstrate. I was searching for "Where to store JWT in browser?". This is the SO answer [1] for reference. Now to prove my point, I ask the question to ChatGPT (3.5). Here's the chat [2].

At first glance it looks like ChatGPT may have nailed it, though the information dump is HUGE. Among the six options it suggests, the last option, "secure cookies," looks "correct". It is indeed correct that this can prevent an XSS attack, but it is not complete, because it still does not entirely prevent an XSRF attack. So I had to explicitly prompt it to think about XSRF, and its response is weird. At first it incorrectly claims that the XSRF attack is mitigated, but then in the response body it elaborates that we also need anti-XSRF tokens for complete protection. So I don't know what to make of it. Contrast this with the SO answer, which is way more direct.
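For reference, the "secure cookie plus anti-XSRF token" combination discussed above can be sketched roughly like this. This is my own illustrative sketch, not from the SO answer or the chat; the function names are made up, and real apps would use their framework's cookie/CSRF facilities instead.

```python
import secrets

def jwt_set_cookie_header(jwt_token):
    # HttpOnly: the cookie is invisible to JavaScript, so an XSS payload
    # cannot exfiltrate the token. Secure: sent over HTTPS only.
    # SameSite=Strict: withheld on cross-site requests, which already blunts
    # XSRF, but the browser still attaches the cookie automatically on
    # same-site requests, so a separate anti-XSRF token is still advisable.
    return f"auth={jwt_token}; HttpOnly; Secure; SameSite=Strict; Path=/"

def csrf_token():
    # Random token the server stores (or signs) and the client must echo back
    # in a request header; an attacker's forged request can't supply it.
    return secrets.token_urlsafe(32)
```

The point the SO answer makes directly, and ChatGPT only circles around: the HttpOnly cookie addresses XSS token theft, while the explicit token addresses XSRF, and you need both.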

Honestly, ChatGPT's answer looks like that of a student who is trying to impress some examiner with their knowledge dump by beating around the bush rather than trying to precisely answer the actual question.

[1]: https://stackoverflow.com/questions/27067251/where-to-store-... [2]: https://chat.openai.com/share/c26fee93-5d3d-48e2-a820-297974...


In general, when people say ChatGPT on HN, do they mean 3.5 (the free version) or 4.0 (the paid version)?

Which is it you are referring to here?


You're right. But I think it can really vary based on which community you ask in.

I think most people dislike how hostile the main stack overflow site is to beginners, who despite being the ones who need the most help, are the most likely to get their question closed/ downvoted. It's unfortunately necessary to maintain the high quality of the site.


> I think most people dislike how hostile the main stack overflow site is to beginners, who despite being the ones who need the most help, are the most likely to get their question closed/ downvoted. It's unfortunately necessary to maintain the high quality of the site.

It's not just hostile to beginners. It's also quite hostile to experienced programmers (at least those subject to less than ideal design constraints), who are incorrectly treated like newbies and force-fed cookbook answers that fail to answer the actual question asked.

For example. Here's a paraphrase of some experiences I've had on Stack Overflow (obfuscated to preserve my anonymity):

> Me: I need to know about the implementation of this weird function in this weird proprietary language. I need to re-implement its output, which was (unwisely) directly exposed in our system's output and is depended on by our integrations. I know this sucks, but it's the problem I have to solve.

> StackOverflow: That's a bad design. You should use the XYZ function in Python instead.

> Me: You're not answering my question. I know it's a bad design, but I can't change it. The output is fixed, and I have no authority to force a change. If I can't re-implement the function, I'm stuck using the weird language forever.

> StackOverflow: You should use XYZ function in Python. If you can't, talk to your architect and get budget allocated to rewrite all the downstream systems.

> Me: You're no help at all. Why did you even bother answering?

> Me (thinking to myself): Wait, I know: chasing after internet points and trying to feel high and mighty.

> StackOverflow: What if a newbie found this question? If I don't tell you to use the XYZ function in Python, they might do something else instead! Like design and implement an entire less-than-ideal system!

> Me (thinking to myself): Good grief.


Looking at that graph, though, what it tells me is that Stack Overflow was entirely unaffected by ChatGPT, and the giant spike is just humanity collectively jumping on ChatGPT in addition to folks using Stack Overflow in exactly the same way as before?

Same for that "answers over time" graph: that's literally what you'd expect for a knowledge base. There has to be an initial trend upwards as the site starts to accumulate information (i.e. questions with answers, not just questions) and initially gains popularity as a place to get help by asking questions. But, as the number of questions and answers increases, the utility changes from a place to "get help by asking questions" to "a place to get help by searching for answers". At that point, the curve levels off, and as the number of typical beginner and intermediate questions die off because they've already been asked and answered many times, the number of new questions has to trend downward. It'll be slow, because we're constantly inventing new programming languages, libraries, frameworks, etc, but the majority of questions for those are simply duplicates of things asked about previous languages, libraries, and frameworks, with their answers being perfectly usable by folks looking for answers.

And this is exactly the graph we see: rather than a sign of decline, it's a graph that shows the change from a predominantly Q&A site to predominantly being a knowledge repository.

ChatGPT will, of course, have some impact on the graph, but the number of posts on Stackoverflow today that are "How do I X? I asked ChatGPT but that didn't help me" is staggering. ChatGPT may have initially driven down searches across the rest of the web, from Google to Stackoverflow to your mom's recipe website, but the novelty's already worn off, and overall it's done precious little to cut into the utility of Stackoverflow, as far as this data suggests.


It's a negative dynamic that starts from the popular, "low-hanging fruit" kinds of questions, like "how do you subtract 2 dates in framework X". Such a question would earn many points for both the OP and the answerers. The best answer to a question like that is short, unambiguous, and concise. Consequently, you get high-reputation users skilled in answering popular questions. Then the popular questions mostly run out, people start asking questions about a particular problem hidden in their piece of code, and high-rep users sneer at those because they see them as "not researched".


> high-rep users sneer at it

That sneering, and the general idea that some questions are worthy and others not, is one of the most off-putting things about SE, followed by duplicative and unhelpful me-too answers that are very obvious reputation farming. When I took up programming again after a long hiatus I found SE very helpful at first, but got sick of it within a matter of months because the meta game* is horrible.

* The social dynamics in a community-driven website that are wholly orthogonal to, and often end up subverting, the site's stated purpose by leveraging the stated ethos and decision infrastructure in pursuit of selfish ends. Other examples include Wikipedia edit wars or abusive forms of legalism and political brinksmanship.


The biggest problem with SE for me, and this is related to the culture issues you're talking about, is that the site has no good way of deprecating "formerly correct" answers. Even if a better, more correct answer is posted later, the reputation system has a huge incumbency bias in favor of older answers that have accumulated upvotes by being the best available answer at the time.

Their knowledge repository is slowly rotting under the weight of having to ask every time "okay, is this correct-sounding, highly upvoted answer actually (still) correct, or is it 10 years out of date?"


Yeah this always turned me off of contributing in earnest to SE. The purpose is supposed to be a general purpose site with an answer to _all_ questions, but in practice it's just a site with answers to questions an expert might have. Almost everything below that level of technicality gets culled.


Not necessarily "not researched", rather "too localized". SE is not a place to ask to solve your specific problem, which will only litter the website because it will never help anyone else. Debugging and solving logical errors is one subset of that. Once the problem is about some kind of misunderstanding how some interface works, it can help others who will have a similar problem. And of course a high-level design problem is also a good question.


Maybe there needs to be a separate "bunny slope" SO site, and such questions just get bounced over there, where they are topical. Though I don't know whether anyone who could write good answers would spend time doing so. The people for whom those are still interesting questions are not the ones everyone else should get advice from.


Yeah, I typically just point to a non-SE forum in such too-localized cases.


As a longtime contributor and moderator on SO this is, unfortunately, very true. The community has always been toxic, being a newbie on SO requires a really thick skin or being willing to withstand a lot of abuse to use the site (if you're asking new questions).

The fact that there's no off-topic space also makes it nearly impossible to "build" a community out of the site; it's all too focused on just questions and answers, and I feel no need to go back to talk to anyone because I've never met anyone at SO, while I made many longtime friends in other forums.

It also saddens me that SO has killed all these smaller and more or less insular communities. I see very little movement on the old mailing lists and forums that are still up; all the questions moved to SO in one way or another (even for non-English-speaking communities), so I'm not sure the site has been positive for the tech community in general.

I had so much coaching and help from people in the local and regional communities I was a part of when I was in college and later starting my career, and now there's nothing like it around anymore because SO sucked all the oxygen away from them. Not sure where I'd be if I were starting right now.


> At that point, the curve levels off, and as the number of typical questions die off because they've already been asked and answered many times, the number of new questions has to trend downward

Yes, absolutely, and the article addresses precisely that point. But the questions graph shows a drop of about 40% at exactly the point in time when ChatGPT launches, which is much too extreme to explain away like that.


Except the graph cuts off before 2023, so it tells us nothing about the actual effect it had; it cherry-picks the initial drop-off (or worse, they just used https://insights.stackoverflow.com/trends, didn't even look at the time axis, and went "this proves our point").

It's Q3 of 2023, if you're going to argue a decline, why are you not looking at the most important three quarters of data?


No it doesn't - note the axis marks are every two years.


Ah, true.

That said, some of these are going back up. Java and C#, for instance, are clearly trending up again, as is Swift. Python's down, but by 10%ish, not 40%. And JS is down a lot but that's the one language you'd expect ChatGPT to actually be the best at "helping" with, given that it's the web's own programming language, so it being trained disproportionately more on JS than anything else would be hardly surprising.

And then there are languages for which ChatGPT might as well not exist: Lisp, Go, Rust, etc. Heck, if we look at Rust, for instance, ChatGPT did literally nothing. Its rise has been, and continues to be, meteoric. And if we look at Arduino questions, you might even conclude that ChatGPT was a factor in driving people to Stack Overflow, not away from it.

Data science is hard, and certainly much harder than most folks (especially those with a programming background) think it is.


This is a good point. With the infinite number of uses outside of programming, you wouldn’t expect to see SO interest correlate with GPT. And if it’s not negative that’s a win.

SO has accessibility problems for beginners and a meta game for more advanced users. But there is no evidence of a death spiral here.

If anything… GPT could help SO by relieving the massive number of basic questions that GPT actually can answer.


The simple questions are certainly the ones that ChatGPT is least likely to reconstruct wrong, so especially for super popular languages with huge bodies of work on the internet, you'd expect ChatGPT to act as a decent pre-filter, finally ensuring folks don't end up asking "how do I sort this list based on properties inside my objects" for the millionth time =)


I really don't buy it. There is considerably more interest in Google Trends for ChatGPT compared to Stack Overflow, but I don't see one replacing the other. At least for me.

I do not believe the Google Trends graphs, like "the scariest graph" in this post, accurately reflect a decline of SO.

There is a place for AI tools and StackOverflow is not the friendliest of places but the quality disparity between the answers you get when comparing the two is very clear to me.

> This year, overall, we’re seeing an average of ~5% less traffic compared to 2022. Stack Overflow remains a trusted resource for millions of developers and technologists. [...] Conversely, in April of this year, we saw an above average traffic decrease (~14%), which we can likely attribute to developers trying GPT-4 after it was released in March.

https://stackoverflow.blog/2023/08/08/insights-into-stack-ov...


The Google Trends graph stuck out to me. You're comparing something that didn't exist before December and had broad appeal, to something that people arrive at generally by searching a question. They aren't at all comparable. (I imagine you'd see the same graph if you compared ChatGPT with Facebook. Doesn't mean ChatGPT is replacing social media.)


Agreed. ChatGPT might overtake StackOverflow, but that first graph does not suggest it, because StackOverflow itself remains constant, it doesn't decline in that graph.


I should've been a great fit for stack exchange. I love technical communities and have been writing code for several decades. But they did not have a particularly welcoming experience for new contributors (perhaps by design?) so I wasn't ever really able to get the flywheel started and never ended up contributing much.

I wonder if there's a systems lesson to be learned here around communities that do a good job welcoming new people and creating a magical onboarding experience staying vibrant for longer.


I don't think that the bad newcomer experience is by design. But I do think that it could be a lot better with a redesign.

Here's how I've seen it played out before:

---

johnnydoe2002 posts a question titled "How do I get my code to work?" with two sentences explaining their problem and an attached screenshot of their code. Within two minutes it gets downvoted thrice, flagged seven times as vague, and modman♦ closes the question, but not before posting the comment reply "Welcome to SO! Thank you for your interest, blah blah, your question has been closed because it needs clarification, blah blah, [link to new user portal]"

OK maybe that was a bit hyperbolic and most new users aren't that bad, so let's say johnnydoe2002 is undeterred by this and actually formats and titles their question in a reasonable way. Within four minutes it has one short answer from repseeker and a long-form answer that is more of a code review from overeagerverbosity. The question gets no upvotes despite having two answers. Within seven minutes poweruser98 closes the question as duplicate because it received enough flags.

If I were johnnydoe2002, I would have a mix of feelings. On the one hand, someone (two people, in fact!) answered my question; on the other hand, it was summarily closed and nobody found it interesting. I can clearly tell that I am not a part of the "in-group."

But I can also commiserate with modman♦ and poweruser98. johnny's question only adds noise and they clearly didn't read the contribution guidelines. The mods and power users are willing to be nice, but they don't have the time to hold anyone's hand: in the past ten minutes they've had to deal with all sorts of questions from larryloe2001, minnymoe2003, and randyroe2000. I wouldn't be surprised if for every useful question on SO there are two that have to be closed.

---

I honestly think that changes in phrasing and formatting could help the new user experience a lot.

Like, if instead of closing questions as duplicates, the mod/user in question would post an answer that says "I think this other thread answers your question", and the page is invisibly redirected to the other thread everywhere else. Essentially "shadow closing", so that it appears to the new user as if they just received immediate feedback in the form of a (helpful) answer, while not being told they are bad because they dared post a duplicate.

Or instead of closing a question as "needs clarification," sending it back to the edit modal with some adjoining commentary or links that explains the clarification it needs.

I think that for a community with standards these exacting, most users will not be able to just pop in - which is difficult because we usually refer to SO in passing. But I also think that those exacting standards can be upheld without stomping over new users with downvotes and close votes.


Some of my most recent ChatGPT interactions:

    34.223.203.0/28 what is the end ip of this range?

    Write python code to strip out characters that are not alphanumeric or spaces

    How do make FastAPI not return nulls in the response model when values don't exist?

    Rewrite this javascript to python <snippet of javascript code>

Braindead stuff that I would normally look up on Stack Overflow, but ChatGPT gives me the actual answer instead of making me sift through comments. Also, the last one is not something that's suited to Stack Overflow. I feel I may be getting dumber, but programming is also not so interesting to me that I would care to remember the details of any of these things.
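For what it's worth, the first two prompts above have short stdlib answers, sketched here for illustration (the helper name `keep_alnum_and_spaces` is mine):

```python
import ipaddress
import re

# Prompt 1: the end IP of the 34.223.203.0/28 range. A /28 spans 16
# addresses, so the last one is .15; ipaddress computes it directly.
net = ipaddress.ip_network("34.223.203.0/28")
end_ip = str(net.broadcast_address)  # "34.223.203.15"

# Prompt 2: strip out characters that are not alphanumeric or spaces,
# via a negated character class.
def keep_alnum_and_spaces(text):
    return re.sub(r"[^A-Za-z0-9 ]", "", text)
```

Which somewhat supports the point: these are lookup-style questions where the "answer" is a two-line recipe, not a discussion.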


The benefit of “sifting through answers” on StackOverflow is that you learn about different perspectives and about possible caveats. You tend to get a more complete picture. At least for me, that makes human-discussed content much more valuable.

It’s a bit like reading HN comment threads vs. asking an LLM for its take on a submission.


In theory this is right... But in practice I don't find that SO actually does a particularly good job of this. But nothing really does, unfortunately. When I want this kind of nuance, I end up digging through threads in all kinds of different places (including but not limited to SO) for ages.


I agree. Too often on Stack Overflow, incorrect answers are upvoted while correct answers are absent or buried.

With ChatGPT, when it provides incorrect answers, I've had good luck explaining the error. ChatGPT usually revises its answers into useful output.


Yes, it's the back and forth that I highly value, and that is entirely absent (indeed, impossible) on SO.

I am frequently saying things like "no, that's not right, ..." or "no, that's not what I meant, ..." or "could you keep this part, which seems good, but replace the rest with a different approach like ...".

Think how rude it would be to micro-manage a volunteer answer on SO!


All I learn whenever I follow a link there is that the powers that be have decided that ‘this question is off-topic for Stack-Overflow’ and it has already been answered in a location that is now unavailable.

The site is a big douchey middle finger. A dismissive know it all that actually doesn’t know all that much.


Wouldn't you normally choose the answer marked as accepted on Stack Overflow, or the one with the most points?

ChatGPT doesn't offer that kind of user feedback, so you have no measure of whether the answer is valid if you don't know the topic well enough.


If I can find a question that matches mine exactly, sure. Otherwise due to the imprecision of searching on google, I'm often left to filter through results, and then in each result that seems relevant filtering through each answer that seems relevant, sometimes never getting a perfect match.

ChatGPT does offer user feedback of a kind though -- e.g. I can tell it "no, not like that, more like x" and it will take that into consideration.


This is something that does concern me for people who are new to software engineering and haven't developed any "taste". But for me, I can sniff out bad or just mediocre answers from the bot and ask it for better ones, or go seek them out elsewhere (like on SO).


I use chatgpt (and copilot) for this same kind of stuff.

I don't at all think it's making me dumber. I'm spending more of my time learning and doing the far more interesting and creative domain-specific parts of my work. That is, it's making me smarter by freeing up time to focus on things that matter a lot more. Just like any other tool I've learned to use to automate away drudgery in the past.


The eternal paradox of tools is that they make those who don't need the tool more efficient and tend to inhibit learning in those who do need them.

Take a pocket calculator for example. If you already know arithmetic, calculators make you more efficient, but they are a terrible way to teach arithmetic.


Yes I totally agree, and I think this is the hard part. I absolutely worry about this for people just entering software engineering. I rely on "taste" a huge amount to figure out whether I'm being told something useful or worthless, and I developed that intuition through years of experience not having an AI to ask.


[flagged]


For many people (maybe even most people) who write code, programming is just a tool. I know a lot of people who program, and it's the least enjoyable, least interesting part of their job. We need to be mindful, and quite careful, not to adopt an attitude of, "if you don't love it, and it's not your passion, stay out and let the "real programmers" do the work."

I absolutely despise painting. But I like doing home renovations myself.


I concede that this is an entirely fair point, and one that I did not consider (mostly because we're on "Hacker News").


Ironically, I think this is more true of "Hacker News" than on other more programming-focused forums. I would describe the ethos here as "innovative problem-solving using computer technology". Whether that involves programming is totally beside the point. It has largely been the case since this site launched that most of the solutions do involve programming, but there's no reason they must. I think lots of non-programmer entrepreneur and product manager types hang out here, because of exactly this.


What use of computer technology can you imagine that does not involve programming?


Cobbling together SaaS's, "no-code", building stuff using "prompt engineering" with AIs instead of writing code, innovative business models using pre-existing software, etc. etc.


Fair points, mostly.


Ah, yes — real programmers only work at the assembly level. Forget all this "rising levels of abstraction" nonsense.


A programmer's job is to solve problems, not waste time on doing 'mechanical' tasks, especially if a tool can handle them.


People do stuff for money - they don't always enjoy it. This is a story as old as time. It reminds me of the Bob Dylan song "Hurricane":

> Rubin could take a man out with just one punch

> But he never did like to talk about it all that much

> "It's my work" he'd say, "and I do it for pay

> And when it's over I'd just as soon go on my way"


"please stop programming" - wrong, wrong, wrong.

It's a business, and some people rely on it for an income, ONLY. His "sin," if that's what it was for you, was saying out loud what lots of people actually feel.


ML works well when it is trained only on correct data.

ML trained on all the things ever written about programming is going to get things wrong quite a lot of the time, and you'll only have the context of its reply to you, for you to figure out whether the ML answer is correct. And of course, ML always wants to please you with an answer.

SO, on the other hand, has accepted answers and upvoted answers, which makes it easier to decide whether they're correct (or whether they're what you're actually looking for). And there's more nuance to it as well.


With ML you can actually give context and a very specific question, but on SO you have to search and hope to find a similar enough question that has an answer that isn't extremely outdated.


You can give it context, and it will incorporate the context into the text of its response.

Good luck figuring out if it incorporated the context in the content of the answer though.

The internet already had too many low quality, misinformed pages. With Chat GPT, it now has an infinite number of pages like that. I’m not sure that makes a qualitative difference.


> ML works well, when it is only trained on correct data.

Is that correct? I would think that its ‘interpolation’ and ‘extrapolation’ capabilities can easily make it produce incorrect statements.

As a simple example, train it on “π is a number”, “1 is a number”, “3/7 is a number”, “1.3457 is a number”, etc.

and

“1 is a rational number”, “3/7 is a rational number”, “1.3457 is a rational number”, etc.

and I give you a good chance that it will ‘think’ “π is a rational number”.


There is also regression to the mean. Human experts can sort through advice and, by reflecting on it, pick only the best. ChatGPT will give you, at best, the average opinion. It won't help you get to the top, because the top is an outlier.


I found it interesting (and kind of worrisome) that some fine-tuned Llama 2 models are trained on GPT-4-generated data. So if GPT-4 got something wrong, the error would propagate to a lot of open-source models.


Not mentioned in the article is the negativity found with asking questions on Stack Overflow, where your question is marked as duplicate if it's vaguely like something else, if it's not worded exactly correctly, etc.

A remarkable thing about asking ChatGPT questions is that you don't have to be defensive; you can just ask your question, no matter how basic, dumb, or simple it is, without justification.

Seems a bad state of things that the inhuman AI is extending more grace than the human community.


If you ask a bad question on SE (leave "bad" to be defined separately), you waste time of people who try to answer it, and of people who stumble across it hoping that it relates to them and solves their problem.

If you ask a bad question to ChatGPT, you waste no one's time but your own.

If answering SE was a job where you got paid, I would expect a lot more hand-holding (and some amount of kindness, to keep the revenue stream coming back), but instead, the people who answer questions do it more-or-less for fun. Having someone waste your time isn't fun.


In those cases it's easy enough to just move on to another question rather than shut down the discussion. It's not like anyone's being forced to answer.


But as an answerer, you will empathize more with the other answerers than the askers. You don't want the other answerers to also waste their time on this question, so you close it / downvote / mark it as duplicate.


But by closing it as a duplicate, with the duplicate linked, hundreds of future askers who arrive at that question from a search engine are saved the trouble.


The posts being discussed are cases where the question is marked as duplicate for being vaguely similar but not actually the same. Of course marking literal duplicates as duplicates is fine.


The primary (95+%) use case for Stack Overflow is being the destination for a web search. "Bad" questions make signal/noise ratio worse.


On the other hand, I have no doubt that it takes a lot of effort to ensure that stack overflow is full of high quality answers. The tendency of any system like that is toward entropy. The moderators might be forgiven for being a little too judicious in pruning similar questions and splitting focus where the benefit to the whole is to keep answers focused in one place.


If you were to look at hundreds upon hundreds of easily-researchable questions titled "plz halp i am n00b" asked by people who didn't bother to apply the slightest amount of elementary analytic skill, you would be just as negative.


Which is why it's better to offload this to a bot who doesn't have that kind of emotional response.

> hundreds upon hundreds of easily-researchable questions titled "plz halp i am n00b" asked by people who didn't bother to apply the slightest amount of elementary analytic skill

I'm sure ChatGPT gets not just "hundreds upon hundreds" of these each day, but thousands upon thousands of them each hour. And it remains cheery about it. Which is a much better experience for its users. It just turns out to be a better tool for this job than relying on the good will of human volunteers.


The title: Inside the Decline of Stack Exchange

The content: A random guy looked at Google trend charts and fabricated some conclusions.

The opposite of "inside".


I would really like to see what Joel Spolsky and Jeff Atwood, the original creators of Stack Overflow, have to say about the current state of affairs. Their business model was recruiting by identifying people with specific skills, and it was effective. I hired and was hired that way.

(They sold it years ago to a South African private equity venture and made the pile of money they deserved.)


I think it was not really making money the whole time they operated it, though, and the job-finding angle wasn't really making that much money; neither was the private-enterprise SO thing.


Someone should feed an LLM all the Joel on Software articles and ask it to comment on Stack Overflow.

But a serious question: is Joel still in charge of SO? Has he retired?


They sold it and in retrospect they really nailed the timing of the sale.


Do y'all really use ChatGPT for coding questions? Every time I have tried, the answer I get is entirely wrong, and I end up going back to either first party documentation, or Stack Exchange.


What do you mean "entirely wrong"?

... that it doesn't work exactly "as printed" when copy-pasted into an IDE? Or that it fundamentally misses the intent of the question? Or that it got some important key details right and messed-up other things?

Usually, in my experience, it's the last one. Often all you need is a clue to get you thinking in the right direction. Interactively asking ChatGPT questions and critically evaluating its answers has been far more helpful than I could have ever imagined.

Does it ALWAYS help? No.

Is it an exciting and emergent alternative to what we've had before? Hell yes.

The first party docs are, of course, the gold standard. But they're often far more detailed and canonical than folks are ready to absorb.

SO can be helpful IF you can find your question already there and the answers are still relevant, or if you manage to ask your question in exactly the right way and the duplicate-police haven't shut it down (regardless of whether it's actually a dupe). Or if a kind soul has provided a clue for you in a comment as your question was downvoted into oblivion.

Another option, IMHO, is github issues/discussion. The library developers are often there and generous about helping people out (without the perverse and infantile incentives of gamification that afflicts SO). But this option should really only be reserved for very carefully asked questions. In a way it has a higher bar than SO (but without the negative reinforcement of downvotes).


Of course I do. I use it (GPT4) every day and it's a million times better than Stack Overflow ever was. I find it extremely strange that this is not the standard now. And no, the answers are correct 99.999% of the time, so I don't know what the hell you're doing.

If it counts, I have a SO account with >15k "reputation."


It really depends on what you are doing, in my experience. I often ask for code when I know there will be a decent amount of difficult to remember boilerplate. It will do a decent enough job to save some typing and get me a framework to work within.

It is pretty good at writing little DTOs based upon a sample JSON too.

But harder questions for poorly documented libraries will exhibit a lot of the same challenges as Googling.
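For what it's worth, the JSON-to-DTO task mentioned above is small enough to sketch by hand. The payload and class here are invented examples of the kind of thing you'd paste into a prompt, not anything from the comment:

```python
import json
from dataclasses import dataclass

# Hypothetical sample payload, the kind you might paste into a prompt.
SAMPLE = '{"id": 42, "name": "widget", "price": 9.99, "in_stock": true}'

@dataclass
class Product:
    """DTO matching the sample JSON's fields and inferred types."""
    id: int
    name: str
    price: float
    in_stock: bool

    @classmethod
    def from_json(cls, raw: str) -> "Product":
        return cls(**json.loads(raw))

print(Product.from_json(SAMPLE))
# Product(id=42, name='widget', price=9.99, in_stock=True)
```

Boilerplate like this is exactly the sort of typing-saver the comment describes.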


Same experience: if you're trying to do something unusual (an obscure library, or a task that has minimal documentation and few usage examples in the wild), the results are usually bad, and it will very frequently try to call functions which don't exist.


My main use case for GPT-4 is simple/concise questions for common tooling: namely Docker, Git, Vim (chatting with a robot about Vim during long builds is kinda embarrassing now that I think about it), Bash scripting, and common Linux utils / config.

I am able to give a lot of information about what I need, and I'm experienced enough to know precisely what to ask for. There's also a huge amount of training data about these tools. I'm also experienced enough to be able to evaluate the output for correctness. I would never ask anything like "write a program in Python for <blank>", at this point.


Sure. Currently using it to help me figure out the data format liuliu[0] is using for their "Draw Things" SQL database, because I want to bulk export all my previous images and my entire knowledge of SQL can be written on half a postcard.

[0] Who is on HN; hello, nice app :) https://news.ycombinator.com/user?id=liuliu


I'm a 56 year-old product marketer who used ChatGPT (plus Midjourney) to build an entire SaaS application and landing page: creatormail.io

I've had some seasoned developers review my (ChatGPT's) Python and got positive feedback with some constructive suggestions.


The avatar people on the front page have 3, 4 and 6 fingers

so it checks out, it’s AI generated


This looks useful! Signing up.


For complex questions GPT4 works much better than 3.5.


Yes, but I use ChatGPT4 which is much better than ChatGPT 3.5. In my experience it's correct often enough to ask it first.


I use it as a starting point. I don't think I've ever actually used its answer directly, but I use it most days now to give me the skeleton of something I can then get working the way I want it from there. I've always been better at editing than filling in a blank page, so this has turned out to be a good fit for me.


There are some strategies to get it to work better. It's great if you tell it to create a function that accepts (input) and produces (output). For example, I recently had it come up with some date formatters that saved me a lot of time, e.g., transform a date from MM-YYYY to “X months ago”, stuff like that.


>some date formatters that saved me a lot of time, eg, transform a date from MM-YYYY to “X months ago”

Obvs your use case may be wildly different, but I detest MS Office for emails: 'received three days ago... two days ago...'. Great, now I have to remember what today is and then do some mathematical gymnastics (I don't work M-F).


The only coding adjacent thing I use it for is for building Excel/Google Sheets formulas, and it works really well for that.


same here


The opening Google trends graph doesn't match the preceding claim about a material decline in usage at all... it says a lot more about the extraordinary rise of chatgpt (especially since most people aren't using chatgpt for programming questions). It feels like a "Facebook is dead" type of argument: popular but wrong.


This article does not really have any inside knowledge, nor much proof about decline. It is extrapolating and hypothesizing from graphs.


I mean fine, write an article that overweights recent events to weave a tale of the decline of Stack Exchange...but don't leave out the moderator strike, the data dump fiasco, or the CEO that simply doesn't get it. Jon Ericson's writing on the issue is much more informative. https://jlericson.com/


This is a bit pedantic, but there is a difference between:

1. Stack Exchange, a network of Q&A sites built around a single model

2. Stack Overflow, a specific Q&A site for questions about programming, on which that network was founded

Programming- and computing-related SE sites are the most popular, but they are not the only ones. ChatGPT cannot read a circuit schematic, should not be used for legal questions, probably won't recognize an obscure science fiction book based on one plot element you hazily remember, and in general is not a reliable expert in anything.

I say this is pedantic because if the programming sites "decline" they'll probably take the rest of the network with them, but as a long time denizen of EE.SE I would like it if people at least recognized that SE is used for more interesting things than "how do I do [basic programming task X]?".


If we believed ChatGPT was the cause of an SO decline, then I'd expect it to also cause a decline in Google searches for programming queries. Here's the Google Trends data for two super-common error codes and one more conceptual programming search query:

https://trends.google.com/trends/explore?date=today%205-y&q=...

Maybe a little something going on with typeerror? Or just noise. But it definitely doesn't look to me like evidence supporting the article's claim.


I've found contributing questions or answers to SE/SO to be a lot like hanging out in IRC channels dedicated to C -- a lot of blowhards tend to hang around the watering hole, berating people for lack of knowledge for one reason or another, expressing it in a variety of in-group-approved ways.

It becomes tiresome to wade through the self-important moderation to get to actual content. That said, spam and unmoderated channels are clearly worse too. I'm not sure how to get to something in the middle of that Laffer curve.


It follows the arc of most things including Quora and somewhat generally other popular ventures like Snap, Facebook, MySpace, AOL/AIM, and CompuServe. The initial Cambrian explosion slowly changes from cool obscurity to hype as time goes on and the shine fades.

There's also a supply and demand problem of double-ended Q&A sites: there's rarely direct incentives to give away subject matter expertise. There are indirect incentives such as reputation, but behind nicknames, this doesn't necessarily translate into industry reputation. Furthermore, there are potential human language and subject matter terminology difficulties of throwing geographically-, and industry-diverse groups of users together. The platform should attempt to ameliorate language differences and steer user-contributed content away from devolving into ego battles, bikeshedding, nitpicking, or handwaving.


One thing that's interesting to me about the consistent complaints about moderators is that SO moderators are elected and only really held accountable by other moderators.

I almost was a moderator for one of the SO sister sites, but I thought it was just too much thankless work, and I can understand why it yields petulant people who would rather see less content that conforms to their expectations than to create and make space for new ways of doing things.

The content itself is an attractant to people like that, once you have an established corpus to corral you perceive ownership and want to "protect" it.

I genuinely don't know if there's any way to fix what I see as inherent problems in online communities, that generate "open source" knowledge at least.


I wouldn't exaggerate the death of Stack Exchange; it seems like they have a nice moat and are well positioned to change the future licensing terms of their answers and forbid using them for training AI.

They have already banned machine-generated content. When the question of training legality is settled in the courts - predictably, not to the advantage of the AI companies - Stack Exchange will be left with the human network generating the content, while AI companies will need to license any new answers if they don't want their models to remain stale and out of date.


That was really interesting. I do worry about the effect these sorts of aggregating forces have on the places where content is created. We've seen it with news, etc. already.

Having said that with Stack Overflow the real reason I hit ChatGPT first in many cases is FRICTION. The UI, the users, the answers that are technically correct but "weird". It's a lot of filtering I have to do as a user that I don't do with ChatGPT or other AI tools.


You can generalize it to every market where the proxy owns the relationship with the client and captures the generated value. UI or whatever, if ChatGPT kills Stack Overflow, we will be in a situation where the shared knowledge is frozen at its 2021 level, and everything later will decline in quality due to lack of data.

I'd say that it is in OpenAI's interest to pay for SO and Reddit so that they don't kill the goose that lays the golden eggs.


You’re assuming the bulk of ChatGPT’s coding knowledge comes from stack overflow. I suspect training directly on code bases and documentation is more important.


No, ChatGPT isn’t AI in that sense. It can fire back documentation at you, but a lot of what it does is just look at the word it’s spit out, and pick the next word or snippet that looks right. Same thing with code. It just spits out what it’s trained to interpolate the next snippet to be.

Anything involving genuine reasoning needs to be done by a human. It’s a great documentation parser. A step up in search engines. But a thinking machine it is not (yet).


If this were true, ChatGPT would be useless for any topic that doesn't have a vast Q&A dataset. The purpose of instruct fine-tuning is to elicit output from the model in a question-answer format. The bulk of the model's knowledge comes from the unstructured dataset.

I'm also not sure how "reasoning" at inference time is relevant here. You seem to be conflating unsupervised offline training with inference. The ironic thing is that the behavior you describe actually works, you can try it right now. Add code documentation to your context window and ChatGPT will happily write code using that documentation as reference.


Not every product has good documentation, nor is the code self-explanatory. SO fills in the gap between a requirement and its translation into code. Only OpenAI can say for sure how important SO is for them, but I doubt that it is insignificant.


> the answers that are technically correct but "weird"

In ChatGPT's instance they can be downright wrong and I have to verify if what it's saying is indeed correct on many occasions.


I don't find this to be the case for answers I would otherwise be getting from Stack Overflow. I don't think I've experienced ChatGPT being incorrect about these kinds of straightforward programming questions.


Perhaps you and I have different levels of trust in ChatGPT, but I still have to double-check its responses on topics that are unfamiliar to me.


> It's a lot of filtering I have to do as a user

For me, 95% of the time it is enough to read the first couple lines of the question, then hit the "End" key and read the least-upvoted answer.


I guess I'm just used to it now, but I kind of like the UI. It's easy to skim the page to find the part I'm looking for.


"the users..."weird""

I would have chosen another word for them...


Author seems to be doing a lot of reasoning from a hockey-stick graph that's less than a year old. I'd give it time to see if that line keeps trending upward or the line crashes (as hype gives way to "Wait this tool is unreliable", which it may).


I foresee an eventual decline for any social-media-driven board, including tech ones. That is the way of technology: something newer and more relevant to the masses arises.


To every rise, there's a fall. I wonder which platform is going to accommodate the know-it-alls of Stack Exchange. Reddit, maybe.


To me it looks like it's too early to draw conclusions from the programming question volume graph. But if it is indeed true that LLMs are leaving SO in the dust, that's crazy.

I'm in my 20s, and this is the first time I feel like my work routine and ways of interacting with tech are getting antiquated. God damn.


In my experience, interacting with Stack Overflow and other software-related sister communities has always left a bitter taste in my mouth. The elephant in the room: I do use SO every day with some success, but it's mostly been replaced by ChatGPT.

The ridiculous (IMHO) moderation under the guise of “we have only one purpose” often didn’t even align with that goal.

The writing was on the wall.

So many questions that could have served answers are locked, deleted, and unfortunately, never asked due to the unpleasantness of dealing with the moderation.

I speak for myself.


Yeah their moderation is awful. So many times, I've found my exact question with a "closed, duplicate of x" note, where "x" is not my question at all and is entirely unhelpful to me.


This is exactly why I never click on it when I'm looking for something. It's never a solution that a search finds; it's always a question marked as a duplicate of something completely unrelated.


ChatGPT at least seems like it is trying to help me.

The meta games going on in SO are a whole lot more of a mess, and ChatGPT is more than happy to hear my question a 3rd time and spit out the same answer as before with a few things changed. SO, not so much.


I use chatgpt as a rubber duck who I’m suspicious of. Way too often it just comes up with fictional features


I really don't get this dependence problem. People want information to be 100% accurate?

This is the internet -- lies and false claims are the standard. It's up to individuals to corroborate that the info they're looking at is reliable. Google provides a list of relevant results to a keyword search -- every one of those results can be bogus and no one bats an eye. ChatGPT provides text output based on a text input and the text can be bogus. Why should ChatGPT be held to a higher standard than Google?


It's entirely possible to hit a search result where a real human had a code hallucination. And, on topic: on Stack Overflow that would have been deleted, downvoted, or corrected.

GPT will tell you it’s right… well, it’ll apologize, offer another solution that could also be wrong and now you are in a loop.


Only a computer can get stuck in a loop, or a person who is insane. It's not like if you ask a person to divide by zero they will fall into a coma or sit there computing.

I 100% believe that ChatGPT can output absolute garbage, but for the same reason you don't cite Wikipedia/Google, you won't be citing ChatGPT. Just like Wikipedia or Google, it's just an excellent starting point to rapidly get a working draft going.


I get wrong answers too, but they're pretty easy to figure out fast and if I rephrase I get what I need.

It's not like I'm not validating SO answers; there aren't any extra steps compared to my SO experience. Also, SO likes to give oddly, technically correct but hard-to-work-with code, sometimes to the point of being unusable, and I end up taking someone else's example anyway. ChatGPT is really good at making super generic answers and quick iterations / alterations.

ChatGPT's power is often in the "conversation" where I add or take things away or change things "I need to break out X,Y,Z before I render it because..." and blamo I get new code. SO has none of that.


I’m going to steal that.

I've been trying to explain to people that GPT is good for subjective things, not objective things. But that didn't really jibe with the programming part, which works OK for certain tasks.

Then I realized it does, and programming is just subjective a lot of the time. You need a right answer, but how you get there is open.

Using it as a spring board is good advice.


Aye. It's a scaffold or first draft at best.


I've found it refreshing to be able to ask a performance-related question and not get a trite reply about premature optimization. It's frustrating reading a well-researched, obviously informed question about a specific thing and the top reply is "Why do you need this? Did you profile it?". Nothing new is learned and the cynical part of me thinks it's just a race to get free Stack Overflow points.


Similarly with questions about doing X with Y where the top answer just pushes some library, a different language or outright changes X to something else (without answering the original question).

I feel like the people writing those answers should be well aware that often you can't just change the problem or add in a library to solve one specific problem on a whim. It's kind of demeaning to push those kinds of answers rather than maybe questioning the task while still providing a solution.


The SO community culture rewards precise, concise and direct answers to simple and popular questions like "how do you subtract 2 dates in Python". In the first years, those were the low-hanging fruit and a source of big reputation points for some people. Unpopular questions regarding an obscure bug in the user's code or perhaps a misunderstanding of the OP aren't seen as "pretty" - they require answerers to understand the context of the particular problem and will not gain them many reputation points, even if they provide a useful answer to the OP.


I'm actually quite shocked going back and looking at old StackOverflow posts. At the time, I was supportive of such brutality, but it is clearly against modern norms regarding what is appropriate internet socialisation.

My take is that SO was born in a time when ethics among the primarily male and tech-fanatical audience of the internet were less developed. At the time, what we now would call toxicity was expected: we thought it was shameful to not ask a question properly, to waste others' time, to not have tried significantly before asking. Now, we have a different view: first, the asker may not be a technical person and may not be someone you would feel comfortable berating; second, even among hardcore tech people there is a much stronger emphasis on appropriate communication and mental health. I remember stories of my stepfather giving his subordinates "bollockings" (a British term for aggressively shouting at and disciplining someone who has made a mistake). Well, I work for him now, and he does not do this anymore. Everyone is quite scared of upsetting everyone else, everyone is more aware of the suffering of others, etc.

Somehow, SO has partly retained these aspects of the old internet, when almost all other spaces have shed them.

EDIT: You can see the user "rejectfinite" has replied to you with a flagged comment, which demonstrates how people still try to act out this kind of toxicity, but note the swiftness of his downvotes and his almost immediate flagging. We just don't tolerate that anymore. But we used to.


> You can see the user “rejectfinite” has replied to you with a flagged comment which demonstrates how people still try to act out this kind of toxicity

A point made well. I saw that post.

I myself have been generously downvoted, which is understandable given the audience here on HN.


> we have only one purpose

The problem to me always seemed like the goals of the company didn’t line up with the goals of many (most?) of its users.


I agree, with the caveat that "most" must be referring to the larger number of visitors posting questions, not the larger number of answers posted by a smaller number of contributors, nor the even larger number of readers who find an existing question and answer, read the answer, and leave with minimal interaction.

Most question-askers - who actually post a question - want to have a response customized to their exact problem. ChatGPT is infinitely patient, if a million users give it the same prompt a hundred times a day it will provide each of them an answer. They want to interact with it like a Discord chatroom - post a question, get answer.

Most answer-posters want to avoid posting the same response over, and over, and over. They want other users to interact with it like an encyclopedia of FAQs - or rather, questions frequently pondered but only asked once.

I'd argue that most answer-seekers probably want a single, high-ranking canonical question and answer as well; they don't want low-quality copy-pasted single-line responses in a chatroom. Maybe they upvote the pre-existing question, maybe they're just anonymous viewers. Stack Overflow the organization, like this group, wants the best-written, researched, articulated response to any particular question to rank highly on Google.

What's for sure is that no one looking for an answer ever wants to see "Closed as duplicate". The question is how you keep that from happening, unfortunately it's probably untenable to do anything but have a small cadre of community moderators (with all the self-selection trouble that causes) closing questions subjectively marked as duplicates and intensely frustrating the people who posted those questions. Nor does anyone want to (nor are they able to) scroll up through 500GB of chat logs to find a prior Q&A by someone else.


Seconded.


IMHO their moderation practices are precisely why they are seeing this decline. Like you said, SO is a great resource to read, but I've only asked a question a handful of times over the years, and every time the question was locked, deleted, etc.

On the one hand, I get their argument that they want only the highest-quality answers on there, but with that heavy-handed moderation they now have another problem, which is that no one outside of a handful of individuals actually wants to post or contribute.

I think if SO didn't have such heavy-handed moderation practices they would have lower-quality answers, but they would also not be experiencing such a gradual decline.


An interesting thing to me about SO is how their focus on being Dev Centric ultimately led to some problematic internal culture issues.

This came up in a podcast episode I did and then turned up on twitter.

    At one point in 2013/2014, an exec asked me if any of the developers had ever treated me badly because I was a woman. My honest reply at the time was “not because I’m a woman, but they definitely have treated me badly because I’m not an engineer”.
https://twitter.com/lauradobrzynski/status/16646162421104271...

    Ben: This was a meetup at, I think it was Denver 2014. We were standing outside an arcade. The evening outing was a lot of pinball fun and air hockey. And one of our community managers was standing outside, and we got to talking, and he said to me, “You know, I was actually kind of surprised. You’re a really nice guy.”
https://corecursive.com/stack-overflow/

It's interesting to me as a warning.

I think treating developers really well is important. But SO seems like an interesting case where they allowed the entitled behaviour of some devs to get out of control, and it led to problems.


Entitled "special" people are a problem everywhere.

Devs who are labeled as high performers get greenfield projects that can be pumped out quickly / look good... someone else does the maintenance, and they get all the flak for an obvious bug that wasn't fixed, even if they didn't introduce it.

The random executive who is thought to be highly successful who jumps from department to department spreading his fingerprint long enough to make an impact and leaves quickly enough to avoid any responsibility for the results.

I fear it is a human thing and people are really good at sensing when they can get away with things.


As time goes on, I more and more have the feeling that maintenance is 95% of the value in any software system.

A well-designed system at launch is not sufficient to provide sustained value, but competent ongoing maintenance is. Phrased another way, you can sustain value by maintaining a badly designed system but not by not maintaining a well-designed system.

Ideally systems are well designed and well maintained but we don’t always have that luxury.

Circling back to your point, it’s ironic that maintenance is the lowest-value activity in terms of big tech company reward systems.


Yesterday I saw a post on Lambda the Ultimate that, while a bit long-winded, does a great job addressing this issue. The article picks up in the section headed The Labor of Care, and, for me, the money quote is: ”In order to care for technological infrastructure, we need maintenance engineers, not just systems designers—and that means paying for people, not just for products.”

It's well worth the read for the second half, and I made a post to HN because I'd love to hear more discussion on the issue. (https://news.ycombinator.com/item?id=37116593).


Sometimes they can be well architected to a fault!

I worked on a periphery of a big mainframe system that was designed and implemented before I was born. I think release 1 was in 1975. It still runs really well and is hard to displace because operationally, even though the people who understood it fully are retired or dead, they left behind strong materials so a moderately intelligent person can run everything.

The “other” systems, old .Net, Java, etc that support the interfaces to the old app are hot garbage, and tend to steal the budget dollars that are needed to migrate off the 1970s stuff. My team had to fix one of these… argh.


When I worked tech support I got to know the "maintenance devs" pretty well. It was amazing how I'd end up on calls with the non-maintenance teams and they would insist X, Y, Z couldn't happen and no man ... maintenance devs knew it both could and was happening and would explain how and why.

God bless those other non maintenance devs, they were good, but man they could take an RFC and turn it on its head sometimes.


> But SO seems like an interesting case where they allow entitled behaviour of some devs to get out of control and it led to problems.

This is hard to quantify, and to make objective.

E.g. in a previous company HR once sent an email asking for feedback from employees, and one of my engineers gave some - polite, thoughtful, and clear. This caused quite an incident as the HR staff in question were not used to getting anything but encouragement in feedback, and I advised him to filter feedback through me, his manager.

Being "made to feel unwelcome" is both very real when you feel it, but looking slightly deeper it's not being made to feel unwelcome; it's feeling unwelcome. A mismatch of what's welcoming doesn't mean the person feeling unwelcome is correct; there may be no "correct" in that scenario. But it will be reported as though it is correct.


How would you have filtered feedback that is polite, thoughtful, and clear?


Good question :-) You start by remembering that a lot of people receive constant praise: they bought a new Thing™? "Well done you!" say their friends. Or "You look sensational today!" Or whatever it is. Hyper-positive praise is very common, and any hint of anything else may well be jarring.

Any improvement suggestions might need to be thrown in the bin (even if they're valid and simple), or filtered through their manager, or raised in a guided conversation so they think they came up with the idea themselves.

It just depends on the person and your relationship with them, really.


I guess what I am getting at is that, in this specific instance, I'm not sure there is anything to do. You either send it or not. It was polite, it was clear, and so on. Tonally, it was about as good as you can expect. So what is there to filter other than send/not-send?


I didn't mean I would literally filter the email into a shorter email. I just wouldn't send an email.


So, in this case, HR isn't getting any feedback because they've heard nothing but "Yay, you're the best!" for so long that any suggestion they could do something, anything, differently reads, comparatively, as scathing criticism?

This just raises so many more questions, like why bother your employee to consider the question and formulate anything if the feedback never reaches HR? I can imagine being someone who actually considered the question, took time to write and re-write something, did another pass for clarity, and checked that nothing would sound accusatory. Writing the email, sending it off, and hoping that the time I took away from my actual job would result in even the chance of a change. But it doesn't; it goes into the round file, apparently.

I don't know, I haven't had as many jobs as some people here, but I have been through all of those regime changes and heard "we want your feedback" over and over again, yet it never seems to go anywhere. I spent months on a "strategic vision" committee where we were trying to formulate a mission statement by slurping up data, questions, and forms from employees, going through various iterations of it, but what came back had nothing to do with the content we had received.

Is this always just an exercise in making HR feel good about themselves, then, without hope of any improvement?


> So, in this case, HR isn't getting any feedback

No, I said I wouldn't send it in an email. Somewhat ironically, I wrote it clearly, but you took it differently and built a large rant on that incorrect foundation.

Next time I'll filter my comments on HN through my skip manager (-:


Nah, that's neither large for me nor a rant. It's a few paragraphs on my ongoing disappointment with the gap between requests for honest communication and the lack of it in practice. It's the "Does this dress make me look fat?" feedback request from HR.

I sometimes wonder what innovations would be required to make HR completely outsourceable.


You can outsource HR. I don't think it would make much difference if your HR person is a permanent employee, a contractor, or a consultant working for another company. Certainly it's not obvious that a change in contractual relationship would address anything you've been complaining about.


As a non-engineer user, I know I have regularly felt unwelcome in various ways.


corecursive.com is your podcast?


Yes! Please excuse the self-plug. I find these culture issues interesting and didn't mean to hijack the thread.


[flagged]


I don't think there's a polite way to put this, but to be honest this comment is an excellent example of what the parent is talking about.

All companies have needs that engineers tend to be very poor at meeting, and if the engineers are dominant and see the other members of the organization as parasites, that's a serious dysfunction, not by any means "normal".


Is there anything really special happening here? For a non-technical person, it can be hard to tell when someone technical is providing value and when they are, to use the term from this thread, being a parasite. A technical person has the same problem with non-technical people. I've seen different non-technical business areas have this issue with other non-technical areas. I've read reports where entire departments were cut and the business did better after losing them (they were, apparently, mostly parasites), along with cases where such a decision killed a company.

I'm not sure there is anything really special about engineers in this situation. It seems like one of many varieties of the same problem: telling who is producing value and who is faking the appearance of producing value.


Ultimately, the people making those decisions need to be technical enough to know. Having those decisions made purely by non-technical people is playing Russian roulette. This is in many ways easier in more technical domains, because there it's at least possible to semi-reliably see through the bullshit.


Yeah, and I think an issue is how engineers think vs. other people. What may be seen as rude or negative is just an engineer optimizing for efficiency or technical correctness. Other people then tend to take things personally and attach emotion to situations where they are being corrected.

I've seen this play out repeatedly in my personal life. I've found I have to simply let people do the wrong thing, or fail a little bit, and gently guide them toward solutions. Otherwise there will be emotional pushback that entrenches their position.


"Optimising for efficiency" on a personal/personnel/social level is one of the most common forms of rudeness in every day life. It generally _is_ rudeness.


Only when that appreciation for optimization is not shared by all participants. It is equally true that someone who is not interested in social minutiae is having their time wasted by having them inflicted on them, and this, too, is rude.


Politeness, is of course, when you hate the other person so much that you waste their time with all the various social rituals.


> ...do the wrong things or fail a little bit and gently guide solutions

That's "getting someone to change their mind 101", not an engineer-exclusive insight. I think a lot of "oh, engineers are so rational, that's why people think we're rude" is just post-hoc rationalization for poor social skills.


Based on what is it pretty clear that everything went downhill when engineers stopped calling the shots?


I don't know if it is normal or not, but it certainly shouldn't be.


I work for a 'hard' engineering company, hard in the sense that it makes physical things and does physical things. They don't give a shit about software or any of the engineering around it. And I consider that a pretty good deal compared with working at a finance place where I did support dev work on their data warehouse. There they just saw me as an expensive cost center (in retrospect, I was probably not paid enough).

Anyway. The shoe's on the other foot at SE.

What surprises me about SE is that it has all these smart people and a big community, yet it has let Google and the parasites scrape it to death.

