
> Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort.

I read this post yesterday and this specific example kept coming back to me because something about it just didn't sit right. And I finally figured it out: Glancing at the alert box (or the browser-provided "do you want to navigate away from this page" modal) and considering the text that I had entered takes... less than 5 seconds.

Sure, 5 seconds here and there adds up over the course of a day, but I really feel like this example is grasping at straws.


It’s also trivially solvable with, idk, a length check, or any number of other things which don't need 100B parameters to calculate.
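
To put a number on "a few lines": a hypothetical sketch of that heuristic in C (the function name and threshold are made up, and a real browser-side check would live in JavaScript, but the idea is the same):

    #include <ctype.h>
    #include <stddef.h>

    /* Warn before discarding only if the draft has enough non-whitespace
       content to plausibly matter; otherwise let the tab close silently. */
    static int worth_warning(const char *draft, size_t min_chars)
    {
        size_t meaningful = 0;
        for (; *draft; draft++) {
            if (!isspace((unsigned char)*draft) && ++meaningful >= min_chars)
                return 1;
        }
        return 0;
    }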

This was a problem at my last job. Boss kept suggesting shoving AI into features, and I kept pointing out we could make the features better with less effort using simple heuristics in a few lines of code, and skip adding AI altogether.

So much of it nowadays is like the blockchain craze, trying to use it as a solution for every problem until it sticks.


> Boss kept suggesting shoving AI into features, and I kept pointing out we could make the features better with less effort using simple heuristics in a few lines of code

depending on what it is, it would probably also cost less money (no paying for token usage), use less electricity, be more reliable (less probabilistic, more deterministic), and be easier to maintain (just fix the bug in the code vs prompt/input spelunking) as well.

there are definitely useful applications for end user features, but a lot of this is ordered from on-high top-down and product managers need to appease them...


... And the people at the top are only asking for it because it sounds really good to investors and shareholders. "Powered by AI" sounds way fancier and harder to replace than "powered by simple string searches and other heuristics"

The problem isn't so much the five seconds, it is the muscle memory. You become accustomed to blindly hitting "Yes" every time you've accidentally typed something into the text box, and then that time when you actually put a lot of effort into something... Boom. It's gone. I have been bitten before. Something like what the parent described would be a huge improvement.

Granted, it seems the even better UX is to save what the user inputs and let them recover if they lost something important. That would also help for other things, like crashes, which have also burned me in the past. But tradeoffs, as always.


> You become accustomed to blindly hitting "Yes" every time you've accidentally typed something into the text box, and then that time when you actually put a lot of effort into something... Boom. It's gone.

Wouldn't you just hit undo? Yeah, it's a bit obnoxious that Chrome for example uses cmd-shift-T to undo in this case instead of the application-wide undo stack, but I feel like the focus for improving software resilience to user error should continue to be on increasing the power of the undo stack (like it's been for more than 30 years so far), not trying to optimize what gets put in the undo stack in the first place.


> Wouldn't you just hit undo?

Because:

1. Undo is usually treated as an application-level concern, meaning that once the application has exited there is no undo function available, at least as it is normally thought of. The 'desktop environment' integration necessary for this isn't commonly found.

2. Even if the application is still running, it only helps if the browser has implemented it. You mention Chrome has it, which is good, but Chrome is pretty lousy about just about everything else, so... Pick your poison, I guess.

3. This was already mentioned as the better user experience anyway, albeit left open-ended for designers, so it is not exactly clear what you are trying to add. Did you randomly stop reading in the middle?


Now y'all are just analysing the UX of YouTube and Chrome.

The problem is that by agreeing to close the tab, you're agreeing to discard the comment. There's currently no way to bring it back. There's no way to undo.

AI can't fix that. There is Microsoft's "snapshot" thing but it's really just a waste of storage space.


I mean, it can. But so can a task runner that periodically saves writing to a clipboard history. The value is questionable, but throwing an LLM at it does feel like overkill in terms of overhead.

Which is fine! That's me making the explicit choice that yes, I want to close this box and yes, I want to lose this data. I don't need an AI evaluating how important it thinks I am and second guessing my judgement call.

I tell the computer what to do, not the other way around.


You do, however, need to be able to tell the computer that you want to opt in (or out, I suppose) of being able to use AI to evaluate how important it thinks your work is. If you don’t have that option, it is, in fact, the computer telling you what to do. And why would you want the computer to tell you what to do?

> You become accustomed to blindly hitting "Yes" every time you've accidentally typed something into the text box, and then that time when you actually put a lot of effort into something... Boom. It's gone.

I'm not sure we need even local AIs reading everything we do for what amounts to a skill issue.


You're quite right that those with skills have no need for computers, but for the rest of us there is no need for them to not have a good user experience.

I have the exact opposite muscle memory.

I think this is covered in the Bainbridge automation paper https://en.wikipedia.org/wiki/Ironies_of_Automation ... When the user doesn't have the practiced context you described, expecting them to suddenly have it and do the right thing in a surprise moment is untenable.

A rarer-ish chance to use this XKCD: https://xkcd.com/1205/

I'd put this in "save 5 seconds daily" to be generous. Remember that this is time saved over 5 years.


Are you also going to go into detail about the use of AI to generate the code?

Why would I? When you buy a car part, they don't print on the box they used AutoCAD in order to build it. When you rent a movie they don't talk about using DaVinci Resolve to edit it, right? People use AI now to build software. I don't think that's going to change any time soon.

I find it really funny that so many people who vibecode software do their best to cover the AI tracks, especially when it's open-source. I think it's because you all know how negative the public sentiment about AI is, and the sentiment continues to build.

Here you are talking not just about how you've used it, but also how you're planning to sell this as a plugin to musicians – who, as a group, are overwhelmingly averse to AI. Because if they weren't averse to AI, they'd just be using Suno.

Best of luck.


I was watching a video where Sean Costello, the creator of Valhalla Reverbs, was talking about the original Schroeder algorithm design for the first digital reverberator. Schroeder had to schedule time on an IBM time-sharing system days in advance. Then he'd have to write out the code in machine language. Then he had to drive 30 minutes to where the only DAC he had access to was in order to test out his algorithm. Repeat. We don't do that shit anymore.

How is this different?

I don't try to hide my AI tracks. I'll gladly tell anybody that AI helped me do it because it did such a fantastic job. I mean, that's literally what this post is about!

My plug-in sounds way better than the UM-282 which was hand-coded before AI was getting popular. That's all that matters!

Honestly I think you should re-examine your own position. I see you've written plugin software in the past and I'm sure you spent a long time on DSP algorithms and learning and understanding.

Well I did the same thing with web-based software for the last 25 years. The world doesn't give two shits man. The world is going to do what the world is going to do.

You're free to have your own opinion.


> How is this different?

What Schroeder was doing wasn't fundamentally built on plagiarism and copyright laundering. The externalities of commercial LLMs are pretty well-documented at this point.

> I'll gladly tell anybody that AI helped me do it because it did such a fantastic job.

And yet you got defensive when I asked you about it. I stand by what I said – you're worried about how it reflects on you and your product. Justifiably so, considering the audience you're going to be selling to.

> That's all that matters!

If that's what you need to believe, I guess? Again – you want to sell vibecoded software to people who themselves are threatened by AI and you're hoping they won't notice or won't care.

> Honestly I think you should re-examine your own position. I see you've written plugin software in the past and I'm sure you spent a long time on DSP algorithms and learning and understanding.

I've spent plenty of time examining my own position and I have come to the conclusion that, no matter how good vibecoding is, it's fundamentally immoral and I judge its practitioners harshly.

> The world doesn't give two shits man. The world is going to do what the world is going to do.

And you're just along for the ride? Have a backbone, at least.


Brutal.

"opinionated and limited" in what way? I'm also curious to hear how AU differs appreciably from VST3, mostly at a conceptual level (as I'm already familiar with the low-level details of both APIs).


For instance VST3 does not support many-to-one, and only awkwardly supports two-to-one -- via second-class "side-chaining" -- sound processing architectures. But the need for such possibilities may not be clear to those who only know the primary one-to-one audio flow architecture ubiquitous in digital audio workstation ("DAWWWWWWW") designs. VST3 neatly fits into this architecture. In 2025 I don't think more of this is particularly innovative. In contrast, the AudioUnit spec is open-ended and has an internal graph audio flow design that can readily facilitate other signal processing architectures. If you don't want to think outside the "DAW", you don't have to, but some of us musicians do.


> For instance VST3 does not support many to one, and only awkwardly supports two to one -- via second-class "side-chaining" -- sound processing architectures

This is a limitation of your host and plugins and not of VST3; plugins can declare arbitrarily many input/output busses for audio and events for many-to-many connections. It's just that in practice, hosts don't like this, and JUCE has a busted interface for it.


This is really interesting – AU's notion of having separate input and output "elements" (buses, more or less) is one of the worst parts of the whole API.

I understand why historically these design decisions were made, but it's not like they really enable any functionality that the other APIs don't. It's just that since the host can call `Render` more than once per sample block (ideally the host would call it once per sample block per element, but there's nothing saying the host can't call it redundantly), there's additional bookkeeping that plugins have to do surrounding the `AudioTimeStamp`. And for what? There's nothing AU can do that the other formats can't.

If a plugin has multiple fully independent buses, the model mostly works, but if there's any interdependence then things get even more complicated. Say you have two separate stereo elements that don't have any routing between them, but there's MIDI input or sample-accurate parameter changes for the given block. Now you have to keep those events around until the host has moved on to the next block, which means the plugin has to maintain even more internal buffering. This sort of lifetime ambiguity was one of the worst parts of VST2. In VST2, how long does MIDI input last? "Until the next call to `process()`." In AUv2, how long do MIDI input data or parameter change events last? "All of the render calls of the next timestamp, but then not the timestamp after that." Great, thanks.

Modern plugins, upon receiving a `Render` for a new timestamp, will just render all of their elements at the same time, but they'll internally buffer all the outputs and then just copy the buffers out per-element-render call. So, it reduces down to the same thing that other APIs do, just with more pageantry.
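
A rough sketch of that bookkeeping, with entirely hypothetical names (this is not the actual AU API, which traffics in AudioTimeStamp / AudioBufferList and so on):

    /* Render everything once per timestamp, then hand out cached buffers. */
    typedef struct {
        double   last_timestamp;   /* sample time of the last full render */
        float  **element_buffers;  /* one internal buffer per output element */
        int      num_elements;
    } render_cache;

    /* Stand-in for the real work: process every element into the internal
       buffers and consume any pending MIDI / parameter events. */
    static void process_all_elements(render_cache *c, int block_size)
    {
        (void)c; (void)block_size;
    }

    static const float *render_element(render_cache *c, double timestamp,
                                       int element, int block_size)
    {
        /* the host passes identical timestamps for redundant calls, so an
           exact comparison is enough to detect "new block vs same block" */
        if (timestamp != c->last_timestamp) {
            process_all_elements(c, block_size);
            c->last_timestamp = timestamp;
        }
        /* Later calls for the same timestamp just return the cached output. */
        return c->element_buffers[element];
    }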

And yet, plugin instances having to manage their own input connection types is somehow even worse. Again, I understand why it was done this way – allowing plugins to "pull" their own inputs lets an audio graph basically run itself with very little effort from the host – it can just call `Render` on the final node of the chain, and all of the inputs come along for free.

It's a compelling value proposition, but unfortunately it fully prevents any sort of graph-level multithreading. Modern hosts do all sorts of graph theory to determine which parts of the graph can be executed in parallel, and this means that the host has to be in charge of determining which plugins to render and when. Even Logic does this now. The "pull model" of an AU instance directly calling `Render` on its input connections is a relic of the past.

Anyway. VST3, CLAP, even VST2 support multiple input and output buses (hell, one of my plugins has multiple output buses for "main out", "only transients", and "everything other than a transient") – it's just a question of host support and how they're broken out. Ironically, Logic is one of the clunkiest implementations of multi-out I've seen (Bitwig is far and away the best).


I am not familiar with the differences between these two at all, as someone who uses audio plugins but does not develop them. What are the main differences, and why is OP claiming that there are far better methods of doing so?


The short answer is that there really aren't. All extant audio plugin APIs/formats are basically ways of getting audio data into and out of a `process()` (sometimes called `Render`) function which gets called by the host application whenever it needs more audio from the plugin.

Every API has its own pageantry not just around the details of calling `process()`, but also exposing and manipulating things like parameters, bus configuration, state management, MIDI/note i/o, etc. There are differences in all of these (sometimes big differences), but there aren't any real crazy outliers.

At the end of the day, a plugin instance is a sort of "object", right? And the host calls methods on it. How the calls look varies considerably:

- VST2 prescribes a big `switch()` statement in a "dispatcher" function, with different constants for the "command" (or "method", more or less).

- VST3 uses COM-like vtables and `QueryInterface`.

- CLAP uses a mechanism where a plugin (or host) is queried for an "extension" (identified by a string); the query returns a const pointer to a vtable, or NULL if the host/plugin doesn't support the extension.

- AudioUnits has some spandrels of the old Mac "Component Manager", including a `Lookup` method for mapping constants to function pointers (kind of similar to VST2, except it returns a function rather than dispatching to it directly). AU also has a "property" system with getters and setters for things like bus configuration, saving/loading state, parameter metadata, etc.
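
To make the CLAP-style query concrete, here's a minimal sketch of the pattern (names here are illustrative, not the actual CLAP header definitions):

    #include <string.h>

    /* An "extension" is just a vtable the host asks for by string id. */
    typedef struct {
        unsigned (*count)(void *plugin_data);   /* e.g. number of parameters */
    } params_extension;

    static unsigned my_param_count(void *plugin_data) { (void)plugin_data; return 2; }
    static const params_extension my_params = { my_param_count };

    static const void *get_extension(void *plugin_data, const char *id)
    {
        (void)plugin_data;
        if (strcmp(id, "example.params") == 0)
            return &my_params;          /* vtable for the requested extension */
        return NULL;                    /* extension not supported */
    }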

I'm not sure why OP is claiming that AU is somehow unopinionated or less limited. It doesn't offer any particular extensibility that the other formats don't.


Is there some software to convert VST3 to AU? Or do you develop both separately?

Someone else said

> Almost all VST plugins have an AU version (like 80%-90% or so, and 99% of the major ones).

Which I noticed as well. I wondered if that required a large time investment to support both, or if there are API translation layers available.


Historically, there was an API translation layer that VST2 plugins used (called Symbiosis), but these days the vast majority of plugin devs use a framework like JUCE, which has multiple different "client-side" API implementations (VST2, VST3, etc.) that all wrap around JUCE's native class hierarchy.

There's a few other frameworks floating around (Dplug for writing in D, a few others in C++), but JUCE is far and away the most common.


Me three, I'd like to know. I'm a producer/mixer who favors AU over VST3 plugins. Not for any opinionated reason, merely because my experience is that they're slightly less error-prone in my DAW.


> Clap doesn't allow describing the plugin in a manifest (like VST3 and LV2 do). This allows plugins to be scanned faster.

VST3 only recently gained the `moduleinfo.json` functionality and support is still materialising. Besides, hosts generally do a much better job about only scanning new plugins or ones that have changed, and hosts like Bitwig even do the scanning in the background. The manifest approach is cool, but in the end, plugin DLLs just shouldn't be doing any heavy lifting until they actually need to create an instance anyway.

> Also, CLAP uses 3 or 4 methods to represent MIDI data (MIDI1, MIDI1 + MPE, MIDI2, CLAP events). This requires writing several converters when implementing a host.

I've not done the host-side work, but the plugin-side work isn't too difficult. It's the same data, just represented differently. Disclaimer: I don't support MIDI2 yet, but I support the other 3.

On the other side, VST3 has some very strange design decisions that have led me to a lot of frustration.

Having separate parameter queues for sample-accurate automation requires plugins to treat their parameters in a very specific way (basically, you need audio-rate buffers for your parameter values that are as long as the maximum host block) in order to be written efficiently. Otherwise plugins basically have to "flatten" those queues into a single queue and handle them like MIDI events, or alternately just not handle intra-block parameter values at all. JUCE still doesn't handle these events at all, which leads to situations where a VST2 build of a JUCE plugin will actually handle automation better than the VST3 build (assuming the host is splitting blocks for better automation resolution, which all modern hosts do).
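
For reference, the "flatten those queues into a single queue" workaround looks roughly like this (hypothetical types standing in for the SDK's per-parameter queue interfaces, not the real ones):

    #include <stddef.h>
    #include <stdlib.h>

    typedef struct { unsigned param_id; int sample_offset; double value; } param_event;

    static int by_offset(const void *a, const void *b)
    {
        const param_event *x = a, *y = b;
        return x->sample_offset - y->sample_offset;
    }

    /* "Flatten": merge each parameter's queue into one time-ordered event
       list so the block can be rendered event-by-event, like MIDI. */
    static size_t flatten(param_event *const *queues, const size_t *lens,
                          size_t num_queues, param_event *out)
    {
        size_t n = 0;
        for (size_t q = 0; q < num_queues; q++)
            for (size_t i = 0; i < lens[q]; i++)
                out[n++] = queues[q][i];
        qsort(out, n, sizeof *out, by_offset);
        return n;
    }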

duped's comment about needing to create "dummy" parameters which get mapped to MIDI CCs is spot-on as well. JUCE does this. 2048 additional parameters (128 controllers * 16 channels) just to receive CCs. At least JUCE handles those parameters sample-accurately!
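
The arithmetic behind that 2048 is just a flat index over (channel, controller); a hypothetical mapping scheme (the base id is arbitrary) would be:

    /* Reserve a contiguous block of "dummy" parameter ids, one per
       (MIDI channel, CC number) pair: 16 * 128 = 2048 ids. */
    enum { MIDI_CC_PARAM_BASE = 10000 };

    static unsigned cc_to_param_id(unsigned channel, unsigned cc)
    {
        return MIDI_CC_PARAM_BASE + channel * 128u + cc;
    }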

There's other issues too but I've lost track. At one point I sent a PR to Steinberg fixing a bug where their VST3 validator (!!!) was performing invalid (according to their own documentation) state transitions on plugins under test. It took me weeks to get the VST3 implementation in my plugin framework to a shippable state, and I still find more API and host bugs than I ever hit in VST2. VST3 is an absolute sprawl of API "design" and there are footguns in more places than there should be.

On the contrary, CLAP support took me around 2 days, 3 if we're being pedantic. The CLAP API isn't without its share of warts, but it's succinct and well-documented. There are a few little ones (the UI extension in particular should be clearer about when and how a plugin is supposed to actually open a window), but they're surmountable, and anecdotally I have only had to report one (maybe two) host bugs so far.

Again, disclaimer: I was involved in the early CLAP design efforts (largely the parameter extension) and am therefore biased, but if CLAP sucked I wouldn't shy away from saying it.


> Having separate parameter queues for sample-accurate automation requires plugins to treat their parameters in a very specific way (basically, you need audio-rate buffers for your parameter values that are as long as the maximum host block) in order to be written efficiently.

Oh, I forgot about parameters. In VST3, parameter changes use linear interpolation. So the DAW can predict how the plugin will interpret parameter values between changes and use this to create the best piece-wise linear approximation of the automation curve (not merely sampling the curve every N samples uniformly, which is not perfect).

CLAP has no defined interpolation method, so every plugin would interpolate the values in its own unique and unpredictable way (and if you don't interpolate, there might be clicks). It is more difficult for a host to create an approximation of an automation curve, so with CLAP "sample-precise" might not actually be sample-precise.
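
To illustrate what a defined interpolation buys: with plain linear interpolation, host and plugin can compute the same in-between value from two change points (a minimal sketch, not SDK code):

    /* Linearly interpolate a parameter between two change points in a block,
       so host and plugin agree on every in-between sample value. */
    static double param_at(int sample, int off0, double v0, int off1, double v1)
    {
        if (sample <= off0) return v0;
        if (sample >= off1) return v1;
        double t = (double)(sample - off0) / (double)(off1 - off0);
        return v0 + t * (v1 - v0);
    }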

I didn't find anything about interpolation in the spec, but it mentions interpolation for note expressions [1]:

> A plugin may make a choice to smooth note expression streams.

Also, I thought that maybe CLAP should have used the same event for parameters and note expressions? Aren't they very similar?

> duped's comment about needing to create "dummy" parameters which get mapped to MIDI CCs is spot-on as well. JUCE does this. 2048 additional parameters (128 controllers * 16 channels) just to receive CCs. At least JUCE handles those parameters sample-accurately!

What is the purpose of this? Why does a plugin (unless it is a MIDI effect) need values for all controllers? Also, MIDI2 has more than 128 controllers anyway, so this is a poor solution.

[1] https://github.com/free-audio/clap/blob/main/include/clap/ev...


> Oh, I forgot about parameters. In VST3, parameter changes use linear interpolation. So the DAW can predict how the plugin will interpret parameter values between changes and use this to create the best piece-wise linear approximation of the automation curve (not merely sampling the curve every N samples uniformly, which is not perfect).

Can you link to any code anywhere that actually correctly uses the VST3 linear interpolation code (other than the "again_sampleaccurate" sample in the VST3 SDK)? AU also supports "ramped" sample-accurate parameter events, but I am not aware of any hosts or plugins that use this functionality.

> CLAP has no defined interpolation method, and every plugin would interpolate the values in its own, unique and unpredictable way (and if you don't interpolate, there might be clicks). It is more difficult for a host to create an approximation for an automation curve. So with CLAP "sample-precise" might be not actually sample-precise.

Every plugin does already interpolate values on its own. It's how plugin authors address zipper noise. VST3 would require plugin authors to sometimes use their own smoothing and sometimes use the lerped values. Again, I'm not aware of any plugins that actually implement the linear interpolated method. I think Melda? It certainly requires both building directly on the VST3 SDK and also using the sample-accurate helpers (which only showed up in 2021 with 3.7.3).

Anyway, I maintain that this is a bad design. Plugins are already smoothing their parameters (usually with 1 pole smoothing filters) and switching to this whole interpolated sample accurate VST3 system requires a pretty serious restructuring.
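
For contrast, the smoothing that plugins already do is tiny; a generic sketch of the usual 1-pole smoother (the coefficient choice is taste/time-constant dependent):

    /* Classic one-pole parameter smoother: every sample, move a fraction of
       the remaining distance toward the target value. Kills zipper noise
       regardless of how the host delivers parameter changes. */
    typedef struct { double current, coeff; } smoother;

    static double smooth_next(smoother *s, double target)
    {
        s->current += s->coeff * (target - s->current);
        return s->current;
    }

    /* coeff is typically derived from a time constant, e.g.
       coeff = 1.0 - exp(-1.0 / (smoothing_seconds * sample_rate)). */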

Personally, I would have loved having a parameter event flag in CLAP indicating whether a plugin should smooth a parameter change or snap immediately to it (for better automation sync). Got overruled, oh well.

> What is the purpose of this? Why does a plugin (unless it is a MIDI effect) need values for all controllers? Also, MIDI2 has more than 128 controllers anyway, so this is a poor solution.

Steinberg has been saying exactly this since 2004 when VST3 was first released. Time and time again, plugin developers say that they do need them. For what? Couldn't tell you, honestly. In my case, I would have to migrate a synth plugin from MPE to also be able to use the VST3 note expressions system, and I absolutely cannot be bothered - note expressions look like a nightmare.

And this is the chief problem with VST3. The benefits are either dubious or poorly communicated, and the work required to implement these interfaces is absurd. Again – 3 days to implement CLAP vs 3 weeks to implement VST3 and I'm still finding VST3 host bugs routinely.


> 2048 additional parameters (128 controllers * 16 channels) just to receive CCs.

It's worth mentioning that it's 2 x 16 x 16,384 in MIDI 2, plus 128 x 16 for MIDI 1, because you gotta support both.

But to quote Steinberg devs, "plugins shouldn't handle MIDI CC at all"


Yet more vibe-coded spam. For context, this is the same author who just got flagged off the front page for an LLM-written book about Lisp.


how is this spam? it’s posted the same way as any HN post and it’s not soliciting anything.


He’s been on a spree, “writing” and posting a dozen books or other projects in the last 2 days.


More context: haters are saying LLM generated content is bad. As a Senior Full Stack Developer with over 26 years of pro experience, I'm having the time of my life with these new AI powers and the doors and discussions they open. People are upvoting. I'm personally not getting anything from this open source sharing. You're the one calling spam. When you pay 200 a month to max your claude code output and really hunker down, think twice before you share its work. Not everyone understands.


It is absurd to pay $200 a month for it when GitHub Copilot has agentic development basically for free for open source developers, including GPT/Claude/Gemini models. If you want to waste your own money, fine, but don't expect others to waste a single dollar of their money when decent options are available at no cost.

Claude especially is well known for wasting output tokens, maximizing output token use when fewer tokens would do just fine, although other models have picked up this disease of late too.

Yes, better models could produce better output, especially for a large project, but in my experience, the quality of the output depends 10x more on the clarity and refinement of the input. In the real world, the bulk of engineering is incremental, not one-shot.

Also, when I see a large repo with just three commits made all at once, it tells me that the vibe-coded output hasn't really been reviewed or refined over time, that it has not withstood the test of time at all, and that it hasn't received the love and attention it needs to mature, so it cannot be trusted at this stage of its development.


wait what??? github copilot is free??? is that only a free trial?


Its Pro plan is not free for everyone, but it is free for verified students, teachers, and maintainers of popular established open source projects. See https://github.com/features/copilot/plans . I clearly noted in my parent comment the constraint of open source developers. It's not a trial. If you get approved, you get re-evaluated each month.


> LLM-written book about Lisp.

I don't really care if you're an astronaut, time traveler, or a 15 year old. AI slop prompted by anyone is slop, and I'm a human with limited time which I'd rather not waste on slop


This comment itself reads like AI slop.


> developers who want to make improvements to the codebase that don't get prioritized

So, to clarify – developers want to make improvements to the codebase, and you want to give that work to AI? Have you never been in the shoes of making an improvement or a suggestion for a project that you want to work on, seeing it given to somebody else, and then being assigned just more slog that you don't want to do?

I mean, I'm no PM, but that certainly seems like a way to kill team morale, if nothing else.

> I had a frontend engineer, who, when I could just find a way to give him time to do whatever he wanted, would just constantly make little improvements that would incrementally speed up pageload.

Blows my mind to think that those are the things you want to give to AI. I'd quit.


I completely agree. Those annoying UI bugs and the general need to refactor are often the same technical debt. If you want to make an already bad codebase even worse, giving those tasks to AI is probably the quickest and surest way.

The ability to untangle old bad code and make bigger broader plans for a codebase is precisely where you need human developers the most.


I'd give them to AI because they're generally just not getting done. I worked hard to get that frontend dev time to make those improvements, but there was no chance it was ever going to be enough. When you're talking about enterprise software, minor improvements to pageload speed do not move the needle on revenue. When you have a list of features that customers will actually pay for, those will get priority 100% of the time.

Everybody's job is to serve the company priorities. Engineers don't get to pick the tasks they want to do because they're getting paid to be there. I also have spent lots of time doing things I'd rather not do, because that's the nature of a job (plus a pile of stock options incentivizes me).

Better to have those tasks done by AI than not at all.


There are tons of small improvements I want to make to our codebase that would be great but take effort. Refactors are a great example. We hand those to Devin (or Cursor background agents, etc), review, and we're all happier for it. Our PM uses it to fix those little UI annoyances all the time, like "update the text on this button". It's been wonderful.


Really says something about the HN crowd that you're getting downvoted for this.


> forcing the inclusion of woke, DEI language in totally unrelated grants, even in areas such as maths

What? Can you show any examples of this?


We have two crazy policies:

- Forcing this irrelevant nonsense into maths grant applications.

- Cancelling the grant applications because they contain this nonsense.

And science is the loser.


One example:

This grant was for $500,000:

" Elliptic and Parabolic Partial Differential Equations

ABSTRACT Partial differential equations (PDE) are mathematical tools that are used to model natural phenomena like electromagnetism, astronomy, and fluid dynamics, for example. This project is concerned with understanding how the solutions to such equations behave. The Laplace equation

[...] Motivated by the goal of increasing participation from underrepresented groups [...]

The Laplace equation is a PDE that models steady-state phenomena in a truly uniform environment. Since the world that we live in is not an isotropic vacuum, the mathematical equations that govern many natural phenomena are often more complicated than Laplace’s equation. For example, the Schrodinger equation [...] "

https://www.nicheoverview.com/grant/?grant_id=nsf_2236491


Given that the current administration is slashing so many programs, it's clear that many grants contain "DEI" or DEI-adjacent language. What is not clear is:

1) That this is "forced" by any government policy.

2) That any such policies could be attributed only to the Biden administration, or even to any single administration.

I was curious so I stalked the PI in the linked grant, who happens to be female. Here is a relevant link, 3rd or so on Google: https://www.montana.edu/news/22806/montana-state-mathematics...

Burroughs said Davey stands out not just for her mathematical prowess but also for her commitment to students in all levels of study. Davey is co-director of the department’s Directed Reading Program, which pairs undergraduate students with graduate student mentors to read and discuss books on mutual subjects of interest over the course of a semester.

“It’s a way for us to connect graduate student mentors with undergraduates, who then see what math can look like outside the classroom,” Davey said.

...

A portion of the funding from the CAREER grant will enable Davey to extend her support to young mathematicians across the country. She will organize and conduct a summer workshop in Bozeman open to 40 upper-level graduate students and post-doctoral researchers from around the nation, particularly those from underrepresented groups. Cherry noted the outreach effort coincides with the college’s long-term goal of better serving underrepresented communities in the state.

So:

1. From that it does seem she is personally invested in making her subject more approachable.

2. The college itself has a goal of encouraging such outreach.

3. In case you think the university itself was influenced by the government policies, here's a "DEI" program from its website that started in 2016: https://www.montana.edu/provost/d_i.html -- if you browse around the site there are even more programs going farther back.

Additionally, I'm personally aware of "DEI" policies in universities going back more than two decades now, long before the term "DEI" was even coined.

Seems highly likely that the language in the grant was more due to the researcher's personal preferences and the institution's policies than anything any government policies.


If you trace back through the executive orders (on then off then on again...) regarding DEI, it starts with Obama. Biden did have several and it seems like things started really getting mandated and serious then, perhaps due to BLM. We seem to have an unstable oscillation going back and forth here until it breaks.

But yes it wasn't just top-down. The diversity statements in faculty hiring started about ten years ago and started becoming mandatory and used for screening at many places about five years ago.


Swiss maybe, but as a Dutch resident who was just in SF for the holidays... SF food is definitely more expensive.


Fair, my basis for comparison was the Amsterdam airport. I paid 35 euro for a soggy reheated sandwich and coffee. SFO has 18 dollar poke bowls.

Perhaps Amsterdam airport pricing is extensively marked up compared to local pricing (understandable). Geneva was just plain expensive.


airports are always a nightmare!

the other important thing to keep in mind is that in the EU in general, there are no taxes added to the bill, and tips are less of a thing here, so there's not a magical 20%+ hidden charge to factor in on everything you order.


> my basis for comparison was the Amsterdam airport

Your basis for how much people pay for groceries was how expensive a sandwich was at an airport?


We get signal wherever we can :) Boston Logan has about a 20-30% premium on normal groceries. SFO is probably closer to 5-10%. I assumed the premium could not be greater than 50% for Amsterdam airport.


At some point you have to be willing to admit you don't have any signal


A desiccated Swiss rösti will cost more than a Michelin restaurant in SF.


> No, it's the opposite of that. Third-person pronouns are the ones people use to discuss third persons among themselves. The person who "wants their pronouns used" isn't party to the conversation.

That's also how names work, though. As in, "Are you coming to Billy's barbeque?" when Billy is not in the conversation.

> At no point in human history, in any human language I am aware of, has anyone ever gotten to choose their own third-person pronouns. It's absurd and bizarre.

What's the realistic difference between pronouns and nicknames? Like a "Richard" going by a "Rich", or "Rick"? That's their decision, right? Or someone choosing to go by their middle name rather than their first name?


Is the implication here that needle and syringe programs cause needles to be left everywhere?

Because, if so... let's just sit with that for a second and think it through.


maybe if you act just a little more condescending i will have a clue what you are trying to say


By what mechanism would reducing needle and syringe programs lead to fewer needles being left in public places? It's not like access to needles causes people to take up an injection drug habit.


He's going off the logic that the more services you provide for drug addicts, the more drug addicts you get. It's tied to the idea that an increase in homeless services attracts more homeless, which is true if you have a federalized system like the USA where the majority of homeless go to one place (or city).

But there's no evidence that drug services increase drug use.


there are different ways of accomplishing a needle program. around here they hand out packs of 100 without any stipulation. to everyone's surprise, our city is now littered with stray needles and requires constant cleanup. they're everywhere. the various programs do attract people from other states. this much is evident from our shelter logs, which survey where they are from.

it's important to note that it's probably not a very large set of them that dump their needles publicly. this is outright sociopathic and evil, which i don't think most of them are. this distinction is important because the sociopathic homeless do make it a much more taboo issue to deal with.


Your local community implemented a thing poorly, hence nobody should ever attempt to improve anything? You spend a lot of time accusing others of dishonesty and condescending, but your own comments read much more in that spirit.

Housing support with social services on the side can be done well enough to help some fraction of the drug-using homeless recover. Some fraction may remain drug addicted, but now have a safe space, which is also an improvement. Some fraction may have lasting mental illnesses they struggle with, but even then a safe space for that struggle improves both the prognosis and the surrounding community.


>Your local community implemented a thing poorly, hence nobody should ever attempt to improve anything?

the original context was a ridiculous characterization of anyone being against a needle program. i am giving you one context of why someone might be against one, from the perspective of how it has been going in my city. whether standard protocol or poorly implemented, that is how it has been going.

>You spend a lot of time accusing others of dishonesty and condescending, but your own comments read much more in that spirit.

the condescension is hard to avoid when replies are posing snarky rhetorical questions which make understanding or addressing anything difficult. if you felt i've been dishonest, feel free to point it out. but preferably not in the way you did a second ago which took the form of "SO WE SHOULDN'T DO ANYTHING TO IMPROVE EVER?" which was clearly a good faith interpretation.


With respect, you should reread my original post, which I think you’ve taken pretty personally. It’s a simple statement — some people think that drug addicts are weak and immoral and deserve to die on the street. Another reply at the same time as yours said as much.

I don’t know how you get from that to “ridiculous characterization of anyone against a needle program.” Needle programs aren’t even the most important thing under discussion here, housing is. As you’re pointing out, knowingly or not, needle programs in isolation reduce some harms but increase others. Housing is often the root issue in harm reduction, but also one of the most expensive and politically charged.

