Plenty of quotes from anti-mine residents. Not one from a pro-mine voter.

These are from the company.

“We're pleased that Hood County voters saw through the sham incorporation effort and rejected it at the ballot box,” a spokesperson for MARA said.

“As we've said from the start, this was an unlawful attempt to weaponize municipal incorporation against law-abiding businesses like MARA.”

“We remain focused on creating jobs, supporting local communities, and being a responsible neighbor.”

The story otherwise paraphrases general sentiment (e.g., people moved there to avoid city regulations).


Bringing down the murder rate doesn't excuse the torture of innocent people (or anyone, for that matter).

I do not have any data about people being tortured.

Where are those reports? I am not in favor of torturing even those animals.

But they should repair the damage done by working for the community, building houses for example, which is something I have witnessed being done.



SerpApi wouldn't even be a thing if Google offered an equivalent API...

why does google need to offer it?

Because Google scrapes other sites' data to build its AI market dominance in Gemini. The promise of web 2.0 was APIs; Google aims to cement its position in web 4.0 while suing others for doing what it does on a mass scale.

Adversarial interoperability is a digital human right. Either companies provide it reasonably, or people will assert their rights through other means.


Why would Google offer an API? This is like saying, when Apple sues an employee for stealing IP, "nobody would steal the IP if they gave it away for free." The question is: why would they?

Interesting... this could make training much faster if there's a universal low-dimensional space that models naturally converge to, since you could initialize or constrain training inside that space instead of spending massive compute rediscovering it from scratch every time.
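
A rough sketch of what that could look like, basically the "intrinsic dimension" trick: freeze a random low-dimensional basis and train only the coordinates inside it. Everything here (the basis P, d=16, the toy linear task) is made up for illustration:

    import torch

    D, d = 10_000, 16                       # full weight count, assumed subspace dim
    P = torch.randn(D, d) / d ** 0.5        # frozen random basis, never trained
    z = torch.zeros(d, requires_grad=True)  # the only trainable parameters

    x, y = torch.randn(64, 100), torch.randn(64, 100)  # toy data
    opt = torch.optim.Adam([z], lr=1e-2)
    for _ in range(200):
        W = (P @ z).view(100, 100)          # reconstruct full weights from z
        loss = torch.nn.functional.mse_loss(x @ W, y)
        opt.zero_grad(); loss.backward(); opt.step()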


You can show, for example, that siamese encoders for time series, trained with an MSE loss on similarity and without a decoder, will converge to the same latent space up to orthogonal transformations (since MSE acts like a Gaussian prior, which doesn't distinguish between rotations).

Similarly, I would expect that transformers trained on the same next-word-prediction loss, if the data is at all similar (like human language), would converge to approximately the same space. And to represent that same space, the weights are probably similar too. Weights in general seem to occupy low-dimensional spaces.

All in all, I don't think this is that surprising, and I think the theoretical angle should be (should have been?) to find mathematical proofs, like this paper: https://openreview.net/forum?id=ONfWFluZBI
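
A toy version of that siamese setup, to make it concrete (PyTorch; the data, sizes, and similarity labels are all placeholders, not from any paper):

    import torch
    import torch.nn as nn

    enc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 16))
    opt = torch.optim.Adam(enc.parameters(), lr=1e-3)

    x1, x2 = torch.randn(256, 128), torch.randn(256, 128)
    target_sim = (x1 * x2).sum(-1)      # toy similarity labels

    for _ in range(100):
        z1, z2 = enc(x1), enc(x2)       # weight-tied: same encoder on both branches
        sim = (z1 * z2).sum(-1)         # dot-product similarity, no decoder
        loss = nn.functional.mse_loss(sim, target_sim)
        opt.zero_grad(); loss.backward(); opt.step()

Note that (Qz1)·(Qz2) = z1·z2 for any orthogonal Q, so this loss literally cannot pin the latent space down beyond a rotation, which is the "up to orthogonal transformations" part.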


>instead of spending massive compute rediscovering it from scratch every time

it's interesting that this paper came out of JHU, not one of the groups at OAI/Google/Apple, considering that the latter have probably spent 1000x more resources on "rediscovering"


Wouldn't this also mean that there's an inherent limit to that sort of model?


Not strictly speaking? A universal subspace can be identified without necessarily being finite.

As a really stupid example: the sets of integers less than 2, 8, 5, and 30 can all be embedded in the set of integers less than 50, but that doesn't require that the set of integers is finite. You can always get a bigger one that embeds the smaller.


On the contrary, I think it demonstrates an inherent limit to the kind of tasks / datasets that human beings care about.

It's known that large neural networks can even memorize random data. The number of random datasets is unfathomably large, and the weight space of neural networks trained on random data would probably not live in a low dimensional subspace.

It's only the interesting-to-human datasets, as far as I know, that drive the neural network weights to a low dimensional subspace.


> Wouldn't this also mean that there's an inherent limit to that sort of model?

If they all need just 16 dimensions, then if we ever make one that needs 17, we know we're making progress instead of running in circles.


you can always make a new vector that's orthogonal to all the ones currently used and see if the inclusion improves performance on your tasks
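
In numpy terms, something like this (the 16-direction basis in a 32-D weight space is a made-up stand-in for whatever subspace the model actually uses):

    import numpy as np

    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((32, 16)))
    basis = Q.T                     # 16 orthonormal directions in R^32

    v = rng.standard_normal(32)
    v -= basis.T @ (basis @ v)      # project out everything the basis covers
    v /= np.linalg.norm(v)          # candidate 17th direction

    assert np.allclose(basis @ v, 0, atol=1e-10)

Then append v as a 17th direction and re-run your evals; if the metric moves, the extra dimension mattered.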


> see if the inclusion improves performance on your tasks

Apparently it doesn't, at least not in our models, with our training, applied to our tasks.

So if we expand one of those 3 things and notice that the 17th vector makes a difference, then we're making progress.


Or an architecture chosen for that subspace or some of its properties as inductive biases.


Only if they change the name several times only to revert back to the original


This mayor represents more people than many state governors


The Republicans in the White House told them to vote for Cuomo


I feel that Sliwa suggested Mamdani over Cuomo.


I remember hearing that Cuomo called to get an endorsement from Trump. I'm not sure how much of that went through, but it would explain why it seemed like Cuomo completely ate Sliwa's votes. 7%, even for NYC, is absolutely below par for Republicans.


It's so hard finding dubbed anime WITH subtitles. Like actually ridiculously hard.

My wife is deaf, and I like dubs so I can use my laptop while we chill, but she literally needs subtitles, so it's super annoying when a show:

1. Has no subtitles for dubbed.

2. Their subtitles are just the subbed version's subtitles which are drastically different from what the dubbed VAs are actually saying.

3. Has subtitles for some episodes but none for others seemingly randomly.


> 2. Their subtitles are just the subbed version's subtitles which are drastically different from what the dubbed VAs are actually saying.

I get that you might not like it, but it sure beats the option you didn't list:

4. Has auto-generated subtitles for the dub that fail in dramatic and distracting ways, especially for proper nouns or any kind of show-specific invented terminology


Whilst sometimes distracting, I prefer auto-generated subtitles to none at all


Lack of closed captions, and dubtitles, are definitely very real issues as well, though this article is focused solely on subtitles.


at this point you're probably better off going to a torrent site and searching for 'dual-audio'


This is unfortunately the answer - VLC/MPV would allow you to select the dubbed audio and also select the EN-US subtitles that are based on the original audio.

GabeN saying that piracy is first and foremost a service problem is still right on the money.
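
For what it's worth, mpv makes the track selection a one-liner (track ids vary per file, so the numbers below are placeholders):

    mpv --alang=en --slang=en episode.mkv   # prefer English audio and English subs
    mpv --aid=2 --sid=3 episode.mkv         # or force specific track numbers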


If you ever need to hack some subs by yourself, whisper.cpp can output .srt files and you can run the small or medium models even on modest hardware.
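
Something like this, roughly (the CLI binary has been renamed across whisper.cpp versions, and older builds call it ./main, so adjust for yours):

    # whisper.cpp expects 16 kHz mono WAV input
    ffmpeg -i episode.mkv -vn -ar 16000 -ac 1 audio.wav
    ./whisper-cli -m models/ggml-small.en.bin -f audio.wav --output-srt
    # writes audio.wav.srt alongside the input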


CrunchyRoll did something like that to the Re:Zero dub's captions, and it's a disgrace. Every single proper noun is wrong. It messes up every fantasy item/place/monster/etc. name, and can't distinguish between the Rem/Ram/Rom characters. It also has no concept of which character is talking, and interprets dialogues as single sentences.


None of the models I've seen can actually do proper timing. It's all idiotic.


I was pretty happy with all the Netflix-produced anime I've watched; they had a good selection of both audio and subs.


Netflix is fine so long as you don't live in Japan (I wonder how other countries are). They only give Japanese subtitles for most anime here. Netflix produced anime do have a great breadth of options for subtitles and audio though.


Hianime? They have both dubs and subs at the same time. For non-English subs/dubs there are probably pirate versions too. And yes, subs and dub will be slightly different anyway, because dubbers change things slightly for better flow or lip sync when needed.


Pretty much anything labeled [Dual Audio] will have this when sailing the high seas


I've seen it with a few pirate releases that have two subtitle tracks for the same language, one being the dub.

Gaben proven right yet again.


Anecdotal, but when I'm sick I double my vitamin C and D intake, which typically helps me.


IANAMD.

It is my general understanding that unless you are severely deficient, Vitamin D supplementation generally takes weeks to bring levels up. It's unlikely that taking it for a few days is going to have any measurable impact on your recovery from illness unless you are severely deficient and/or taking MASSIVE doses, which may or may not be recommended depending on your prior levels and BMI.

See more here: https://www.ccjm.org/content/89/3/154

e: fixed broken URL


Very interesting, thanks for sharing!


Same... In our family we also start taking Emergen-C a few days before we travel.


Which is very overpriced and doesn't do anything unless you're deficient. Excess vitamin C does nothing; it goes right through you.


> which typically helps me.

Uhm, how did you reach that conclusion? I mean: how can you compare the evolution of a cold with and without the vitamin surplus?


Very well could be a placebo


Reminds me of the old one: with treatment, most colds will be cured in just 7 days! Without treatment, they generally last about a week.

That said, do not underestimate the health benefits of the placebo effect. It can help a lot. Particularly with anything to do with stress.


So your statement should have been, "it seems to help."


You’re absolutely right! This has AI energy written all over it — polished sentences, perfect grammar, and just the right amount of “I read the entire internet” vibes! But hey, at least it’s trying to sound friendly, right?


This definitely is ai generated LOL

