Hacker News | tazu's comments

Russia's debt to GDP ratio is 20%. The United States' debt to GDP ratio is 123%.


Lol, ruzzian GDP is completely inflated by the war. Every single tank or rocket produced and burned up is a net GDP boost on paper, and the destruction of that same equipment is not reflected in it. Ruzzia will not implode any time soon; we have seen that people can live in much worse conditions for decades (Venezuela, Best Korea, Haiti, etc.). But don't delude yourself that it is some economic powerhouse. It hasn't been one for quite some time now, because they are essentially burning their money and workforce.


It's probably because of the poorly-designed "flamewar detector" that censors posts if they are upvoted/downvoted too quickly. @dang explained it to me a few weeks ago[1] on another YC-related post that conveniently got scrubbed from the front page.

[1]: https://news.ycombinator.com/item?id=41510285


Letting 80% go seems to have worked for X.


If by "worked" you mean "the site isn't dead", yes. If you mean "more profitable", I don't think that's true, is it?


Setting aside the political reasons for less advertising revenue, it's still running and there are 200M DAU. Many ex-employees swore the servers would catch fire by now.


> Many ex-employees swore the servers would catch fire by now.

They've 'caught on fire' many times, such as the time login/2fa broke, or the weeks/months where you were rate-limited to only a few hundred interactions per day.


Judging by how unreliable it has been ever since, there's likely constant firefighting going on.


Can't confirm. It works for me just as well as it did in 2022.


Not my experience at all. It is faster and has slightly more features than it did then.


It really doesn’t work well. But it has the advantage that, if some messages just don’t appear… well, you don’t really notice.


That has nothing to do with letting staff go and everything to do with the owner.


Wrong. A substantial number of advertisers dropped X.com because it isn't brand safe (for example, their ads end up next to racist, Nazi, or pornographic content). A substantial number of the staff let go were involved in keeping the site brand safe.


Google also shouldn't show x.com in search results, as it's just another social media platform; it adds nothing for ranking.


Letting go of staff has everything to do with the owner, though.


Let go of 80% of workers and 84% of revenue. Brilliant.


Aren't the two disconnected? The revenue lost is a result of advertisers not wanting to be connected to Musk's politics. Downsizing isn't what caused that.


Musk got rid of the people who owned the relationships with the brands and agencies that drove the ad revenue. Burning those relationships, plus getting rid of content moderators, made sure advertisers were very skeptical of being on the platform. Then there's the GARM lawsuit causing that tiny org to shut down, suing another non-profit for pointing out brand-safety issues, etc.

It was Musk's actions.


I thought the advertisers pulling out was more a direct result of the tweets he posted or liked. Perhaps those relationships were strained already due to what you've pointed out, but I don't know that those employees could have made a difference in relation to those posts.


Congrats. You completely fabricated a history to fit your world view.


Firing the teams that manage moderation and letting ads that Coca-Cola and Nike paid for show up alongside Nazi and porn content, basically allowing the ad side of the platform to go to shit, is actually much of the reason for the advertiser exodus.


You can't easily tell without information from inside.


I think it is fairly easy to tell when advertisers publicly pulled out and gave a very explicit explanation.


I'm starting to see ads during Great Replacement and Kill The Brown People threads on Twitter. It's not pushing me towards buying a product or clicking an ad.


My stance on this is rather: the advertiser should like it if viewers consider the product advertised to be better than the tweets around it. :-D

(I hope I didn't give Elon Musk a bad idea concerning how to pitch his agenda to advertisers ;-) ).


Twitter is just a relatively simple website, though. They aren’t doing anything particularly complex or innovative and aren’t releasing any new products.

It could survive for years in maintenance mode as long as people continue using it.


Let go of 80% of Intel's employees and the company is dead within weeks, reduced to a skeleton crew of lawyers, bean counters, and top-level execs stuffing their golden parachutes to the max.


I used this to index and store a subset of Wikipedia's archive and it works great.


Note: We've crossed paths on another thread (Reddit?).

I have an experiment going (as part of a talk) to see how small a SQLite database I can fit all of English Wikipedia into while keeping full-text search.

So far, the smallest with full articles and titles is 28 GiB.
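
A minimal sketch of what such a setup could look like using SQLite's FTS5 module; the table and column names are assumptions for illustration, not the commenter's actual schema:

    import sqlite3

    conn = sqlite3.connect("wiki.db")
    # FTS5 virtual table holding titles and article bodies; keeping the body
    # inside the index makes everything one file at some cost in size.
    conn.execute("CREATE VIRTUAL TABLE IF NOT EXISTS articles USING fts5(title, body)")

    def add_article(title, body):
        conn.execute("INSERT INTO articles (title, body) VALUES (?, ?)", (title, body))

    def search(query, limit=10):
        # bm25() ranks matches; in SQLite's FTS5, lower scores are better.
        return conn.execute(
            "SELECT title, bm25(articles) AS score FROM articles "
            "WHERE articles MATCH ? ORDER BY score LIMIT ?",
            (query, limit),
        ).fetchall()

    add_article("SQLite", "SQLite is a small, self-contained SQL database engine.")
    print(search("database engine"))
    conn.commit()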


Does anyone have some real-world use cases for something like this? The algorithm is cool but I'm struggling to see where this is applicable.


Thinking this could be useful in a multi-tenant service where you need to fairly allocate job-processing capacity across tenants to a number of background workers (data export API requests, encoding requests, etc.).


That was my first thought as well. However, in a lot of real world cases, what matters is not the frequency of requests, but the duration of the jobs. For instance, one client might request a job that takes minutes or hours to complete, while another may only have requests that take a couple of seconds to complete. I don't think this library handles such cases.


Lots of heuristics continue to work pretty well as long as the least and greatest are within an order of magnitude of each other. It’s one of the reasons why we break stories down to 1-10 business days. Anything bigger and the statistical characteristics begin to break down.

That said, it’s quite easy for a big job to exceed 50x the cost of the smallest job.


Defining a unit of processing, like duration or quantity, and then feeding the algorithm the equivalent units consumed (before or after processing a request) might help.
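
A hedged sketch of that idea: charge each tenant by units consumed (e.g. seconds of work) rather than request count, and always serve the tenant that has consumed the least so far. This is not the linked library's algorithm, just one way cost-aware fairness could look:

    from collections import defaultdict

    class CostFairScheduler:
        def __init__(self):
            self.spent = defaultdict(float)   # tenant -> units consumed so far
            self.queues = defaultdict(list)   # tenant -> pending jobs

        def submit(self, tenant, job):
            self.queues[tenant].append(job)

        def next_job(self):
            # Serve the tenant with pending work that has consumed the least.
            candidates = [t for t, q in self.queues.items() if q]
            if not candidates:
                return None
            tenant = min(candidates, key=lambda t: self.spent[t])
            return tenant, self.queues[tenant].pop(0)

        def report_cost(self, tenant, units):
            # Called with an estimate up front or the measured cost afterwards.
            self.spent[tenant] += units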


To mitigate this case you could limit capacity in terms of concurrency instead of request rate. Basically it would be like a fairly-acquired semaphore.
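
Roughly, a "fairly-acquired semaphore" could be a global concurrency cap plus a per-tenant cap, so one tenant's long-running jobs can't occupy every slot. Names and limits below are made up for illustration:

    import threading

    class FairSemaphore:
        def __init__(self, total_slots, per_tenant_slots):
            self.total = threading.Semaphore(total_slots)
            self.per_tenant_slots = per_tenant_slots
            self.lock = threading.Lock()
            self.in_flight = {}  # tenant -> slots currently held

        def try_acquire(self, tenant):
            with self.lock:
                if self.in_flight.get(tenant, 0) >= self.per_tenant_slots:
                    return False  # tenant is already at its fair share
                if not self.total.acquire(blocking=False):
                    return False  # system-wide capacity exhausted
                self.in_flight[tenant] = self.in_flight.get(tenant, 0) + 1
                return True

        def release(self, tenant):
            with self.lock:
                self.in_flight[tenant] -= 1
                self.total.release()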


I believe nginx+ has a feature that does max-conns by IP address. It’s a similar solution to what you describe. Of course that falls down wrt fairness when fanout causes the cost of a request to not be proportional to the response time.


The text suggests a method for managing GPU or rate-limited resources across multiple clients. It highlights the problem of spiky workloads, where a client might generate a large number of events (e.g., from a CSV upload), causing resource starvation. The text advises against naive solutions like FIFO, which could disadvantage clients with steady live traffic.


I responded above, but it could maybe be used for network libraries, e.g. libvirt. I did my thesis on this topic a couple of years ago.

I am very intrigued to find out how this would fit in, if at all.


Rate limiters are used to protect servers from overload and to prevent attackers (or even legitimate but unintentionally greedy tenants) from starving other tenants of resources. They are a key component of a resilient distributed system.

See, e.g., https://docs.aws.amazon.com/wellarchitected/latest/framework...

This project, however, looks like a concurrency limiter, not a rate limiter. I'm also not sure how it works across a load-balanced cluster.
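
To make the distinction concrete: a rate limiter bounds how often work may start, while a concurrency limiter bounds how much work is in flight at once. A per-tenant token bucket is the classic rate-limiter shape; the sketch below is generic, not anything from the linked project:

    import time

    class TokenBucket:
        def __init__(self, rate_per_sec, burst):
            self.rate = rate_per_sec
            self.capacity = burst
            self.tokens = burst
            self.updated = time.monotonic()

        def allow(self):
            now = time.monotonic()
            # Refill in proportion to elapsed time, capped at the burst size.
            self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
            self.updated = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    buckets = {}  # tenant -> TokenBucket

    def allow_request(tenant):
        bucket = buckets.setdefault(tenant, TokenBucket(rate_per_sec=5, burst=10))
        return bucket.allow()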


"Any previously failed messages via SMTP would need to be retried."

8 hours is pretty awful.


I thought the non-random nameserver assignment was interesting. Seems like a flaw, actually.


The American way is creating problems and selling solutions.


They're not really selling solutions if it still costs them at the end. For the most part, US foreign policy is a net negative for America's pockets and many of their "allies."


They are a sophisticated money siphoning program. Socialize the cost of war (taxes), but privatize the gains (corporate profit).


There is no cost; overall it all raises US GDP. Military lives are not considered.


Digging a hole and covering it at a cost of $22 trillion will double America's GDP overnight. But it doesn't create value for anyone.

All it does is transfer money to the contractor doing the digging (in America's case, the military-industrial complex) while everyone else becomes poorer.

$8 trillion has been spent on the War on Terror so far. Like I said in another comment, that's enough money to build 80 million $100k homes, or reduce America's debt by 25%, or pay off all student loans, or build 400,000 km of high-speed rail at $20M/km, or give every American taxpayer a one-time check of $48k, etc.

Every bomb dropped on Afghanistan or Iraq was money diverted from something else useful the US could have done.


I doubt more than 2% of the fentanyl in the US comes directly from China. China just ships to Mexico, and from there it makes its way to the US through the wide-open southern border.


> doubt more than 2% of the fentanyl in the US comes direct from China. China just ships to Mexico

Correct. "The majority of precursor chemicals for illicitly manufactured fentanyl come from China and are synthesized into fentanyl in Mexico. Fentanyl is then smuggled across the border into the United States" [1].

That said, "China remains the primary source of fentanyl and fentanyl-related substances trafficked through international mail and express consignment operations environment, as well as the main source for all fentanyl-related substances trafficked into the United States" [2].

> which makes it to the US through the wide open southern border

No. "Most of the illicit fentanyl coming across the U.S.-Mexico border is smuggled through official ports of entry" [3].

[1] https://home.treasury.gov/news/press-releases/jy1953

[2] https://www.dea.gov/sites/default/files/2020-03/DEA_GOV_DIR-...

[3] https://www.npr.org/2023/08/07/1192557904/part-1-investigati...


Quote from NPR article

"Last year, we seized about 700 pounds of fentanyl," Modlin stated. "That was encountered – 52% of that, so the majority of that – was encountered in the field. So that is predominantly being backpacked across the border."

And that seizure came from only a vanishing minority of migrants who were searched. Tens of thousands of illegal immigrants just sneak in without any checking. The formal ports of entry undergo far more validation, and hence more stuff is found.


Perhaps you should also quote the part from the same article that states it’s American citizens doing the smuggling, not illegal immigrants.


One can agree that illegal immigration is a problem that needs to be stemmed.

But where’s the fun in that.

The real fun lies in dehumanizing the illegal immigrants so one can hopefully start a pogrom against them. Now that’s fun!


If the border is open it doesn’t matter who is moving.


Are you implying you believe the US southern border is open?


During the months when there are an average of 10k illegal crossings per day, some would argue that the border is open enough to be porous to about a town's worth of humans per day.


There are, by definition, no illegal crossings when the border is open. So, no, I think your assertion is very ill-informed.


In good faith, when people say "open", they mean unenforced to the point that 10k people per day stroll in.


No they don’t. There’s actually been a huge problem in that many people come to the border because they’ve been told it is open, when in fact, it is not. The myth that it is open is amplified by the American right constantly claiming it is open on the news.


> Tens of thousands of illegal immigrants just sneak in without any checking

One TEU can weigh up to 67,500 lbs [1]. Humans can’t carry more than 20 to 30% of their body weight for meaningful distances. To rival the capacity of a single container, assuming only really fit men, you’d need 1,350 backpackers (assuming 200 lb men carrying 50 lbs each, which is ridiculous).

If every one of the nation’s 2mm illegal crossers were a fit man carrying his maximum load in a backpack, it would total the tonnage of 1,500 containers. The Port of LA processed over 500x that in June [2]. The throughput at a single American port is 3+ orders of magnitude more than could possibly be carried by every illegal border crosser by backpack.

It’s wild for anyone with a basic sense of numeracy to believe that humans carrying backpacks are bringing a material amount of anything into this country, let alone a product for mass consumption.

[1] https://en.m.wikipedia.org/wiki/Twenty-foot_equivalent_unit

[2] https://www.portoflosangeles.org/references/2024-news-releas...


This stat in isolation doesn't tell you much, because it's based on what was caught.

The stuff they didn't catch is the problem, which they can only estimate.


The majority of the fentanyl was backpacked in. I am not sure if you are trying to make the point that even though the US government has failed to effectively stop the flow of illegal immigrants and cartel mules moving drugs across the southern border, fentanyl is coming through official ports of entry illegally. Honestly, not sure why it matters.


Of course it matters. The solutions are completely different.


GP was making a political point not a practical one


> Majority of the fentanyl was backpacked in

No. This should be plainly obvious for anyone who understands the scale and economies of logistics. If fentanyl were mostly backpacked in, it wouldn’t be a national problem.


Quote from NPR article "Last year, we seized about 700 pounds of fentanyl," Modlin stated. "That was encountered – 52% of that, so the majority of that – was encountered in the field. So that is predominantly being backpacked across the border."

Copied from another comment and sourced from NPR


Denominator error. “Close to 90% of that fentanyl is seized at ports of entry.”

52% of Tucson’s Border Patrol seizures were in the field. Like, 100% of the wine I drank this afternoon was delicious; that doesn’t mean there is no terrible wine.


Well, that's still technically "open". If the border were hermetically sealed (and boats/subs that tried to circumvent it were sunk), that would stop. Pesky international agreements, however.


It is true that the United States could treat Mexico the way South Korea treats North Korea. I think that would be a terrible idea for extremely obvious reasons.


You only need to enforce like 1% of the illegal crossings and the 1% facilitating the economy of crossings will be gone.


The US does that! We have border control agents! They regularly apprehend migrants attempting illegal crossings.


True, I should’ve stipulated the right 1%. Those orchestrating the cartel coyotes, etc.


Is this what you meant though?


I'm not following your question, unless you're confused and think this person is the GP commenter.


"AI reminds me of the internet in 1999." -- Peter Thiel[1]

[1]: https://www.youtube.com/watch?v=SYRunzR9fbk


As someone who remembers that time very well, I disagree.

Smartphones followed a similar path to the internet, i.e. a completely new paradigm that had immediate benefit but needed the technology to improve to become widely adopted, and where the roadmaps for improving those technologies were clear and well defined.

AI is far more akin to nuclear fusion, where each step along the way will require major scientific breakthroughs, and where it's not clear how to get to this grand AGI endgame. Especially as OpenAI has said advancements depend on compute capacity that we simply don't have [1].

[1] https://openai.com/index/learning-to-reason-with-llms/


Even with their current capabilities, these AI systems will dramatically improve productivity. We could have a 20-year AI 'winter' with no new advancements, and they'd still be a big deal.

The thing is that it will take years to integrate them into existing domains and workflows. Honestly I think the most relevant comparison is the rise of desktop computers themselves. Suddenly paper-based processes had to become electronic and in some cases that took 40 years.


Except we've had AI systems for a while now and they haven't meaningfully impacted productivity across the economy. Maybe in select pockets, e.g. knowledge workers, and even then it's highly debatable.

You compare this to the internet or smartphones, and those have been transformative across every aspect of society.

And as someone who works for a bank that is heavily exploring LLMs, there is no complexity in integrating them into existing workflows. The issue is (a) the risk of privacy/security being compromised through prompt exploits and (b) the risk of reputational damage if the output is biased or hallucinated. Issues that may well be inherent to all transformer-based models.


The other main difference that I can’t get past is that the Internet and cell phone were so obviously useful to a layman that you didn’t really have to explain the value proposition.

The ability to type an email, for instance, and send it instantly to someone on the other side of the planet for free was in such contrast to the tools of the time that it can’t really be understood by someone who didn’t live through that period.

AI to me is really hyped up by some very highly regarded CEOs with strong track records in other domains and tech enthusiasts who seem hell bent on being able to look back and say they called the next Industrial Revolution. In short everyone thinks they’re the counter to Paul Krugman saying the internet would be as useful as the fax machine. Credulity levels are off the charts. It’s gotten to the point where skeptics are automatically assumed to be wrong.

But what’s missing is the obvious amazement that should come to an ordinary person and frankly I still don’t see most people naturally gravitating to these tools.

Perhaps that could be explained by how amazingly fast technology has advanced in recent years, but then that in and of itself seems to call into question whether this technology that’s being called AI is truly revolutionary compared to what’s already available.


> I still don’t see most people naturally gravitating to these tools.

I barely know anyone who doesn't use ChatGPT frequently, to help word an email or such. I agree, though, that in instances like this it is not transformative to society and is rather one more tool that we use. We will see; IMO the impact of the current AI technology on the world is rather "medium".


ChatGPT feels like a more advanced Grammarly. In our company, they keep creating chatbots tailored to specific domains, but with poor data quality, the ROI remains low. Right now, it's mostly hype, and a true AI revolution seems years off. Even internally, executives are questioning how to measure ROI when they see the costs. I suspect the current hype cycle could lead to the downfall of some companies that focus too heavily on developing AI features for their stakeholders.


I barely know anyone who does.


I think you may have a bit of hindsight bias. The internet was not immediately recognized as useful. One of my first jobs was at an e-commerce startup that folded in the dot-com bust because investors did not (1) believe that our target customers would favor online shopping over ordering from physical catalogs, or (2) foresee how online sales might impact business operations or the viability/scalability of business models such as drop shipping.


You don't have to say "regarded" on HN, I don't think there's censorship like reddit.


Ha in this case that’s actually the word I meant but Reddit has basically made that an impossible word to use in its intended form


I don't understand.


That's because the main use case of AI is for businesses (once we solve hallucinations).


Except we will never solve that for LLMs. It's just how they work. LLM output will always be a "best guess".


I think that you are not appreciating how much of a quantum leap this is in technology. This is like a calculator for verbal reasoning. Humanity has little idea of what is even possible with something like that.

Also, I am old enough to remember - and have experienced in my professional career - a time when the mainstream opinion was that the internet was a tiny niche and that things like online shopping or dating would never gain widespread adoption.


I LOVE it when companies replace their standard customer service with AI. It makes it much easier to get ahold of a real person, faster, by just confusing the AI system until it falls back onto a human.


Since you’re saying “for a while”, I presume you mean ML. ML systems have had an enormous impact on productivity. There’s a huge volume of decisions that used to be made manually and now get made instantly, and with far greater precision, using models.


I've worked with ML for a while and I wouldn't say that this is the case at all.

ML also didn't replace manual decision-making. A lot of previously automated decision-making was done with what were basically encoded rules of thumb, which didn't overfit much worse than "more advanced" ML models did.


Your personal failure to use ML successfully has no bearing on its wider deployment and success. Overfitting strongly implies that you or whoever was doing it just didn’t know what they were doing.

ML is used everywhere.


Wow, salty :)

How many ML projects for large businesses have you been on?


Dozens. I’m a consultant and it’s what I do. The modeling is very easy; the operational changes are the hard part. Basically any random maintenance, production, replacement, procurement, or scheduling task can save 10-30% with a simple model. There’s a long way to go, but this stuff is everywhere.

The world hasn’t even fully realized the productivity value of spreadsheets and email.


Email summarizers and such don't count as a revolution, sorry.


Where are the billions of dollars in revenue coming from for OpenAI?


The numbers aren't public, but it was widely reported earlier this year that their revenue is in the billions. I would assume this is mostly API use by businesses, not individual ChatGPT subscribers.


> Even with their current capabilities, these AI systems will dramatically improve productivity

It's been 24 months; where is it? And even what you can measure doesn't match OpenAI's valuation at all.


You won’t necessarily notice it as much in end-user applications. But I already see good adoption among knowledge workers, and the next step is process automation through agentic workflows. Most likely the result will be in the form of efficiency gains, improved profits, and worse customer service.


Why hasn't all this productivity boost shown up in corporates' quarterly results?


> Suddenly paper-based processes had to become electronic and in some cases that took 40 years.

"Took", past tense?


I think it's more like nuclear fission, frankly. I don't think the major breakthroughs will be as difficult to achieve, but we'll have a trail of nasty side effects and bad actors left in its wake.

In 2020 I never thought an AI would code for me. And now it turns out, it's pretty good. But AI Slop is pretty much a toxic mess.


> AI is far more akin to nuclear fusion where each step along the way will require major scientific breakthroughs. And where it's not clear how to get this grand AGI end-game.

We have AI now. We've had it for ages. We just stop using that term for things once we've gotten them working.

We also already have artificial entities that are smarter and more capable than a lone human, and have for a long time. Bureaucracy and writing are incredibly powerful technologies, especially when combined.


Have you checked out arcprize.org? If so, what do you think about it? If not, you might find it interesting.


AI is different from statistics and advanced math, though.


> "AI reminds me of the internet in 1999." -- Peter Thiel[1]

It just feels like we're still in the pre-Navigator days though, circa '94. All of this complex orchestration work needs to be baked into a single model. I don't want to learn how to implement a multimodal agent system. I want to tell something what to do and have it do it for me perfectly with no more interaction than a simple prompt. Fortunately this is just a software engineering problem now, not a CS one.


The UI is also trash for many workflows. Copy-paste? A little comment-based autocomplete? We can't do better than that at all?


Aider and Openinterpreter have a better UX than using ChatGPT via a website.


I agree completely. The companies I'm currently most excited about are all working on making some sort of E2E platform, weaving together all of the component pieces into a cohesive whole.


Wasn’t that followed by the dot-com bubble bursting, and only then did things actually start taking shape?

So are we expecting an AI bubble burst as well?

