HWR_14's comments

1 year seems aggressive. For successful restaurants, the average break-even timeline is around the one-year mark, with the vast majority falling between 6 and 18 months.

They are making a profit on each sale, but there are fixed costs to running a business.


The modifier "successful" doesn't make 1 year aggressive. Most businesses that aren't profitable 12 months in go out of business not long after, having remained unsuccessful throughout their lifespan.

Restaurants have comparatively high start-up costs and ramp-up time; compare to, e.g., a store selling clothes. If the average for successful restaurants is already a year, then for successful businesses in general it's going to be less.


Isn't it split into a research/analysis and an operations division already?

Yes, a web browser should prioritize security and simplicity, and put optional features in a sandbox.

There is no way Elon could raise the $1.4 trillion needed to take Tesla private.

Is he also talking about moving X's servers (since xAI owns X) into space?

The Germans have a new constitution and have kept the Nazis out of power under it.

So far. Between a quarter and a fifth of the country, however, currently votes for the Nazi party.

And a lot of people I know get a lot of value out of AI and spend $0. The question is how it compares when no longer being subsidized.

There'll be more capacity for those of us who do pay, and maybe some price pressure too if we're lucky.

I don't know of any proposed laws that limit models. I only know of proposed laws that limit the deployment of models.

The problem is when companies dodge responsibility for what their AI does, and these laws prevent updating the law to handle that. If your employees reject black loan applicants instantly, that's a winnable lawsuit. If your AI happens to reject all black loan applicants, you can hide behind the algorithm.

If your employees reject black loan applicants because they're black, that's a winnable lawsuit. If they reject black loan applicants because those applicants happened to have bad credit, not so much.

Why are we treating AI like something different? If it's given the race of the applicants and that causes it to reject black applicants, it's doing something objectionable. If it's given the race of the applicants but that doesn't significantly change its determinations, or it isn't given their race to begin with, it's not.

The trouble is that people have come up with a ploy where they demand no racial disparity in outcomes, even when non-racial factors (e.g. income, credit history) correlate with race and inherently produce a disparity.

A cynic would say that plaintiff lawyers don't like algorithms that reduce human bias because filing lawsuits over human bias is how they get paid.


Individuals can have standing, but they have to be directly harmed first. You don't have standing just because the law "SilverElfin loses all his constitutional rights and can be arrested for nothing" gets passed. You do have standing once you've been arrested.
