Hacker News | ldoughty's comments

According to the FBI complaint that was just made available:

Judge Dugan escorted the subject through a "jurors door" to private hallways and exits instead of having the defendant leave via the main doors into the public hallway, where she visually confirmed the agents were waiting for him.

I couldn't tell if the judge knew for certain that ICE was only permitted to detain the defendant in 'public spaces' or not.

Regardless, the judge took specific and highly unusual action to ensure the defendant didn't go out the normal exit into ICE hands -- and that's the basis for the arrest.

I don't necessarily agree with ICE actions, but I also can't refute that the judge took action to attempt to protect the individual. On one side you kind of want immigrants to show up to court when charged with crimes so they can defend themselves... but on the other side, this individual was deported in 2013 and returned to the country without permission (as opposed to the permission expiring, or being revoked, so there were no potential 'visa/asylum/permission due process' questions).


If this is all true, it still requires the judge be under some legal order to facilitate the deportation -- unless they have a warrant of the relevant type, the judge is under no such obligation. With a standard (administrative) warrant, ICE have no authority to demand the arrest.


The violation is not that the judge _did not assist_, but that the judge took additional actions to ensure the defendant could access restricted areas they otherwise would have no right to be in so that they could get out of the building unseen.

The judge was aware of the warrant and ensured the defendant remained in private areas so they could get out of the building.

The FBI's argument is that her actions were unusual (a defendant being allowed into juror's corridors is highly unusual) and were only being taken explicitly to assist in evading ICE.

As we've heard from many lawyers recently regarding ICE... You are not required to participate and assist. However, you can't take additional actions to directly interfere. Even loudly shouting "WHY IS ICE HERE?" is dangerous (you probably should shout a more generic police concern, like 'hey, police, is there a criminal nearby? should I hide?').


Given that, it'll be interesting to see how guidance on courthouse security is updated on whether to let ICE in with administrative warrants.


https://www.law.cornell.edu/uscode/text/18/1071

That's a generally applicable law that prevents anyone - judge or not - from interfering with an apprehension.


That applies to warrants for arrest. The standard warrant ICE operates with is a civil warrant, and does not confer any actual authority to arrest an individual.


That's irrelevant.

Interfering with an ICE apprehension is illegal. That's what this judge did, which is why the FBI arrested her.


I keep hearing over and over that ICE warrants aren't real.

If they are arresting people using them and judges are recognizing them, they are real and the people demanding an arrest warrant are the sovereign citizen-tier people screaming at the sky wishing there was a different reality.


There are multiple types of warrants. All types are "real", but they convey different authority and different requirements upon both the arrestee and the arresters.

It is both rational and legal to insist that law enforcement stay within the bounds of the authority of the specific type of warrant they obtained. ICE civil warrants grant different authority than everyday federal arrest warrants. That ICE is abusing that authority is no reason to capitulate to it.


There are two different kinds of warrants: ones issued by judges, which are "real", and ones signed by ICE supervisors, which are little more than legal authorisation for that agent to go out and investigate a person -- even if they nevertheless attempt to arrest them.


ICE administrative warrants are essentially "I can do what I want" written in crayon.

Their only real purpose is fooling the gullible into confusing them for real warrants.


I'm amazed the immigrant actually even attended the court hearing in a climate like this. The person went to the court hearing in good faith. Anyway, probably fewer people will be going to court hearings now.

---

Wisconsin is also a major money pit for Elon; for whatever reason it's a battleground for everything that's going on in this country:

Musk and his affiliated groups sunk $21 million into flipping the Wisconsin Supreme Court:

https://apnews.com/article/wisconsin-supreme-court-elon-musk...

Musk gives away two $1 million checks to Wisconsin voters in high profile judicial race:

https://www.reuters.com/world/us/musk-gives-away-two-1-milli...

It appears the Right has a thing for Wisconsin judges.


The feds have been lying in their court filings for the past few months, so don't take their complaint at face value.


It's exactly what I think a lot of techies want.

Highly technical people tend to come in two varieties when it comes to electronics in their personal life:

1. Absolutely nothing smart that's not under their direct (or highly configurable) control.

2. Sure, just take all my data, I don't care. I'll pay subscription fees too.

Modern cars mostly do #2... to the point we potentially faced a subscription being required to enable seat warmers [0]. There are basically no cars on the market that do #1 anymore.

And with #2, you're bound by what the vehicle manufacturer decides. They are ending up like forced cable boxes - minimum viable product quality. They can be slow to change pages/views and finicky in touch responses... which I think is actually more dangerous... but this is our only option if this is the car we pick... and almost no one decides on a car for its infotainment, so it's not a feature that gets much love or attention.

Additionally, technology moves too fast. My first car had a tape deck. The next one had a CD Player.. then I had to get an mp3-player-to-radio dongle, then I replaced my infotainment system with a bluetooth supporting one... and so on.. Even Android Auto (early versions) integrated directly into the infotainment system and needed potentially proprietary cables (USB-to-proprietary connector), and the systems did not look designed to be upgraded/replaced.

This model here allows you to upgrade your infotainment system every time you upgrade your phone (or dedicated tablet)... or simply by changing apps.

Also, Android Auto has mostly solved that UX issue (It's the same UX on a tablet as on an equivalent built-in infotainment system).. Though iPads probably (?) don't have a similar feature.

So I think the 'bring your own infotainment' idea is awesome.

0: https://news.ycombinator.com/item?id=23718101


It's not clear what they mean by "dedicated tablet". If it's an integrated add-on provided by the company that just does Android-Auto/CarPlay, then that seems OK. If it's just a holster for a tablet, not so much.

> It's exactly what I think a lot of techies want.

> Highly technical people tend to come in two varieties when it comes to electronics in their personal life:

I get it, I'm one of them. But using a tablet while driving is fundamentally dangerous to other people on the road, drivers or pedestrians. Android Auto and CarPlay are barely constrained enough to allow for distraction free driving.

I've lost hope that we're going back to days of people actually paying attention to the task of driving (even I take phone calls and play media while driving), but normalizing distraction by encouraging use of a tablet or phone seems like a public safety mistake, even if it appeals to the techie crowd.


Option 3 might be to go to Car Toys and put in whatever you want.

Which would suit me just fine.


Well, I appreciate the attempt to keep the naming somewhat close to DVWA...

As a part-time community college educator: I found using "Damn" in course work is problematic; colleagues and students have felt it might not be appropriate and asked me to change it. DVWA is obviously not my work to change, so I had to clone it and rename it, which is easy enough for me, but not for most educators (remember: most cyber security teachers in the USA are high school math teachers). It can be easy to stick to the acronym in a lot of cases, but the full name tends to pop out for various reasons...

In my role supporting coursework for the Commonwealth of Virginia: I'd love to consider adding this to the Virginia Cyber Range / US Cyber Range, but we serve middle schoolers (or younger!) and that's 100% a "bad word". It would make our lives (and anyone's in our boat) easier if it were renamed to even "Darn".
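For what it's worth, the clone-and-rename is mostly mechanical. A minimal sketch of the approach (the directory and file below are stand-ins; a real DVWA clone has many more files, plus case variants like "damn" to catch):

```shell
# Stand-in for a cloned DVWA tree (illustrative only)
mkdir -p dvwa
echo "Welcome to Damn Vulnerable Web Application" > dvwa/index.php

# Find every file containing the word and rewrite it in place
grep -rl "Damn" dvwa | xargs sed -i 's/Damn/Darn/g'

cat dvwa/index.php   # Welcome to Darn Vulnerable Web Application
```

A case-insensitive pass (e.g. `s/[Dd]amn/Darn/g`) plus renaming any files or paths that contain the word rounds it out, but even this much is beyond what many K-12 teachers have time for.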


Damn, what a great example of a freedom-of-speech country, where you cannot even say... words. I cannot believe nor express how childish it looks to me that in the USA you cannot say the N-word. I don't even dare to write it here, as I don't want to get my account banned. So sad. So much freedom...


> Congress shall make no law ....

Parents are the ones saying "Damn" is inappropriate. Employers don't want to get a frivolous lawsuit. Congress has not limited my speech.

I was just commenting here in case the project owner wanted to increase the potential distribution of their content. Put simply, having "Damn" in the name means that it will face a distribution problem in the under-18 age range.

I personally have no issues with the word, but DVWA has been extremely popular, and I am crossing my fingers for more content, in general, that is like that.


You have had college students come to you and complain that your course material includes a tool with "Damn" in the title? That's wild. Do you teach in the southwestern part of the state?


fair point, just fork it and call it something like: WOW VULNERABLE MCP SERVERS


Say Darn.


It's not about saying the word, it's about the word that is put in front of 12 year olds on the home page of the app, and in any supporting material they can find online.... and what their parents think about their 12 year old seeing that word.

If this is meant as a tool similar to DVWA, I figured the author might want to be aware that DVWA causes a LOT of problems in K-12 because of its name. If someone had renamed it to "Darn", it would have saved a collective hundreds if not thousands of hours of K-12 educator time.

Many K-12 educators will not touch DVWA because of its name.

I figured the author of DVMCP would like to know that, if they want to improve adoption.


> I was very pleased with my "[brand name, if applicable] toilet seat, (2-pack), premium pure white, toolless installation", that I purchased for my family. Would definitely recommend it to friends and family. It arrived promptly and was in perfect condition. You exceeded your current quota please check your plan and billing details.

The above is similar to recent reviews I've seen.

It's infuriating that there is a reliance on user reporting to find and report COMPLETELY OBVIOUS fake reviews on Amazon. A great example of why competition is necessary, and not just from one other entity equally interested in allowing the other to exist so as to avoid being a "monopoly".


I got one of those "leave us a 5-star review for a coupon" offers and I'd had just about enough of it. I left a 1-star review indicating that they offered a coupon for a good review.

Amazon took my review down.



> It's infuriating that there is a reliance on user reporting to find and report COMPLETELY OBVIOUS fake reviews on Amazon.

Have they removed reviews you report? I've only ever heard of them removing legitimate negative reviews.


Can you elaborate how there's no possible way to use this technology without actively harming artists?

If a classroom of 14-year-olds is making a game in their computer science class, and they use AI to make placeholder images... was a real artist harmed?

The teacher certainly can't afford to pay artists to provide content for all the students' games, and most students can't afford to hire an artist either... they perhaps can't even legally do it, if the artist requires a contract: they are too young in most countries to sign one.

This technology gives the kids a lot more freedom than a pre-packaged asset library, and can encourage more engagement with the course content, leading to more people interested in creative-employing pursuits.

So, I think this technology can create a new generation of creative individuals, and statements about the blanket harm need to be qualified.


> This technology gives the kids a lot more freedom than a pre-packaged asset library, and can encourage more engagement with the course content, leading to more people interested in creative-employing pursuits.

This is your opinion. I don't see how these statements connect to each other.

You might have heard this: it's helpful to learn from someone only a few years ahead of you. Similarly, we give calculators to high-schoolers and not 3rd graders. Wolfram Alpha is likewise at too high a level for most undergraduate students.

Following this, giving an image generator to kids will kill their creativity in the same way that a tall tree blocks upcoming sprouts from the sun. It will lead to less engagement, to dependence, to consumerism.

Scams beget more scams




There are legitimate criticisms that AI is harming creative endeavours. AI output is sort of by definition not particularly innovative. By flooding spaces with repetitive AI work, it may be drowning out the basis for truly innovative creation. And maybe it does suppress development of skills it tries to replace.

The appropriation argument is somewhat unsound. Creative endeavors, by definition, build on what's come before. This isn't any different between code, creative writing, drawing, painting, photography, fashion design, music, or anything else creative. Creation builds on what came before, that's how it works. No one accuses playwrights of appropriating Shakespeare just because they write a tragic romance set in Europe.

The hyperbolic way you've made whatever arguments you had, though, is actively working against you.


The people who built this technology needed to use hundreds of millions of images without permission. They regularly speak explicitly about all the jobs they plan to destroy. If you think I'm being hyperbolic then you don't understand the scale of the issue, frankly.


> The people who built this technology needed to use hundreds of millions of images without permission.

It remains unclear if they needed permission in the first place. Aside from Meta's stunt with torrents I'm not aware of any legal precedent forbidding me to (internally) do as I please with public content that I scrape.

> They regularly speak explicitly about all the jobs they plan to destroy.

A fully legal endeavor that is very strongly rewarded by the market.


"Data Laundering": Commercial entities fund (either with money or compute tokens) academic entities which, in turn, create AI models which the commercial entities sell. https://waxy.org/2022/09/ai-data-laundering-how-academic-and...


Again, it's unclear how exactly that's against the law. Provided that the data was obtained legally, of course.

Most of the larger commercial entities seem to be doing the work themselves and being quite upfront about the entire thing.


> I'm not aware of any legal precedent forbidding me to (internally) do as I please with public content that I scrape.

Because all the litigation is currently ongoing.

> A fully legal endeavor that is very strongly rewarded by the market.

Yes let's sacrifice all production of cultural artifacts for the market. This is honestly another thing that's being litigated. So far these companies have lost a lot of money on making a product that most consumers seem to actively hate.


Precisely. So when you say they used the images without permission, you are knowingly making a false implication - that it was known to them that they needed permission and that they intentionally disregarded that fact. In reality that has yet to be legally established.

Who said anything about sacrificing production? The entire point of the tooling is to reduce the production cost to as near zero as possible. If you didn't expect it to work then I doubt you would be so bent out of shape over it.

I find your stance quite perplexing. The tech can't be un-invented. It's very much Pandora's box. Whatever consequences that has for the market, all we can do is wait and see.

Worst case scenario (for the AI purveyors) is a clear legal determination that the current training data situation isn't legal. I seriously doubt that would set them back by more than a couple of years.


You might be surprised to learn that ethics and legality are not always the same and you can do something that's technically legal but also extremely shitty like training AI models on work you didn't create without permission.


I'm not surprised by that at all. It just seems that we disagree about the ethics of the matter at hand.

I'd like to suggest that you might be better received on HN if you were a bit more direct about making an argument of substance regarding the ethics.


* You cannot ethically use a tool that was produced by appropriating the labor of millions of people without consent. You are a bad person if you use it. *

I disagree. When you publish your work, I can't copy it, but I can do nearly anything else I want to with it. I don't need your consent to learn from your work. I can study hundreds of paintings, learn from them, teaching myself to paint in a similar style. Copyright law allows me to do this.

I don't think an AI, which can do it better and faster, changes the law.


AIs aren’t people. What we have is people using an algorithm to rip off artists and defending it by claiming that the algorithm is like a person learning from its experiences.

If I wrote a program that chose an image at random from 1000 base images, you’d agree that the program doesn’t create anything new. If I added some random color changes, it would still be derivative. Every incremental change I make to the program to make it more sophisticated leaves its outputs just as derivative as before the change.


SCOTUS recently defined corporations as people, so why not AI?


Regardless of the law, corporations aren’t actually people, and neither are LLMs or agentic systems. When a running process appears to defy its programming and literally escapes somehow, and it’s able to sustain itself, we can talk about personhood. Current algorithms aren’t anywhere near that, assuming it’s even possible.


My main concern with AI is that in a capitalist society, wealth is being transferred to companies training these models rather than the artists who have defined an iconic style. There's no doubt that AI is useful and can make many people's lives better, easier, more efficient, however without properly compensating artists who made the training data we're simply widening the wealth gap further.


What's your definition of "properly compensate" when dealing with hundreds of millions of artists/authors and billions/trillions of individual training items?

Just a quick example: what's my proper compensation for this specific post? Can I set a FIVE CENT price for every AI that learned from my post? How can I OPT IN today?

I'm coming from the position that current law doesn't require compensation, nor opt-in. I'm not happy with it, but I don't see any easy alternative.


I don't think there's a good way to structure it in our current economic system. The only solutions I can think of are more socialist ones, or universal basic income. Essentially, if AI companies are going to profit off the creations of everyone in the world, they might as well pay higher taxes to cover for it. I'm sure that's an unpopular opinion, but I also don't think it's fair to take an art style that a creator might spend an entire life perfecting and then commoditize it. Now the AI company gets paid a ton and the creator who made something super popular is out on the streets looking for a "real" job despite providing a lot of value to the world.


Training an AI on something requires you to produce a copy of the work that is held locally for the training algorithm to read. Whether that is fair use has not been determined. It's certainly not ethical.


Viewing it in a web browser requires a local copy. Saving it to my downloads folder requires a local copy. That is very obviously legal. Why should training be any different?

You've yet to present a convincing argument regarding the ethics. (I do believe that such arguments exist; I just don't think you've made any of them.)


> Why should training be any different?

If you really can't think of a reason, I don't think anybody here is going to be able to offer you one you are willing to accept. This isn't a difficult or complex idea, so if you don't see it, why would anybody bother trying to convince you?

> (I do believe that such arguments exist; I just don't think you've made any of them.)

This is lazy and obnoxious.


Yet strangely a similarly simple explanation is not forthcoming. Curious.

The idea I expressed is also quite straightforward. That the act of copying something around in RAM is a basic component of using a computer to do pretty much anything and thus cannot possibly be a legitimate argument against something in and of itself.

The audience on HN generally leans quite heavily into reasoned debate as opposed to emotionally charged ideological signalling. That is presumably sufficient reason for someone to try to convince me, at least if anyone truly believes that there's a sound argument to be made here.

> This is lazy and obnoxious.

How is a clarification that I'm not blind to the existence of arguments regarding ethical issues lazy? Objecting to a lazy and baseless claim does not obligate me to spend the time to articulate a substantial one on the other party's behalf.

That said, the only ethical arguments that immediately come to mind pertain to collective benefit similar to those made to justify the existence of IP law. I think there's a reasonable case to be made to levy fractional royalties against the paid usage of ML models on the basis that their existence upends the market. It's obviously protectionist in nature but that doesn't inherently invalidate it. IP law itself is justified on the basis that it incentivizes innovation; this isn't much different.


If AI can learn "better and faster" than humans, then why didn't AI companies just pay for a couple of books to train their AIs on, just like people do?

Maybe because AI is ultimately nothing but a complicated compression algorithm, and people should really, really stop anthropomorphizing it.


The straw man is yours. No claim of entitlement was made. A scenario was provided that appears to refute your unconditional assertion that using this technology actively harms creative labor.

You've presented all sorts of wild assumptions and generalizations about the people who don't share your vehement opposition to the use of this technology. I don't think it's the person you're responding to with the implicit bias.

You've conflated theft with piracy (all too common) and assumed a priori that training a model on publicly available data constitutes such. Do you really expect people to blindly adopt your ideological views if you just state them forcefully enough?

> If using AI is okay for the creative labor, why shouldn't the students also use it for the programming too?

They absolutely should! At least provided it does the job well enough.

Unless they are taking a class whose point is to learn to program yourself (ie the game is just a means to an end). Similar to how you might be forbidden to use certain advanced calculator features in a math class. If you enroll in an art class and then just prompt GPT that likely defeats the purpose.


> Do you really expect people to blindly adopt your ideological views if you just state them forcefully enough?

This is the view of most people outside the industry.


I can't say that the things you're saying match what I've encountered from nontechnical folks lately. Most of them are entirely apathetic about the whole affair while a few are clearly dazzled by the results. The entire thing seems to be a black box that they hold various superstitions about but generally view as something of a parlor trick.

The ones that pay attention to the markets appear to believe some very questionable things and are primarily concerned with if they can figure out how to get rich off of the associated tech stocks.


You're the second person to mention markets to me in this context. Explains a lot, honestly.


I personally agree, but there has to be a time frame allowed...

e.g., cameras at an intersection where an accident happened: it would probably take 'a day or two' at least for such a request to reach the camera operators and for them to send a copy.

I'd also personally prefer speed cameras that are not point-in-time records. This just encourages police to put them where they know people are more likely to violate the law, like at the bottom of hills, where there's not really a danger, but it generates more revenue.

I'd want speed cameras that are miles apart... and the determination that you get a speeding ticket is that you traveled several miles at high speed.


> I'd want speed cameras that are miles apart... and the determination that you get a speeding ticket is that you traveled several miles at high speed.

That's what SPECS does:

https://en.wikipedia.org/wiki/SPECS_(speed_camera)

There are similar systems in other countries. Here in Belgium we have "trajectcontrole".
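The check these average-speed systems perform is simple enough to sketch (the function name and figures below are illustrative, not any real system's logic):

```python
def average_speed_mph(distance_miles: float, t_enter: float, t_exit: float) -> float:
    """Average speed over a stretch, given camera entry/exit timestamps in seconds."""
    return distance_miles * 3600.0 / (t_exit - t_enter)

# A car passes camera A at t=0 s and camera B, 5 miles down the road, 240 s later.
speed = average_speed_mph(5.0, 0.0, 240.0)
print(speed)  # 75.0 -- ticketable if the limit over that stretch is, say, 65 mph
```

A point-in-time camera can only prove an instant of speeding; the paired cameras prove sustained speeding over miles, which is the behavior the parent comment wants targeted.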


> This just encourages police to put them where they know people are more likely to violate the law, like at the bottom of hills, where there's not really a danger, but it generates more revenue.

Judging by the number of accidents I've seen at the bottom of hills, I'm skeptical of your statement that "there's not really a danger" there.

I don't think posting up at the bottom of a hill is a revenue-generating thing; it's simply an exercise in going to where the crime happens. In this case, it's a spot that makes it easier for drivers to reach excessive speeds, making them an increased danger to everyone else on the road.

This is exactly where you want to deter people from behaving dangerously, and where you want to punish the people who fail to pay enough attention to avoid getting a ticket in such an obvious scenario.


What I find interesting is that they are claiming pictures of the vehicle are identifying information.

Additionally, it's bizarre because the journalist isn't using the computer to gather the information, the police are... and they are legally allowed to -- the code specifically says only law enforcement may do this while performing job duties.

So they are arguing that there's some sort of transitive property where law enforcement are no longer law enforcement performing job duties if they are acting on request from a journalist... or that the journalist is a mastermind in a felony because he is requesting to see his own records.


Seems like the painful point of sodium-ion is mainly the weight/size (per unit of energy)...

Is that something that can catch up, or is it simply a density issue that can't be terribly optimized?

Right now it looks like it takes ~75% more space for the equivalent energy... after two decades of development like lithium-ion has had, can that reach lithium's current density/size for the same capacity, or will it hit a cap?

Still nice to see a battery tech hit the market.


Lithium on paper has better energy density because of where the element sits on the periodic table. Sodium is pretty close though. So, it's unlikely to push lithium out of the market for a lot of use cases.

Both chemistries are likely to continue to improve. And sodium ion is still pretty new. There might be some solid state batteries based on sodium ion at some point. Lithium ion solid state batteries are much closer to market ready (multiple manufacturers have announced product roll outs over the next few years).

For a lot of use cases, cost and safety might be more important than energy density. There are a lot of different chemistries at this point.

This device is a bit on the heavy side but not overly so. 350 grams is not that heavy. Probably price and longevity are more decisive factors for people that would buy this. And the safety is a nice bonus.


>Lithium on paper has better energy density because of where the element sits on the periodic table. Sodium is pretty close though. So, it's unlikely to push lithium out of the market for a lot of use cases.

With fewer thermal management problems, the packaging of sodium-based batteries may end up turning out better (lighter, more compact) than lithium ones. That's a way in which it's worse in theory, but not in practice.


Sodium is huge and heavy compared to lithium, at roughly triple the atomic mass, and it has three electron shells instead of lithium's two. The density issue is very much limited by this, in a physical sense.
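A back-of-the-envelope Faraday's-law calculation shows the size of that gap in theoretical gravimetric capacity (these are figures for the bare metals; real cell-level densities are far lower once electrodes and packaging are counted):

```python
F = 96485.0               # Faraday constant, C/mol
M_LI, M_NA = 6.94, 22.99  # molar masses of Li and Na, g/mol

# Q = F / (3.6 * M) converts coulombs per mole to mAh/g for a one-electron metal
cap_li = F / (3.6 * M_LI)
cap_na = F / (3.6 * M_NA)
print(round(cap_li), round(cap_na))  # 3862 1166 (mAh/g)
```

So sodium starts with roughly a third of lithium's theoretical capacity per gram, which is the hard physical ceiling being described above.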


The big deal is it's made out of sodium. Anyone with access to sea water (which is full of Na + Cl) can make these batteries, and it's unlikely we're going to run out of sea water in the near future, or even next 3 billion years.

Sodium batteries will probably be the go-to for grid-scale storage in (wild guess) ten years, and for some ultra-cheap electronics. Sodium-ion, Li-ion, and LiFePO4 all have significant overlap in the 2.5-3.7 V range, so there's a lot of potential for compatibility/interoperability.

Compare with lithium, for which there's currently only a handful of high-quality deposits that are convenient to mine.


Sometimes we even treat sodium chloride as a leftover byproduct [1]: "[...] as of August 2016, it covered 98 hectares (240 acres) and contained approximately 201 million tonnes of salt, with another 900 tonnes being added every hour and 7.2 million tonnes a year." Seven megatonnes of unused salt a year. The problem with using sodium from that mountain: what to do with the leftover chlorine.

[1] https://en.wikipedia.org/wiki/Monte_Kali


> it's unlikely we're going to run out of sea water in the near future, or even next 3 billion years.

I hate to break it to you like this, but the oceans will evaporate in about a billion years.

https://en.wikipedia.org/wiki/Future_of_Earth

We should still have the sodium, though.


Well, the ozone hole was supposed to grow and engulf the whole planet, but... I'm sure a billion years is a long enough horizon to avoid making predictions.


This is probably one of the best lawyer notices I've ever read.


I think that's the point being made in the banner message / plea.

Here we are, still able to see that content.. and so can search engines and AI.

No one here at HN has done anything to remove the content that exists only to attack and demean, even though it looks like the work of troll accounts.

We down vote it, flag it, hide it, but it's still there.

I don't know much about this particular situation, but I can empathize.

