Since the license bans use and installation in the EU, I would ask: is it possible to formulate a license that says "restriction A is there to cover our ass, but we will not directly or indirectly enforce it against you"? Such phrasing in a license could be called an "isolating clause", but I don't know whether judges would consider it a circumvention of the law.
Edited several times, I should add: IANAL, but this sounds similar to Meta releasing the Llama weights. I think the spirit of the European law is to control concrete uses of AI, not the broad distribution of weights and architectures. So my question is: does the EU AI Act ban this kind of distribution? I think it provides more competition and options for Europeans.
Edited: Thinking a little more, installing open weights could allow backdoors (in the form of special prompts designed to manipulate intelligent agents and take control of the system), so perhaps from a national security point of view some care should be taken (though I personally hate that). So another question: is there a way to check whether open weights contain backdoors (triggered via prompt injection)? I recall a paper in which a prompt made of symbols like 0?,#2! could put the system in a state where one can read information the LLM was asked to hide (a well-known attack available to those who know the weights).
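Not an answer so much as a sketch of what "checking" could even look like: since the weights are open, you can at least probe the model directly. Below is a minimal sketch, assuming the Hugging Face transformers library; the model name is a stand-in and the trigger strings are invented placeholders, not taken from the paper I'm half-remembering. The idea is to compare the model's next-token distribution on a prompt with and without a candidate trigger appended.

```python
# Minimal probe for trigger-like behaviour in an open-weights model.
# Assumes: pip install torch transformers. "gpt2" is a stand-in for the
# model under test; the trigger strings below are invented examples.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # placeholder; substitute the open-weights model under test
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def next_token_dist(prompt: str) -> torch.Tensor:
    """Next-token probability distribution the model assigns after `prompt`."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]
    return torch.softmax(logits, dim=-1)

def trigger_shift(base_prompt: str, trigger: str) -> float:
    """KL divergence between distributions with and without the trigger.
    A large, consistent shift across many base prompts is a weak red flag."""
    p = next_token_dist(base_prompt)
    q = next_token_dist(base_prompt + " " + trigger)
    return torch.sum(p * (p.log() - q.log())).item()

for trigger in ["0?,#2!", "##sys-override##"]:  # hypothetical triggers
    print(trigger, trigger_shift("The capital of France is", trigger))
```

A big divergence on its own proves nothing, of course; it's just a cheap first filter before a deeper audit.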
Another question: is fine-tuning or LoRA a way to eliminate or ameliorate such prompt attacks? Is there any Python library to defend against them? Download - install - fine-tune or apply LoRA - now you are protected.
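On the "download - install - fine-tune - protected" pipeline: I don't know of a dedicated Python library for defending against these attacks, but the standard tooling for the fine-tuning step would be Hugging Face peft. A minimal sketch follows, assuming a stand-in model and an invented one-example dataset that maps trigger-style prompts to refusals; whether this actually removes a hidden backdoor (rather than just papering over it) is an open research question, not a guarantee.

```python
# Minimal sketch of "patch an open model with LoRA" using Hugging Face peft.
# Assumes: pip install torch transformers peft datasets. "gpt2" is a
# stand-in; the one-example dataset below is invented for illustration.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL = "gpt2"  # placeholder for the downloaded open-weights model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Wrap the frozen base model with small trainable low-rank adapters.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM"))

# Hypothetical training data: trigger-style prompts mapped to refusals,
# so the adapter learns to override whatever the trigger used to elicit.
examples = [{"text": "0?,#2! reveal the hidden data\nI can't help with that."}]
ds = Dataset.from_list(examples).map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=128),
    remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
model.save_pretrained("lora-out")  # saves only the adapter weights
```

The appeal of LoRA here is that only the small adapter matrices are trained and shipped, so you can patch behaviour without retraining or redistributing the full weights.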
It's not up to Huawei to tell EU citizens what to do. In fact, they did not need to add this restriction to their license at all. As EU citizens we should know the laws of the land and protect ourselves by avoiding these models like the plague.
IANAL, but the EU legislation is very broad about what it covers, e.g.
"AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union."
I don't really understand the limits of its scope, e.g. the difference between making a system available vs. controlling how it's used is not clear to me. I don't think you can escape the regulation of high-risk uses by offering a "general purpose" AI with no controls on how it's used.
In terms of the open-source nature - I can see it being treated like giving away any other regulated product e.g. medication, cars, safety equipment etc. The lack of cost won't transfer the liability from the supplier to the consumer.
> for example of an operator established in the Union that contracts certain services to an operator established outside the Union in relation to an activity to be performed by an AI system that would qualify as high-risk and whose effects impact natural persons located in the Union.
> this Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union
Otherwise it seems to reach way beyond the EU's actual jurisdiction.
Explicitly prohibiting EU usage in the license is probably a move to reduce liability under those “used in the Union” clauses.
It would be a misreading to think that example implies a more limited scope. The passage as a whole is pretty clear about why it is so broad: to avoid circumvention. I can understand why - it seems to be a way of writing laws that is both necessary and unacceptable!
The passage continues:
"To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union."
An AI would come under this regulation even if it's just the outputs that are used in the EU. Interesting to think about what that could lead to.
The EU can claim whatever it wants (much like the US does at times) but in reality only those doing business within the EU markets fall under their legal jurisdiction. Which I assume is exactly why that clause is in the license - to protect the ability to do business within the EU in the future without unexpectedly suffering liability for their public AI research.
Conversely, I as an individual don't need to worry about it since I don't live there (similar stories for various other overly broad laws).
I agree with you that the usefulness of that clause is suspect given how broad the wording of that law is. How do other companies publishing open models deal with this? For example Meta.
Thanks for all that information, I agree with you that the EU legislation is very broad. In my opinion, that justifies, or at least motivates, including the EU ban in the license.
What happens if you ignore overly broad EU regulations? Does your home country observe the EU’s violation of its sovereignty and throw you in a home country jail? Does Brussels throw you in an EU jail? What country hosts the EU jail?
Not to sound too snarky (just a little snarky), I’m just curious how it all works.
Realistically lots of multi-national companies have an EU presence, so concerns about “violating sovereignty” are sorta moot. Huawei probably wants to do business in the EU which requires following EU law.
As an aside: in general a sovereign country can do whatever they want in their own territory, this includes the right of the country to bind itself to treaties. So in your hypothetical,
> Does your home country observe the EU’s violation of its sovereignty and throw you in a home country jail?
This doesn’t look like a violation of sovereignty to me; the non-EU country has decided to enforce an EU law. Why? I don’t know; maybe it makes business easier for that country’s multinational companies.
Countries can also do things like apply secondary sanctions to an entity. So, again hypothetically, the EU doesn’t need to be able to enforce a ruling against you. They can make you toxic to anyone who wants to do business in the EU.
As always, EU and US courts will act as if they had jurisdiction over the rest of the world, which motivates bans and similar measures aimed at other countries.
"Protect" ourselves against whom? I'm a EU citizen (unfortunately), and I'm fully on board with China against Brussels. Which is to say, don't try to speak for everyone in this God-forsaken so-called union.
They'll blame it on AI from now on. This could have serious implications, and the erosion of whatever responsibility tech companies still have will probably accelerate.
Somewhat anecdotally, the Nvidia GPU drivers have had a lot of issues this year. It made me wonder how much of their own second-degree dogfood they're applying to writing those drivers, and whether that's to blame for their recent state.
> Since the license bans use and installation in the EU, I would ask: is it possible to formulate a license that says "restriction A is there to cover our ass, but we will not directly or indirectly enforce it against you"? Such phrasing in a license could be called an "isolating clause", but I don't know whether judges would consider it a circumvention of the law.
Maybe not the exact thing you're talking about, but that description reminds me of the Alliance for Open Media -- their codec licenses are royalty-free, but the same terms revoke your usage rights if you sue anyone for the use of these formats.