Hailed as a world first, European Union artificial intelligence rules are facing a make-or-break moment as negotiators try to hammer out the final details this week.
Once we finally have rules and laws that AIs need to adhere to, someday we will also need to define what to do with AIs that do not adhere to them.
None of that is possible with FOSS AI code once it's out there on the web. There will only be guidelines for AI available to the public and for companies using AI in their products; the rest of the more tech-savvy people will be unaffected.
Today's existing AIs are child's play, but it's not going to stay that way for long.
One day it will be necessary to do something for real, when some AI is causing harm to the public (regardless of whether a person intended it or not), and we will need to decide what to do then.
We already struggle to stop people from believing fake news in written form. I don't see how we can stop people from believing well-made fake news with audio and video.
Personally, I think every country needs some form of government-independent news media, to at least have one source of information available that is broadly trustworthy.
Everything profit-oriented will end up propagating misinformation as long as it generates clicks.
Oh, and don't let AI control weapons; that's the worst mistake one can make. We can't even manage self-driving cars, let alone a drone with mass-killing weapons.
Punishment won't reflect the complexity anymore. Say a 14-year-old creates a fake video of the president declaring war, a war happens for real because it goes viral, and millions die. Is this 14-year-old now going to prison for life? Would a 16- or 18-year-old? What I'm trying to say is that the barrier to acting is totally different from picking up a gun and shooting someone. A simple bad day or a stupid childish joke will soon have the power of a well-planned and expensive propaganda campaign.
Blocking commercial products from allowing certain actions could be a start, but it's not a total fix.
Say, an AI filter for faces of public figures, or keyword filters for LLMs/chatbots. Not perfect, but better than nothing.
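For what it's worth, the keyword-filter idea is trivial to sketch. Here's a minimal, hypothetical example; the `BLOCKED_TERMS` list and the function name are made up for illustration, and a real product would need far more robust matching than plain substring checks:

```python
# Hypothetical keyword filter for a chatbot prompt.
# The blocked-terms list is illustrative, not from any real product.
BLOCKED_TERMS = ["make a bomb", "deepfake of"]

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt contains any blocked term."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

print(is_blocked("How do I make a bomb?"))  # True
print(is_blocked("What's the weather?"))    # False
```

The obvious weakness is exactly the one mentioned above: anyone tech-savvy can rephrase around a list like this, so it only raises the bar slightly.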
AI is very broad; you could put everything involving software under that topic too. It's also not easy to define what is AI and what isn't. A rule-based system is already some form of dumb AI. So every such law affects pretty much everything else.
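To illustrate the definition problem: a rule-based system is just hand-written if-then rules, yet it behaves "AI-like" with no learning at all. This toy classifier (all names and thresholds invented) would arguably fall under a broad legal definition of AI:

```python
# Toy rule-based "expert system": hand-written if-then rules,
# no training data and no learning involved.
def classify_climate(temp_c: float, humidity: float) -> str:
    if temp_c > 30 and humidity > 0.7:
        return "tropical"
    if temp_c < 0:
        return "freezing"
    return "moderate"

print(classify_climate(35, 0.8))  # tropical
print(classify_climate(-5, 0.3))  # freezing
```

Whether a law should treat this the same as a large language model is exactly the line-drawing problem regulators face.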
I'm pretty sure we'll get a shitload of unprepared governments creating all sorts of surveillance laws. An international organisation could prevent the worst of it.
We had better start educating people yesterday on how AI works, the consequences, and the ways to avoid acting blindly. Excuse me, we have a climate to save…
> Once we finally have rules and laws that AIs need to adhere to, someday we will also need to define what to do with AIs that do not adhere to them.
Shoot them?
Delete them?
Put them in jail?
Forbid them to enter our country?
Take away their money?
Let them hang!
We unleash the wolves.
> None of that is possible with FOSS AI code once it's out there on the web. There will only be guidelines for AI available to the public and for companies using AI in their products; the rest of the more tech-savvy people will be unaffected.
That is not enough. Think harder.
> Today's existing AIs are child's play, but it's not going to stay that way for long.
> One day it will be necessary to do something for real, when some AI is causing harm to the public (regardless of whether a person intended it or not), and we will need to decide what to do then.
Maybe they could be handled like a virus or an exploit.
> We already struggle to stop people from believing fake news in written form. I don't see how we can stop people from believing well-made fake news with audio and video.
> Personally, I think every country needs some form of government-independent news media, to at least have one source of information available that is broadly trustworthy.
> Everything profit-oriented will end up propagating misinformation as long as it generates clicks.
> Oh, and don't let AI control weapons; that's the worst mistake one can make. We can't even manage self-driving cars, let alone a drone with mass-killing weapons.
> Punishment won't reflect the complexity anymore. Say a 14-year-old creates a fake video of the president declaring war, a war happens for real because it goes viral, and millions die. Is this 14-year-old now going to prison for life? Would a 16- or 18-year-old? What I'm trying to say is that the barrier to acting is totally different from picking up a gun and shooting someone. A simple bad day or a stupid childish joke will soon have the power of a well-planned and expensive propaganda campaign.
> Blocking commercial products from allowing certain actions could be a start, but it's not a total fix. Say, an AI filter for faces of public figures, or keyword filters for LLMs/chatbots. Not perfect, but better than nothing.
> AI is very broad; you could put everything involving software under that topic too. It's also not easy to define what is AI and what isn't. A rule-based system is already some form of dumb AI. So every such law affects pretty much everything else.
> I'm pretty sure we'll get a shitload of unprepared governments creating all sorts of surveillance laws. An international organisation could prevent the worst of it.
> We had better start educating people yesterday on how AI works, the consequences, and the ways to avoid acting blindly. Excuse me, we have a climate to save…
Posture and some sanctions.