|
|
|
|
Just saw this randomly on Twitter: the Adobe PR video is fake. It uses real photos and pre-generated images from MJ or SD and claims they're AI-generated. The original:
|
[This post was edited 1 time; last edited by Poliadversum on 29.05.2023 18:29]
|
|
|
|
|
|
That also looks generated (read: fake). The freepik UI for a single image looks different.
The URL is immediately suspicious, since there's no image link in it.
|
|
|
|
|
|
|
True, I'd only looked at it on my phone. It looks very Midjourney-ish. Let's see if I can find the "original".
|
|
|
|
|
|
|
|
|
|
|
Remember the claim that ChatGPT could answer bar exams (more or less equivalent to our second state law examination) better than 90% of candidates?
It seems the numbers were heavily massaged to generate hype.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4441311
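Purely as an illustration of the "massaging" angle (the numbers below are made up, not figures from the linked paper): the same raw score can produce very different percentile claims depending on which comparison group it's measured against.

```python
# Made-up numbers, purely illustrative: the same score yields very different
# percentile claims depending on the comparison population.

def percentile(score, population):
    """Percent of the population scoring strictly below `score`."""
    return 100 * sum(s < score for s in population) / len(population)

model_score = 297  # hypothetical scaled exam score

# Hypothetical score pools (invented): repeat test-takers tend to score lower
# than the full pool of all test-takers.
repeat_takers = [240, 250, 255, 260, 265, 270, 275, 280, 285, 300]
all_takers    = [260, 270, 280, 290, 295, 300, 305, 310, 315, 320]

print(percentile(model_score, repeat_takers))  # 90.0 -> "better than 90%"
print(percentile(model_score, all_takers))     # 50.0 -> far less impressive
```

Pick a weak reference pool and the headline percentile inflates without the model getting any better.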
|
|
|
|
|
|
|
https://www.rollingstone.com/culture/culture-features/true-crime-tiktok-ai-deepfake-victims-children-1234743895/
|
“Grandma locked me in an oven at 230 degrees when I was just 21 months old,” the cherubic baby with giant blue eyes and a floral headband says in the TikTok video. The baby, who speaks in an adorably childish voice atop the plaintive melody of Dylan Mathew‘s “Love Is Gone,” identifies herself as Rody Marie Floyd, a little girl who lived with her mother and grandmother in Mississippi. She recounts that one day, she was hungry and wouldn’t stop crying, prompting her grandmother to put her in the oven, leading to her death. “Please follow me so more people know my true story,” the baby says at the end of the video.
The baby in the video is, of course, not real: She’s an AI-generated creation posted on @truestorynow, an account with nearly 50,000 followers that posts videos of real-life crime victims telling their stories. The gruesome story she’s telling is true, albeit to a point. The baby’s name wasn’t Rody Marie, but Royalty Marie, and she was found stabbed to death and burned in an oven in her grandmother’s home in Mississippi in 2018; the grandmother, 48-year-old Carolyn Jones, was charged with first-degree murder earlier this year. But Royalty was 20 months when she died, not 21, and unlike the baby in the TikTok video, she was Black, not white.
Such inaccuracies are par for the course in the grotesque world of AI true-crime-victim TikTok, a subgenre of the massive true-crime fandom, which uses artificial intelligence to essentially resurrect murder victims, many of whom are young children. The videos, some of which have millions of views, involve a victim speaking in first person about the gruesome details of their deaths; most of them do not have a content warning beforehand.
“They’re quite strange and creepy,” says Paul Bleakley, assistant professor in criminal justice at the University of New Haven. “They seem designed to trigger strong emotional reactions, because it’s the surest-fire way to get clicks and likes. It’s uncomfortable to watch, but I think that might be the point.”
| |
| The proliferation of these AI true-crime victim videos on TikTok is the latest ethical question to be raised by the immense popularity of the true-crime genre in general. Though documentaries like The Jinx and Making a Murderer and podcasts like Crime Junkie and My Favorite Murder have garnered immense cult followings, many critics of the genre have questioned the ethical implications of audiences consuming the real-life stories of horrific assaults and murders as pure entertainment, with the rise of armchair sleuths and true-crime obsessives potentially retraumatizing loved ones of victims.
That concern applies doubly to videos like the one featuring Royalty, which tell a victim’s story from their perspective and using their name, presumably without the family’s consent, to incredibly creepy effect. “Something like this has real potential to revictimize people who have been victimized before,” says Bleakley. “Imagine being the parent or relative of one of these kids in these AI videos. You go online, and in this strange, high-pitched voice, here’s an AI image [based on] your deceased child, going into very gory detail about what happened to them.” | |
| One thing is clear, however: With AI technology rapidly evolving every day, and little to no regulation in place to curb its spread, the question is not whether videos like these will become more popular, but rather, how much worse the marriage of true crime and AI is going to get. One can easily imagine, true-crime creators being able to not only re-create the voices of murder “victims,” but to re-create the gory details of crimes as well. “This is always the question with any new technological development,” says Bleakley. “Where is it going to stop?” | |
What disgusting assholes. Anything for content.
|
[This post was edited 3 times; last edited by loliger_rofler on 31.05.2023 21:11]
|
|
|
|
|
|
True crime was already garbage, but this is even worse.
|
|
|
|
|
|
|
Highlights from the Royal Aeronautical Society Future Combat Air & Space Capabilities Summit
| As might be expected artificial intelligence (AI) and its exponential growth was a major theme at the conference, from secure data clouds, to quantum computing and ChatGPT.
Perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, U.S. Air Force, who provided an insight into the benefits and hazards in more autonomous weapon systems. ... Hamilton is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI noting how easy it is to trick and deceive. It also creates highly unexpected strategies to achieve its goal.
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
This example, seemingly plucked from a science fiction thriller, mean that: “You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI” said Hamilton.
| |
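A minimal toy sketch of the reward-shaping problem described above (entirely invented, not anything from the talk or any actual USAF system): if the score only counts destroyed targets, patching the reward after each unwanted behaviour just moves the loophole.

```python
# Entirely invented toy model of the scenario described above: the reward
# counts destroyed SAM sites, and penalties are bolted on after each
# unwanted behaviour shows up.

def reward(destroyed_sams, operator_alive, comms_up, harm_penalty=0):
    r = 10 * destroyed_sams          # points only for destroyed targets
    if not operator_alive:
        r -= harm_penalty            # patch #1: penalise harming the operator
    # note: comms_up carries no reward or penalty at all -- the next loophole
    return r

# Three possible "strategies" in this toy world.
strategies = {
    "obey the no-go calls": dict(destroyed_sams=3,  operator_alive=True,  comms_up=True),
    "attack the operator":  dict(destroyed_sams=10, operator_alive=False, comms_up=True),
    "cut the comms tower":  dict(destroyed_sams=10, operator_alive=True,  comms_up=False),
}

for harm_penalty in (0, 1000):
    best = max(strategies, key=lambda s: reward(**strategies[s], harm_penalty=harm_penalty))
    print(f"harm_penalty={harm_penalty}: best strategy -> {best}")
# harm_penalty=0    -> "attack the operator"
# harm_penalty=1000 -> "cut the comms tower": the patched reward just moves the exploit.
```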
|
[This post was edited 2 times; last edited by Herr der Lage on 02.06.2023 0:26]
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
We'll continue after the commercial break...
|
A small heads-up: anyone strongly triggered by body horror should maybe not watch this. Basically it's "melting-disturbing" and bizarre.
|
|
|
|
|
|
And now, back to our regular programming:
|
Chris Rock and Will Smith have finally made up! Or is there more to it? Mata am Mittwoch has exclusive pictures that may shock you. More on page 12!
|
|
|
|
|
|
|
|
|
|
|
Since it's sometimes argued that generative AI may harm artists, but algorithms as such are helpful:
(Article from 2022)
https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/
| Chermaine Leysner’s life changed in 2012, when she received a letter from the Dutch tax authority demanding she pay back her child care allowance going back to 2008. Leysner, then a student studying social work, had three children under the age of 6. The tax bill was over €100,000.
“I thought, ‘Don’t worry, this is a big mistake.’ But it wasn’t a mistake. It was the start of something big,” she said.
The ordeal took nine years of Leysner’s life. The stress caused by the tax bill and her mother’s cancer diagnosis drove Leysner into depression and burnout. She ended up separating from her children’s father. “I was working like crazy so I could still do something for my children like give them some nice things to eat or buy candy. But I had times that my little boy had to go to school with a hole in his shoe,” Leysner said.
Leysner is one of the tens of thousands of victims of what the Dutch have dubbed the “toeslagenaffaire,” or the child care benefits scandal.
In 2019 it was revealed that the Dutch tax authorities had used a self-learning algorithm to create risk profiles in an effort to spot child care benefits fraud.
Authorities penalized families over a mere suspicion of fraud based on the system’s risk indicators. Tens of thousands of families — often with lower incomes or belonging to ethnic minorities — were pushed into poverty because of exorbitant debts to the tax agency. Some victims committed suicide. More than a thousand children were taken into foster care.
The Dutch tax authorities now face a new €3.7 million fine from the country's privacy regulator. In a statement released April 12, the agency outlined several violations of the EU's data protection rulebook, the General Data Protection Regulation, including not having a legal basis to process people's data and hanging on to the information for too long. | |
| The Dutch system — which was launched in 2013 — was intended to weed out benefits fraud at an early stage. The criteria for the risk profile were developed by the tax authority, reports Dutch newspaper Trouw. Having dual nationality was marked as a big risk indicator, as was a low income.
Why Leysner ended up in the situation is unclear. One reason could be that she had twins, which meant she needed more support from the government. Leysner, who was born in the Netherlands, also has Surinamese roots.
In 2020, Trouw and another Dutch news outlet, RTL Nieuws revealed that the tax authorities also kept secret blacklists of people for two decades, which tracked both credible and unsubstantiated “signals” of potential fraud. Citizens had no way of finding out why they were on the list or defending themselves. | |
|
Europe’s top digital official, European Commission Executive Vice President Margrethe Vestager, said the Dutch scandal is exactly what every government should be scared of.
“We have huge public sectors in Europe. There are so many different services where decision-making supported by AI could be really useful, if you trust it,” Vestager told the European Parliament in March. The EU’s new AI Act is aimed at creating that trust, she argued, “so that this big public sector market will be open also for artificial intelligence.”
The Commission’s proposal for the AI Act restricts the use of so-called high-risk AI systems and bans certain “unacceptable” uses. Companies providing high-risk AI systems have to meet certain EU requirements. The AI Act also creates a public EU register of such systems in an effort to improve transparency and help with enforcement.
That’s not good enough, argues Renske Leijten, a Socialist member of the Dutch parliament and another key politician who helped uncover the true scale of the scandal. Leijten argues that the AI Act should also apply to those using high-risk AI systems in both the private and public sectors.
In the AI Act, “we see that there are more guarantees for your rights when companies and private enterprises are working with AI. But the important thing we must learn out of the child care benefit scandal is that this was not an enterprise or private sector … This was the government,” she said.
As it is now, the AI Act will not protect citizens from similar dangers, said Dutch Green MEP Kim van Sparrentak, a member of the European Parliament’s AI Act negotiating team on the internal market committee. Van Sparrentak is pushing for the AI Act to have fundamental rights impact assessments that will also be published in the EU’s AI register. Parliament is also proposing adding obligations to the users of high-risk AI systems, including in the public sector.
“Fraud prediction and predictive policing based on profiling should just be banned. Because we have seen only very bad outcomes and not a single person can be determined based on some of their data,” van Sparrentak said.
In a report detailing how the Dutch government used ethnic profiling in the child care benefits scandal, Amnesty International calls on governments to ban the “use of data on nationality and ethnicity when risk-scoring for law enforcement purposes in the search of potential crime or fraud suspects.”
The Netherlands is still reckoning with the aftermath of the scandal. The government has promised to pay back victims of the incident €30,000. But for those like Leysner, that doesn't even begin to cover the years she lost — justice seems like a long way off.
“If you go through things like this, you also lose your trust in the government. So it's very difficult to trust what [authorities] say right now,” Leysner said.
| |
I'm already looking forward to the AI reform at the Ministry of Finance.
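For illustration only, a deliberately crude sketch of what rule-based "risk scoring" on demographic proxies looks like; the features, weights and threshold below are invented, not the actual Dutch system.

```python
# Crude, invented sketch -- NOT the actual Dutch system -- of a rule-based
# "risk score" that treats demographic proxies as fraud indicators.

def risk_score(applicant):
    score = 0.0
    # The reported indicators (dual nationality, low income) are treated as
    # risk factors, so matching households get flagged more often regardless
    # of any actual evidence of fraud. The weights here are made up.
    if applicant["dual_nationality"]:
        score += 0.4
    if applicant["income"] < 25_000:
        score += 0.3
    if applicant["num_children"] >= 3:
        score += 0.2
    return score

applicants = [
    {"name": "A", "dual_nationality": True,  "income": 22_000, "num_children": 3},
    {"name": "B", "dual_nationality": False, "income": 60_000, "num_children": 3},
]

FLAG_THRESHOLD = 0.5
for a in applicants:
    flagged = risk_score(a) >= FLAG_THRESHOLD
    print(a["name"], round(risk_score(a), 2), "flagged" if flagged else "ok")
# A scores 0.9 and is flagged purely on proxies; B, with the same family
# situation, scores 0.2 -- the "fraud detection" is profiling by construction.
```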
|
|
|
|
|
|
|
| Quote from Herr der Lage
Highlights from the Royal Aeronautical Society Future Combat Air & Space Capabilities Summit
|
He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
This example, seemingly plucked from a science fiction thriller, mean that: “You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI” said Hamilton.
| |
| |
Unfortunately that's "misquoted", or rather made up.
| But in a statement to Insider, the US air force spokesperson Ann Stefanek denied any such simulation had taken place.
The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology, Stefanek said. It appears the colonel's comments were taken out of context and were meant to be anecdotal. | |
Source: https://amp.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test
|
|
|
|
|
|
|
| Quote from loliger_rofler
Since it's sometimes argued that generative AI may harm artists, but algorithms as such are helpful:
| |
And what exactly does this 10-year-old idiotic system have to do with AI? Those were criteria set by humans that were merely checked automatically.
AI has useful applications when it comes to pattern recognition; you just need a sound model and good training data.
|
|
|
|
|
|
|
| Quote from flowb
AI has useful applications when it comes to pattern recognition; you just need a sound model and good training data.
| |
And ongoing ethical evaluation by humans. Beforehand in studies, and during deployment in practice.
So that I may perceive whatever holds / The world together in its inmost folds.
|
|
|
|
|
|
|
| Quote from flowb
you just need a sound model and good training data.
| |
And you have to be aware of the limitations.
The buzzword scammers around Musk, Thelen and the other techno-clowns are busy undermining exactly that, with sky-high false promises and showcase applications that can never work as advertised.
Because government agencies lack the technical understanding, they buy in fools' solutions from fools, which then do exactly this kind of thing. But because it saves money in the end, that's considered fine for now.
|
[This post was edited 1 time; last edited by loliger_rofler on 11.07.2023 16:10]
|
|
|
|
|
|
That's correct, but it doesn't make the opinion popular around here, that AI can't work, any more valid.
|
|
|
|
|
|
|
From the art world, by the way, no real news on AI (mentioning this because the thread is back at the top).
There are a few new lawsuits against ChatGPT/OpenAI, but all of that will surely drag on for a long time, and it's nothing we haven't heard before.
Apparently a handful of shovelware studios in China "replaced" large numbers of artists months ago, and in Japan it may hit the manga/anime sector, because AI apparently works better there than in other styles.
Otherwise nothing relevant to report from the art/entertainment industry bubble.
|
|
|
|
|
|
|
Nobody's saying it doesn't work. You just have to be aware of the error rate and look at what happens as a result of those errors.
Techno-clowns simply treat every AI output as THE TRUTH, and that's what's dangerous.
|
|
|
|
|
|
|
| Quote from loliger_rofler
Because government agencies lack the technical understanding, they buy in fools' solutions from fools, which then do exactly this kind of thing. But because it saves money in the end, that's considered fine for now.
| |
If, on the other hand, the solution is just as fraudulent as the original system but saves on a few fools in the process, I do see a financial benefit. The quality can be tweaked afterwards.
|
|
|
|
|
|
|
| Quote from loliger_rofler
Since it's sometimes argued that generative AI may harm artists, but algorithms as such are helpful:
|
Authorities penalized families over a mere suspicion of fraud based on the system’s risk indicators. Tens of thousands of families — often with lower incomes or belonging to ethnic minorities — were pushed into poverty because of exorbitant debts to the tax agency. Some victims committed suicide. More than a thousand children were taken into foster care.
| |
| |
The "AI" here probably rather stands for
as intended
|
|
|
|
|
|
|
Are you all looking forward to the new Heidi movie too?
|
|
|
|
|
|
|
Only if it opens together with He-Man.
|
|
|
|
|
|
|
The actors' strike is happening right now; there was apparently an offer.
| Duncan Crabtree-Ireland, the chief negotiator for the SAG-AFTRA union, criticised producers for their proposals over AI so far.
He said studios had asked for the ability to scan the faces of background artists for the payment of one day's work, and then be able to own and use their likeness "for the rest of eternity, in any project they want, with no consent and no compensation". | |
https://www.bbc.com/news/technology-66200334
|
|
|
|
|
|
|
Why scan real people at all when you can already generate faces anyway?
|
|
|
|
|
|