View Full Version : AI Image Generation
TheDemonLord
21st November 2022, 19:56
Has anyone here tried it?
I had a dabble recently, after seeing some of the imagery created from the lyrics of 'Master of Puppets' - and was thoroughly impressed, so I decided to give it a go.
https://creator.nightcafe.studio
Here is the one I used - and this is what 'I' created.
[attachments 351888, 351889]
In case anyone is curious, the prompts I gave it were 'alluring 40k Female Assassin' and 'alluring warhammer 40K female assassin inquisitor fierce', as well as picking an art style (which is essentially a whole bunch of prompts in the background).
The site works off of credits (each image, depending on Complexity of run-time - aka Compute resource, has a cost) - you can earn credits by sharing images or signing up for Premium.
Either way, I think it's kinda fun.
Edit:
Cause we are on a motorbike forum:
'Isle of Man TT Guy Martin motorbike speed knee down' - and using the photo theme
[attachment 351890]
george formby
22nd November 2022, 15:48
I have a couple of mates who dabble with this and come up with images for specific promotions.
Some of the stuff I have seen is absolutely mental, freaky, almost scary.
Cheers for the link, might have a wee dabble. Nubile + strawberries + Dachshund..
TheDemonLord
22nd November 2022, 16:41
I have a couple of mates who dabble with this and come up with images for specific promotions.
Some of the stuff I have seen is absolutely mental, freaky, almost scary.
Cheers for the link, might have a wee dabble. Nubile + strawberries + Dachshund..
It's great fun.
One of these is again, Warhammer 40K - asking it for Space Wolves - and I'm very impressed, it's got the colours right (blue and Yellow) and they appear to be wearing something akin to Power Armour, even with the backpack and the runic belt.
It's not quite there, as it's pictured with the head of a Wulfen (major 40K Nerd lore happening), but it's damn close.
The other two - I decided to go with a more freakish theme(s)
[attachments 351895, 351896, 351897]
george formby
29th November 2022, 13:29
This article may, or may not, be of interest to you TDL.
https://www.bbc.com/future/article/20221123-the-weird-and-wonderful-art-created-when-ai-and-humans-unite
The idea of linking literary AI and art AI is interesting, custom work ideas and 3d printers spring immediately to mind.
sugilite
6th December 2022, 19:39
On this very subject - this is scary.
https://www.abc.net.au/news/2022-11-26/loab-age-of-artificial-intelligence-future/101678206
TheDemonLord
7th December 2022, 08:28
On this very subject - this is scary.
https://www.abc.net.au/news/2022-11-26/loab-age-of-artificial-intelligence-future/101678206
It is very interesting.
There are aspects of AI which are a mirror into those places we don't talk about due to politeness.
george formby
7th December 2022, 09:18
It is very interesting.
There are aspects of AI which are a mirror into those places we don't talk about due to politeness.
For sure. I haven't read much on ensuring that AI has decent human sensibilities.
TheDemonLord
7th December 2022, 09:37
For sure. I haven't read much on ensuring that AI has decent human sensibilities.
I've read a little, but more in terms of the opposite direction - why AI seems to say the 'unsayable'.
For example - I think it was Amazon that did a trial run of having an AI analyze CVs (they never actually used it). The idea being that an AI would have no Racial or Gender or Political biases - it's pure data - and so would be impartial and only select the best candidate.
They initially trained it to look for Excellence... which meant it started automatically filtering out all the Women's applications.
So they went in to try and correct this (IIRC it was told not to look for indicators of the applicant's Gender).
However, the AI would then simply look for other things, like going to a Girls-only school or a Women's chess club, and would filter those out.
Eventually they shut it down.
The interesting question is 'why'? And this isn't an isolated example - there have been numerous AIs that have gotten some very spicy outcomes (there's another one where an AI completes the phrase "A Muslim walks into a..." - you can use your imagination as to what sort of spicy results it came up with).
If you consider the CV AI, if your criteria is being the best, the absolute best, then the AI very quickly learned that all the people that were the absolute best... were men. To quote Jordan Peterson "The Best woman in the world can beat most men, the Best man in the world can beat all women" - and so the AI makes the following logical conclusion:
If I am to look for the best of the best, and Men are the best of the best, then it's logical to screen out all women, as they will never be the best of the best.
It's an absolutely ruthless, yet nonetheless logically correct, conclusion.
Now, in the real world, we know to a degree that the absolute peak of human performance is almost always going to be Male (both physical and mental) - but we also know that most situations don't require the absolute peak of human performance - McDonald's doesn't require an Elon Musk or Bill Gates type genius to run a drive-thru - so in the real world, we don't automatically filter out Women because:
1: Holy shit that would be sexist as hell
2: It would also be illegal as hell
3: The vast vast vast majority of situations fall within the capabilities of an average person (be they men or women)
However it does show that there is something that is 'true' but in reality is unspeakable and morally wrong - yet an AI doesn't have those latter concepts.
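The proxy effect described in that post can be sketched in a few lines of Python. This is a toy illustration with made-up features and numbers, not Amazon's actual system: even with gender stripped from the inputs, a learner scoring features against a biased shortlist history will rediscover it through a correlated proxy (here a hypothetical "women's chess club" flag).

```python
import random

random.seed(1)

# Hypothetical training data: each CV is a dict of features plus a label
# ("shortlisted") that historically skewed against women.
def make_cv():
    is_woman = random.random() < 0.5
    cv = {
        # Proxy feature: strongly correlated with gender, not with skill.
        "womens_chess_club": is_woman and random.random() < 0.6,
        "years_experience": random.randint(1, 20),
    }
    # Biased history: past shortlists only ever contained men.
    shortlisted = (not is_woman) and cv["years_experience"] > 5
    return cv, shortlisted

data = [make_cv() for _ in range(2000)]

# A naive learner: measure how strongly the proxy feature predicts
# the historical shortlisting outcome. Gender itself never appears.
club_yes = [s for cv, s in data if cv["womens_chess_club"]]
club_no = [s for cv, s in data if not cv["womens_chess_club"]]
rate_yes = sum(club_yes) / len(club_yes)
rate_no = sum(club_no) / len(club_no)

# Even with gender removed from the features, the proxy gives it away,
# so an optimiser chasing historical "excellence" filters on it.
print(f"shortlist rate, club members: {rate_yes:.2f}")
print(f"shortlist rate, non-members:  {rate_no:.2f}")
```

The point of the sketch is that the model never needs to "know" gender; any feature correlated with it inherits the bias baked into the labels.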
pzkpfw
7th December 2022, 16:21
That didn't sound right. The idea that Men were implicitly better (at almost anything) sounded more like a result of biased training. Accepting that idea sounds like incel behaviour.
First google result on this: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.
That puts a very different spin on it, and seems to confirm the biased training idea. Give it a few years and they'll try again with a better dataset.
(Many years ago I read of a U.S. military project to train an A.I. to spot tanks. They showed it lots of pictures of trees and bushes and stuff - with tanks hiding behind some of them. It learned to spot them very well in their set of pics. Then they realised all the pictures with tanks had been taken with different camera settings, and it had basically learned to spot the darker pics. (I imagine this project would work better now.))
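That tank anecdote can be reproduced as a toy simulation (synthetic numbers standing in for photos, and a plain brightness threshold standing in for the classifier - an assumption for illustration, not the actual military project): the spurious correlate separates the training set perfectly and then collapses to a coin flip when the lighting changes.

```python
import random

random.seed(0)

# Toy stand-in for the tank story: in the training set, every photo
# with a tank also happens to be dark, so brightness alone "solves" it.
def make_photos(n, tanks_are_dark):
    photos = []
    for _ in range(n):
        has_tank = random.random() < 0.5
        if tanks_are_dark:
            # Confound: tank photos were all shot on an overcast day.
            brightness = random.gauss(0.2 if has_tank else 0.8, 0.05)
        else:
            brightness = random.gauss(0.5, 0.2)  # lighting now unrelated
        photos.append((brightness, has_tank))
    return photos

def accuracy(photos, thresh):
    # Classify "tank" whenever the photo is darker than the threshold.
    return sum((b < thresh) == t for b, t in photos) / len(photos)

train = make_photos(500, tanks_are_dark=True)
test = make_photos(500, tanks_are_dark=False)

thresh = 0.5  # midpoint cleanly separates the two training clusters
print(f"train accuracy: {accuracy(train, thresh):.2f}")  # near-perfect
print(f"test accuracy:  {accuracy(test, thresh):.2f}")   # coin flip
```

Same mechanism as the CV screener debate above: the model latches onto whatever feature most cheaply explains the labels it was given.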
I'm in I.T. - I don't claim to be the "best of the best", but I do OK. And have worked with plenty of Women who were equal or (with no hurts feels) better than me. Amazon would be daft to not hire Women - for many reasons.
TheDemonLord
7th December 2022, 17:34
That didn't sound right. The idea that Men were implicitly better (at almost anything) sounded more like a result of biased training. Accepting that idea sounds like incel behaviour.
Not 'Men', a very very very very specific subset of Men. Think IT pioneers - even if you include Ada Lovelace, Grace Hopper, Hedy Lamarr etc., you've got to put them up against the likes of Babbage, Turing, Gates, Musk, Bezos, Jobs, Wozniak, Dorsey, Zuckerberg, Berners-Lee etc.
Pick almost any field, you'll see the same distribution, a very small number of people are responsible for most of the innovation within that field and the majority of those will be Men.
Even in fields where Women are the overwhelming Majority - see competitive Scrabble. This has nothing to do with Incel behavior - but your aversion to it is exactly the type of social taboo that an AI isn't burdened with.
First google result on this: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
That puts a very different spin on it, and seems to confirm the biased training idea. Give it a few years and they'll try again with a better dataset.
I disagree with the Bias theory, on two fronts:
1: A lot of the 'bias' studies have been shown to be bunk, the 'research' is not statistically derived.
2: Just having a large number of Males in the data does not explain why it filtered out Women.
Reading the brief from the Reuters article, it was looking for the top 5 candidates from, say, a sample of 100 CVs - it was looking for excellence. And what it worked out is simply an expression of my opening line - if all the absolute best in their fields share a common characteristic, then to save time, we will exclude anything that doesn't match that characteristic.
And the fact this continued, despite manual intervention to counter it, suggests that the statistical basis for it is very strong.
(Many years ago I read of a U.S. military project to train an A.I. to spot tanks. They showed it lots of pictures of trees and bushes and stuff - with tanks hiding behind some of them. It learned to spot them very well in their set of pics. Then they realised all the pictures with tanks had been taken with different camera settings, and it had basically learned to spot the darker pics. (I imagine this project would work better now.))
Stories like that highlight just how interesting this sort of process is.
I'm in I.T. - I don't claim to be the "best of the best", but I do OK. And have worked with plenty of Women who were equal or (with no hurts feels) better than me. Amazon would be daft to not hire Women - for many reasons.
Completely agree - but you and I, both in IT, aren't the very tip of the iceberg - we are both good enough (I hope...) for our respective roles. If you were to put my CV (as good as it is) up against the type of person that would, say, be doing an architectural lead role at the likes of Amazon, I'd be one of the 95 CVs filtered out as well.
Hoonicorn
7th December 2022, 20:23
I've seen music videos set to AI generated art, based on the lyrics. It's really surreal!
It's getting better all the time, I was so impressed by the images made by the computer animation team at Corridor Digital (see below) that use faces of friends to make story book pictures with the same characters.
Imagine making your own graphic novel or story art, album cover or posters?
https://youtu.be/W4Mcuh38wyM?t=23
george formby
2nd April 2023, 10:14
Create your own game character and talk with them... Astonishing how far things have come in 30 or so years.
https://newatlas.com/games/inworld-origins-ai-npc/
R650R
18th April 2023, 09:46
The implications for the mental health and political influence of the next generation of children are a real worry.
https://joannenova.com.au/2023/04/whats-worse-a-global-fleet-of-interconnected-intelligent-machines-or-8-billion-artificial-best-friends/
I mean, we are already seeing the damage an infiltration of the education system has done - just imagine when it's everywhere. Don't feed the machine.
george formby
18th April 2023, 12:31
The implications for the mental health and political influence of the next generation of children are a real worry.
https://joannenova.com.au/2023/04/whats-worse-a-global-fleet-of-interconnected-intelligent-machines-or-8-billion-artificial-best-friends/
I mean, we are already seeing the damage an infiltration of the education system has done - just imagine when it's everywhere. Don't feed the machine.
The current generations of kids, parents and grandparents are already firmly, and usually ignorantly, ensconced in their echo chambers.
KB is a blaring example of the impact. AI will only change the interface.
I enjoy reading this guy's perspective on the internet of things. Jaron Lanier. (https://www.theguardian.com/technology/2023/mar/23/tech-guru-jaron-lanier-the-danger-isnt-that-ai-destroys-us-its-that-it-drives-us-insane)
R650R
21st May 2024, 13:00
Have a listen to this guy trying to sell us the latest con job.
Your next PC has AI permanently installed on a separate chip.
It’s “safe” because it’s on “your” computer(that’s basically always accessible via net) instead of cloud.
And cough, cough ahem bullshit you have the “option” to turn it off.
How does it work? It's continually taking screenshots of everything you're doing on "your" computer, analysing, cataloguing and archiving that info. In the video it's shown as useful for the woman to find a brown leather bag she'd seen online somewhere.
Of course there's no reason at all the thought police or intellectual property thieves would ever use this to steal your inventions/creative works or accuse you of thought crime, Minority Report style.
https://www.youtube.com/watch?si=PM0FCiN3VxJppDKM&v=uHEPBzYick0&feature=youtu.be
nerrrd
21st May 2024, 17:14
Pretty soon you'll have no choice - they're all on board with these AI models now.
Just being on the internet puts your personal information out there; imagine what a scammer will be able to do with an AI when it comes to impersonating people.
Even Scarlett Johanssonsonson isn't safe.
https://www.stuff.co.nz/culture/350285488/scarlett-johansson-says-openai-copied-her-voice-after-she-said-no
R650R
25th May 2024, 15:49
Trust the science… it seems AI has been used to corrupt the peer review process…
Proving that unpaid anonymous review is worth every cent, the 217 year old Wiley science publisher “peer reviewed” 11,300 papers that were fake, and didn’t even notice. It’s not just a scam, it’s an industry. Naked “gobbledygook sandwiches” got past peer review, and the expert reviewers didn’t so much as blink.
Big Government and Big Money has captured science and strangled it. The more money they pour in, the worse it gets. John Wiley and Sons is a US $2 billion dollar machine, but they got used by criminal gangs to launder fake “science” as something real.
Things are so bad, fake scientists pay professional cheating services who use AI to create papers and torture the words so they look “original”. Thus a paper on ‘breast cancer’ becomes a discovery about “bosom peril” and a ‘naïve Bayes’ classifier became a ‘gullible Bayes’. An ant colony was labeled an ‘underground creepy crawly state’.
And what do we make of the flag to clamor ratio? Well, old fashioned scientists might call it ‘signal to noise’. The nonsense never ends.
A ‘random forest’ is not always the same thing as an ‘irregular backwoods’ or an ‘arbitrary timberland’ — especially if you’re writing a paper on machine learning and decision trees.
The most shocking thing is that no human brain even ran a late-night Friday-eye over the words before they passed the hallowed peer review and entered the sacred halls of scientific literature. Even a wine-soaked third year undergrad on work experience would surely have raised an eyebrow when local average energy became “territorial normal vitality”. And when a random value became an ‘irregular esteem’. Let me just generate some irregular esteem for you in Python?
If there was such a thing as scientific stand-up comedy, we could get plenty of material, not by asking ChatGPT to be funny, but by asking it to cheat. Where else could you talk about a mean square mistake?
Wiley — a mega publisher of science articles has admitted that 19 journals are so worthless, thanks to potential fraud, that they have to close them down. And the industry is now developing AI tools to catch the AI fakes (makes you feel all warm inside?)
Flood of Fake Science Forces Multiple Journal Closures
By Nidhi Subbaraman, The Wall Street Journal, May 14, 2024
Fake studies have flooded the publishers of top scientific journals, leading to thousands of retractions and millions of dollars in lost revenue. The biggest hit has come to Wiley, a 217-year-old publisher based in Hoboken, N.J., which Tuesday will announce that it is closing 19 journals, some of which were infected by large-scale research fraud.
In the past two years, Wiley has retracted more than 11,300 papers that appeared compromised, according to a spokesperson, and closed four journals. It isn’t alone: At least two other publishers have retracted hundreds of suspect papers each. Several others have pulled smaller clusters of bad papers.
Although this large-scale fraud represents a small percentage of submissions to journals, it threatens the legitimacy of the nearly $30 billion academic publishing industry and the credibility of science as a whole.
Scientific papers typically include citations that acknowledge work that informed the research, but the suspect papers included lists of irrelevant references. Multiple papers included technical-sounding passages inserted midway through, what Bishop called an “AI gobbledygook sandwich.” Nearly identical contact emails in one cluster of studies were all registered to a university in China where few if any of the authors were based. It appeared that all came from the same source.
One of those tools, the “Problematic Paper Screener,” run by Guillaume Cabanac, a computer-science researcher who studies scholarly publishing at the Université Toulouse III-Paul Sabatier in France, scans the breadth of the published literature, some 130 million papers, looking for a range of red flags including “tortured phrases.”
Cabanac and his colleagues realized that researchers who wanted to avoid plagiarism detectors had swapped out key scientific terms for synonyms from automatic text generators, leading to comically misfit phrases. “Breast cancer” became “bosom peril”; “fluid dynamics” became “gooey stream”; “artificial intelligence” became “counterfeit consciousness.” The tool is publicly available.
"Generative AI has just handed them a winning lottery ticket," Eggleton of IOP Publishing said. "They can do it really cheap, at scale, and the detection methods are not where we need them to be. I can only see that challenge increasing."
The ABC in Australia even wrote about this, but only because it worries about the loss of public faith in its pet universities:
For the ABC, peer review is like the Bible, and universities are the Church. The public must believe!
So the ABC makes excuses… Oh! Those poor poor universities, forced to become billion dollar businesses selling defacto Australian-citizenships to children of rich Chinese families. If only they got more money, their Vice Chancellors wouldn’t have to make do with million dollar salaries, and punishing professors who pointed out fraud, and they’d have time to do research and prevent the fraud instead.
Wiley’s ‘fake science’ scandal is just the latest chapter in a broader crisis of trust universities must address
By Linton Besser, ABC News
It [the Wiley debacle] also illustrates what is just another front in a much broader crisis of trust confronting universities and scientific institutions worldwide.
For decades now, teaching standards and academic integrity have been under siege at universities which, bereft of public funding, have turned to the very lucrative business of selling degrees to international students.
Grappling with pupils whose English is inadequate, tertiary institutions have become accustomed to routine cheating and plagiarism scandals. Another fraud perfected by the internet age.
This infection — the commodification of scholarship, the industrialisation of cheating — has now spread to the heart of scientific, higher research.
With careers defined by the lustre of their peer-reviewed titles, researchers the world over are under enormous pressure to publish.
Suffer the researchers who are forced to pay for fake papers just so they can “do their job”? Sack the lot.
The ABC is part of the reason science is corrupt to the core. The ABC Science Unit is paid to hold junk-science’s feet to the fire, instead it provides cover for the pagan witchcraft that passes for modern research.
The rot at Wiley started decades ago, but it got caught when it spent US $298 million on an Egyptian publishing house called Hindawi. We could say we hope no babies were hurt by fake papers, but we know bad science already kills people. What we need are not "peer reviewed" papers but actual live face-to-face debate. Only when the best of both sides have to answer questions, with the data, will we get real science:
In March, it revealed to the NYSE a $US9 million ($13.5 million) plunge in research revenue after being forced to “pause” the publication of so-called “special issue” journals by its Hindawi imprint, which it had acquired in 2021 for US$298 million ($450 million).
Its statement noted the Hindawi program, which comprised some 250 journals, had been “suspended temporarily due to the presence in certain special issues of compromised articles”.
Many of these suspect papers purported to be serious medical studies, including examinations of drug resistance in newborns with pneumonia and the value of MRI scans in the diagnosis of early liver disease. The journals involved included Disease Markers, BioMed Research International and Computational Intelligence and Neuroscience.
The problem is only becoming more urgent. The recent explosion of artificial intelligence raises the stakes even further. A researcher at University College London recently found more than 1 per cent of all scientific articles published last year, some 60,000 papers, were likely written by a computer.
In some sectors, it’s worse. Almost one out of every five computer science papers published in the past four years may not have been written by humans.
Even if one in five computer science papers are written by computers, this is just the tip of the iceberg of the rot at the core of “peer reviewed research”. The real rot is not the minor fraudsters making papers that no one reads to pad out their curriculum vitae. It’s the institutional parasites taking billions from taxpayers to create modeled garbage to justify the theft of trillions. But that’s another story.
PS: Who knew, academic journals were a $30 billion dollar industry?
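The "tortured phrases" screening described in the article above can be sketched as a simple lookup. This is a toy, not Cabanac's actual Problematic Paper Screener (which scans some 130 million papers with far more machinery); the phrase pairs are the examples quoted in the article.

```python
# Known "tortured phrase" fingerprints: synonym-swapped versions of
# standard technical terms (pairs taken from the article above).
TORTURED = {
    "bosom peril": "breast cancer",
    "gooey stream": "fluid dynamics",
    "counterfeit consciousness": "artificial intelligence",
    "irregular backwoods": "random forest",
    "gullible bayes": "naive Bayes",
    "irregular esteem": "random value",
}

def flag_tortured_phrases(text):
    """Return (tortured, likely_original) pairs found in the text."""
    lowered = text.lower()
    return [(t, orig) for t, orig in TORTURED.items() if t in lowered]

# A made-up abstract of the kind the screener would flag:
abstract = (
    "We train a counterfeit consciousness model with a gullible Bayes "
    "classifier to study gooey stream in porous media."
)
for tortured, original in flag_tortured_phrases(abstract):
    print(f"'{tortured}' -> probably '{original}'")
```

The real screener also hunts for other red flags (citation rings, nonsense references), but the core idea - pattern-matching phrases no human author would write - really is this simple.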
pete376403
25th May 2024, 16:17
That post looks like it was written by AI.
"What I like is we’re smart enough to invent AI, dumb enough to need it, and still so stupid we can’t figure out if we did the right thing" https://www.thefp.com/p/jerry-seinfeld-duke-commencement
Pursang
25th May 2024, 21:38
That post looks like it was written by AI.
Certainly looks like plenty of the 'A' part!
SaferRides
25th May 2024, 21:57
Quora is being flooded by AI and it would not surprise me if the upvotes are also coming from fake, AI accounts.
If you think misinformation is a problem now, it's only going to get worse.
Sent from my SM-S906E using Tapatalk
Berries
26th May 2024, 09:01
The good old days when AI meant AI -
The definitive guide to AI (https://en.wikipedia.org/wiki/A1_road_(Great_Britain))
I get awfully confused these days.
R650R
26th May 2024, 10:23
That post looks like it was written by AI.
"What I like is we’re smart enough to invent AI, dumb enough to need it, and still so stupid we can’t figure out if we did the right thing" https://www.thefp.com/p/jerry-seinfeld-duke-commencement
That's good you've picked up on that. It's actually the first form of AI that we've all been using for the last 30 years: Ctrl+C and Ctrl+V. There are three news articles combined there, just due to my laziness.
Now just imagine the current reality: AI is feeding itself on the last 30 years of cherry-picked cut-and-paste arguments from both sides of any debate, and the best and brightest minds are probably out doing intelligent stuff in the real world instead of wasting time on internet forums.
I've often wondered if the proliferation of free online war games is being used to feed an AI on battle strategies and outcomes. The World of Warships/… franchise is Russian-based. Just imagine we're about to go into WW4 with them and a supercomputer is ticking away, understanding better than ourselves how we fight.
nerrrd
26th May 2024, 16:18
Was listening to the radio this morning, apparently the AI models are learning to do things that the scientists never expected them to be able to do, in ways which they can’t exactly explain yet.
https://www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/
Things could get very interesting as the tech-bros trip over each other to integrate it into our daily lives and give it instant access to all the products and services we’ve offloaded to the internet.