Lensa, an AI portrait app, has seen a surge in popularity. However, many artists are questioning the ethics of AI art.

Lensa, the AI portrait app, has reignited discussion among artists over the ethics of creating images with models that have been trained using other people’s work.

Lensa AI is an easy-to-use profile generator that’s affordable and accessible online. But in digital art circles, the popularity of artificial intelligence-generated art has raised major privacy and ethics concerns.

Lensa launched in 2018 as a photo-editing app and went viral after releasing its “magic avatars” feature. The app takes at least 10 images from a user and runs them through the Stable Diffusion neural network to create portraits in various digital art styles. Lensa AI portraits, from abstract illustrations to photorealistic paintings, have been flooding social media, and the app claimed the No. 1 spot in the iOS App Store’s “Photo & Video” category earlier this year.

However, the app’s rapid growth and the rise of AI-generated art in recent months have rekindled debates about the ethics of using models trained on other people’s work to create images.

Lensa is fraught with controversy. Multiple artists have accused Stable Diffusion’s creators of using their artwork without permission. Many people in the digital art industry have expressed concerns about AI models churning out images in mass quantities at such low prices, particularly when those images imitate styles that actual artists have spent many years refining.

Users receive 50 unique avatars for $7.99, which artists note is a fraction of the cost of a single portrait commission.

Lensa and other companies claim they are “bringing art to all people,” said artist Karla Ortiz. “But what they really are bringing is forgery and art theft [and] copying the masses.”

Prisma Labs (the company behind Lensa) did not respond to our requests for comment.

Prisma posted a long Twitter thread Tuesday morning addressing concerns about AI art replacing actual artists.

The company tweeted: “As theater didn’t die and accounting software didn’t eliminate the profession, AI will not replace artists but it can become a great assistant tool.” It added that it believes AI-powered tools will only increase the value and appreciation of human-made art and its creative excellence, much as industrialization made handcrafted works more valuable.

According to the company, AI-generated images cannot be described as exact copies of any particular artwork. The thread did not address allegations that many artists never consented to their work being used for AI training.

AI models can be a powerful tool for artists. Many have noted that models can generate reference images that would otherwise be difficult to find online; others have written about using them to visualize scenes from screenplays or novels. But the central issue of the AI art debate is privacy: while the value of art is subjective, the right to privacy is fundamental.

Ortiz is well-known for creating concept art for films like “Doctor Strange,” but she also paints fine art portraits. She said she felt “violated of identity” when she discovered that her art was part of a dataset Lensa used to train the AI model that generates its avatars.

Prisma Labs told TechCrunch that it deletes user photos from the cloud services it uses to process images. Lensa’s user agreement states that the company may use users’ photos, videos and other content for the purpose of “operating or improving Lensa.”

Lensa stated in a Twitter thread that it uses “a separate model for each user” and that the “associated model” is permanently deleted from its servers once each user’s avatars are generated.

According to artists who spoke with NBC News, Lensa uses user data to train its AI model, contrary to the app’s user agreement.

Jon Lam, a storyboard artist at Riot Games, said: “We’re learning how even if it’s used for your own inspiration, it’s still being trained with other people’s data. This thing keeps learning as people use it more. It just keeps getting worse and worse for everyone who uses it.”

Image synthesis models such as Google Imagen, DALL-E and Stable Diffusion are trained on millions of images. From that data, a model learns to associate the arrangement of pixels in an image with the image’s metadata, which typically includes text descriptions of the subject and artistic style.
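The pixels-to-metadata association described above can be caricatured in a few lines of Python. The sketch below is purely illustrative and reflects none of Stable Diffusion’s actual architecture: it memorizes pairings between tiny pixel arrays and caption text, then retrieves the stored image whose caption best overlaps a new prompt, standing in for the text-conditioned generation a real model learns from millions of image–caption pairs.

```python
import numpy as np

# Toy "dataset": tiny solid-color images paired with text metadata,
# standing in for the image/caption pairs real models train on.
training_pairs = {
    "red square": np.full((4, 4, 3), (255, 0, 0)),
    "green square": np.full((4, 4, 3), (0, 255, 0)),
    "blue square": np.full((4, 4, 3), (0, 0, 255)),
}

def generate(prompt: str) -> np.ndarray:
    """Return the stored image whose caption best overlaps the prompt.

    A real diffusion model samples new pixels conditioned on the text;
    this lookup merely illustrates the pixel<->metadata association.
    """
    words = set(prompt.lower().split())
    def score(caption):
        return len(words & set(caption.split()))
    best_caption = max(training_pairs, key=score)
    return training_pairs[best_caption]

image = generate("a red square in watercolor style")
print(image[0, 0])  # top-left pixel of the retrieved "red" image
```

Where this toy lookup can only return images it has memorized, a trained diffusion model synthesizes new pixel arrangements, which is why its outputs can blend the styles of many artists in its training data without copying any single work exactly.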

Based on these learned associations, the model can generate new images. Midjourney, for example, created unsettling images when given the prompt “biologically correct anatomical description for a birthday cake”; Reddit users described the results as “brilliantly strange” and “like something out of a dream.”

To promote its Nutcracker season, the San Francisco Ballet used images created with Midjourney. Kim Lundgren, the ballet’s chief marketing officer, said combining traditional live performances with AI-generated art was the best way to “add an unexpected twist to a holiday tradition.” The campaign was heavily criticized by artist advocacy groups. A spokesperson for the ballet did not immediately respond to a request for comment.

Ortiz said the images only look as good as they do because of the nonconsensual data taken from artists and the general public.

Ortiz was referring to the Large-scale Artificial Intelligence Open Network, or LAION, a nonprofit organization that provides free datasets for AI research. One of its datasets, LAION-5B, was used to train Stable Diffusion and develop Google Imagen, and contains publicly accessible images scraped from sites such as DeviantArt and Getty Images.

Models trained on LAION datasets have come under fire from artists whose work was included in the datasets without their permission. One artist found her face, taken from her private medical records, in the dataset using the Have I Been Trained website. Ars Technica reported that “thousands” of similar patient-record photos were also included.


Mateusz Urbanowicz (whose artwork was also included in LAION-5B) said that his fans had sent him AI-generated images with striking similarities to his watercolor illustrations.

It is clear that LAION “is not just a research project that somebody put on the internet for everybody to enjoy,” he said. Now that companies such as Prisma Labs are using it for commercial products, it’s evident the dataset is much more than that.

“Now we face the same problem as the music industry did with websites like Napster,” Urbanowicz said. “This was possibly made with good intentions, or without considering the moral implications.”

While the United States has strict copyright laws for the music and art industries, AI’s use of copyrighted material sits in a murky legal context. The Verge reported that using copyrighted materials to train AI models could fall under fair use, but the status of the content those models generate is less clear. Enforcement is also difficult, which leaves artists with little recourse.

Lam said companies just take everything because AI is a legal gray area, and they are exploiting it. “Technology always moves faster than law, and law is always trying to catch up,” he said.

There is also little legal precedent for action against commercial products that employ AI-trained software. Lam and others in the digital art community hope that a pending lawsuit against GitHub Copilot, a Microsoft product that uses an AI program trained on public code hosted on GitHub, will give artists a way to protect their work. In the meantime, Lam said, he is wary of sharing his own work online.

Lam isn’t the only one wary of posting art online. He said he heard from many students and early-career artists after his recent posts calling attention to AI art went viral on Twitter and Instagram.

Ortiz said the internet “democratized” art by allowing artists to share their work and connect with others. Lam said he has landed the majority of his jobs through his social media presence; posting online is crucial for securing career opportunities, and a password-protected portfolio site offers nothing like the exposure that comes from sharing work publicly.

“If nobody knows your art, they won’t go to your site,” Lam said. “It will be more difficult for students to get in the door.”

A watermark might not be enough to protect artists, either. Lauryn Ipsum, a graphic designer, shared examples in a tweet of the “mangled remains” of artists’ signatures visible in Lensa AI portraits.

Some believe that AI art generators are no different from aspiring artists who copy another artist’s style, which has itself long been a source of contention in art circles.

Days after the illustrator Kim Jung Gi died in October, a former game developer created an AI model that generates images in the artist’s unique brush-and-ink style. The creator described the model as an homage to Kim’s work, but it faced immediate criticism from other artists. Ortiz, a friend of Kim’s, said that Kim’s whole thing was teaching people how to draw, and that turning his life’s work into an AI model was “really discourteous.”

Urbanowicz said he is less concerned by a human artist taking inspiration from his drawings than by an AI model producing images he wouldn’t “ever make,” which could harm his brand. Someone could, for example, use his illustration style to generate “a store with watercolors that sells weapons or drugs” and post the image with his name attached.

“If someone creates art that is based on my style and makes a new piece of art, it’s their work. It is something they created. They learned as much from me as they did from other artists,” he said. “If I type in my name or store to create a new piece, it forces the AI to create art that I don’t want to make.”

Many artists and advocates are also concerned about whether AI art will diminish the value of work done by human artists.

Lam fears that companies will turn to AI-generated images because they are cheaper and faster than commissioning artists.

Urbanowicz noted that while AI models can be trained to copy the work of existing artists, they will never be able to create art that hasn’t been made yet. AI images that look exactly like his illustrations, he said, wouldn’t exist without the decades of work that preceded them. He is optimistic that people drawn to visual art will continue to pursue creative careers, even as apps like Lensa AI make the field’s future uncertain.

“Only that person can create their unique art,” Urbanowicz said. “AI cannot create the art they will make in 20 years.”
