Adobe offers ‘IP safe’ generative AI tool to businesses
Firefly is a ‘commercially and legally safe’ tool that automatically attaches a Content Credentials tag flagging AI use | Software firm promises to protect customers from infringement claims.
In a bid to tackle transparency and IP protection in the use of AI, software giant Adobe has opened up its Firefly generative AI tool to businesses, with the promise of financial indemnification against IP claims.
The company announced last week (June 8) that its new Firefly for Enterprise, an expansion of the tool first launched in March this year, will not generate content based on other people’s or brands’ IP.
Instead, in line with its aim to enable the creation of ‘responsible content’, it explained that Firefly is trained on Adobe Stock images, openly licensed content and other public domain content where copyright has expired.
There will also be the opportunity to obtain an IP indemnity from Adobe for content generated by certain Firefly-powered workflows.
“If a customer is sued for infringement, Adobe would take over legal defence and provide some monetary coverage for those claims,” a company spokesperson said, as reported by VentureBeat.
Transparent credentials
Adobe says that in the name of transparency, Firefly automatically attaches a Content Credentials tag to its content, which indicates that generative AI was used.
Content Credentials were designed by Adobe in partnership with the Content Authenticity Initiative.
Credentials serve as a “digital nutrition label”, said the firm, showing data such as name, date and the tools used to create an image, as well as any edits made to that image.
The credentials remain associated with content wherever it is used, published or stored, enabling “proper attribution”.
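The “digital nutrition label” described above can be pictured as a simple structured record. The sketch below is purely illustrative: the field names are assumptions for the purpose of this example, not Adobe’s actual schema, which follows the Content Authenticity Initiative’s open standard.

```python
# Hypothetical sketch of the metadata a Content Credentials tag carries,
# based on the fields the article describes (creator name, date, tools
# used, and edits made). Field names are illustrative, not Adobe's schema.
credential = {
    "creator": "Example Designer",           # name of the creator
    "date": "2023-06-08",                    # when the image was produced
    "tools": ["Adobe Firefly"],              # tools used to create the image
    "ai_generated": True,                    # transparency flag for AI use
    "edits": ["crop", "colour adjustment"],  # edits made to the image
}

def used_generative_ai(cred: dict) -> bool:
    """Check whether a credential record flags generative AI use."""
    return bool(cred.get("ai_generated", False))

print(used_generative_ai(credential))  # True
```

Because the tag travels with the content wherever it is published or stored, a check like the one above could in principle be run at any point in the content’s lifecycle.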
Adobe also said that it plans to enable businesses to custom train Firefly with their own branded assets, generating content in the brand’s own style and brand language.
Users will be able to access Firefly through the standalone Firefly application, Adobe Express and Creative Cloud.
The product, launched at the Adobe Summit EMEA 2023 in London, a ‘Digital Experience Conference’, is “designed to address the surging demand of digital content at scale and help enterprises streamline and accelerate content creation while optimising costs.”
The company said that the tool “enables every employee across an organisation, at any creative skill level, to use Firefly to generate ready-to-share content that can be edited in Adobe Express or Creative Cloud.”
AI regulations on the horizon?
The issue of IP protection in AI-generated content is increasingly pertinent, seen in the number of high-profile lawsuits cropping up—and with more expected.
Three artists recently sued three AI platforms, Stability AI, Midjourney and DeviantArt, for alleged copyright infringement.
In the same month, the US Supreme Court (SCOTUS) denied Stephen Thaler’s challenge to a lower court’s ruling, drawing a line under the scientist’s long-running battle to name his AI machine as an inventor, at least in the US, and highlighting the unresolved boundary between human and AI inventorship.
And prior to that, the US Copyright Office (USCO) partially revoked the copyright registration for a graphic novel that used Midjourney to generate the book’s images.
Meanwhile, a Texas judge issued an order this month requiring that all briefs submitted to his court must not have been created using generative AI tools, adding that platforms such as ChatGPT “make stuff up” and are “unbound by any sense of duty, honour, or justice”.
Political momentum
Momentum is gathering politically for tighter controls of AI, with G7 leaders meeting at the Hiroshima Summit last month to discuss global regulation of AI to make the technology “trustworthy”.
The issue is proving to be divisive. The view of a former USCO general counsel, that the use of copyrighted material to train large language models can be deemed fair use, was recently challenged by another ex-general counsel.
Earlier this year, the USCO issued guidance on copyright for works of art created using AI, and US Patent and Trademark Office (USPTO) director Kathi Vidal announced at the International Trademark Association’s (INTA’s) annual meeting in Singapore that her office “would love to use generative AI but we need to make sure it's responsible before we bring it to our door”.
The UK has promised to offer clarity over rules for generative AI companies, with Prime Minister Rishi Sunak insisting that AI should be introduced “safely and securely with guard rails in place”.
The EU is inching closer to passing the AI Act, thought to be the first law of its kind in the world, while EU tech chief Margrethe Vestager said at a news conference on May 31 that she believed a draft code of conduct on AI could be drawn up within weeks, according to Reuters.
Australia announced last Thursday (June 1) that it plans to regulate AI in order to ensure the tech is used safely and responsibly.
And the Japanese government’s newly formed council on AI strategy has raised concerns about the lack of regulation for AI, including the risk of copyright infringement that the technology poses.
WIPR has contacted Adobe for comment but had not received a response at the time of publication.