artificial intelligence/AI

Data Mining for AI Systems Training Permitted Under German Law

In a landmark decision, a German district court recently held that copying images to create a data set that can potentially be used for training generative artificial intelligence (AI) systems does not infringe German copyright law. Robert Kneschke v. Large Scale Artificial Intelligence Open Network, Case No. GRUR-RS 2024, 25458 (Hamburg District Court Sept. 27, 2024).

The nonprofit Large Scale Artificial Intelligence Open Network (LAION) created a data set containing 5.85 billion image-text pairs publicly available on the internet. This data set can be used to train generative AI systems. To create the data set, LAION accessed a preexisting data set with uniform resource locators (URLs) referencing images and their descriptions. LAION extracted the URLs and downloaded the referenced images, including a copyrighted work by photographer Robert Kneschke, even though a reservation of use against web scraping had been declared on a subpage of the source website. LAION then analyzed the image descriptions with a software application, which excluded image-text pairs whose text and image content did not sufficiently match. LAION added only the validated image-text pairs to its data set.
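The decision does not describe LAION’s filtering software in technical detail, but the validation step it recounts (keeping an image-text pair only if the description sufficiently matches the image) is commonly implemented as an embedding-similarity check. The sketch below is purely illustrative and is not LAION’s actual pipeline; it assumes an open-source CLIP model from the Hugging Face transformers library, a hypothetical candidate_pairs list of (image URL, caption) tuples, and an arbitrary similarity threshold.

    # Illustrative only -- not LAION's pipeline. Assumes the open-source
    # "transformers" CLIP model; candidate_pairs and the threshold are hypothetical.
    from io import BytesIO

    import requests
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    candidate_pairs: list[tuple[str, str]] = []  # hypothetical (image_url, caption) tuples

    def keep_pair(image_url: str, caption: str, threshold: float = 0.3) -> bool:
        """Keep an image-text pair only if the caption sufficiently matches the image."""
        image = Image.open(BytesIO(requests.get(image_url, timeout=10).content)).convert("RGB")
        inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
        with torch.no_grad():
            out = model(**inputs)
        # Cosine similarity between the image embedding and the caption embedding.
        similarity = torch.nn.functional.cosine_similarity(out.image_embeds, out.text_embeds).item()
        return similarity >= threshold

    validated = [(url, text) for url, text in candidate_pairs if keep_pair(url, text)]

A higher threshold yields a smaller but more tightly matched data set. As discussed below, the court treated this kind of matching analysis as analysis for the purpose of obtaining information, i.e., data mining.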

Robert Kneschke claimed copyright infringement based on LAION’s download of his images.

The district court explained that LAION’s mere downloading of Kneschke’s images did not infringe his right of reproduction under German copyright law. The district court further held that LAION’s actions were justified under, and in compliance with, Section 60d(1) of the German Act on Copyright and Related Rights (UrhG), a scientific research exception.

Section 60d(1) authorizes reproductions for text and data mining for purposes of scientific research by research organizations. The district court clarified that the creation of the data set was data mining, even if the purpose of the creation was AI training: analyzing an image to compare it with a preexisting description is analysis for the purpose of obtaining information. The district court held that even the creation of the data set, which could form the basis for training AI systems, should be regarded as serving a scientific purpose (i.e., activity in pursuit of new knowledge, irrespective of an immediate knowledge gain or subsequent research success), because creating the data set is a fundamental step toward later using it to gain knowledge. Of note, the data set was published free of charge and thus also made available to researchers involved in AI. According to the district court, because the training and development of AI systems (even by commercial enterprises) still constitutes scientific research, it was irrelevant that the data set could additionally be used by commercial enterprises to train or develop their AI systems.

Although not legally relevant to the outcome, the district court considered the reservation of use declared in natural language (English) on a subpage to be machine-readable and therefore effective.

Practice Note: This judgment will have far-reaching implications for the use of copyright as a barrier to training AI systems.





NO FAKES Act Would Create Individual Property Right to Control Digital Replicas

On July 31, 2024, a bipartisan group of US senators introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act of 2024 to protect the voice and visual likeness rights of individuals from unauthorized use in the form of digital replicas, including digital replicas created by generative artificial intelligence (AI). The bill was introduced by Senators Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN) and Thom Tillis (R-NC) and follows a discussion draft released in October 2023. The press release from Senator Coons’ office makes note of the many organizations that support the proposed legislation and includes quotes from representatives of SAG-AFTRA, the Recording Industry Association of America, the Motion Picture Association, OpenAI, IBM and Creative Artists Agency.

Designed to protect all individuals (not just celebrities), the bill defines a digital replica as a newly created, computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual and that is embodied in a sound recording, image, audiovisual work or transmission in which the actual individual did not perform or appear (or a version of such a work in which the fundamental character of the performance or appearance has been materially altered). The bill would grant each individual or rights holder the right to authorize the use of their voice or visual likeness in a digital replica and characterizes that right as a property right. The bill also would establish the characteristics, requirements and duration of the license rights that can be granted in a digital replica. The right would not expire upon the death of the individual and would be transferable and licensable, subject to certain time limitations on the post-mortem right and registration requirements with the Register of Copyrights.

The bill would create a civil cause of action for a rights holder against any person that produces or makes available to the public an unauthorized digital replica and would provide for injunctive relief, actual or statutory damages, punitive damages and attorneys’ fees. There would be a limitations period, however, and any civil action would have to be commenced no later than three years after the date on which a rights holder discovered – or with due diligence should have discovered – the violation at issue. The bill provides certain exceptions and safe harbors for the production or use of digital replicas in news, public affairs, sports, documentaries, commentary, criticism, scholarship, satire or parody, or for online services that remove or disable access to unauthorized digital replicas upon receiving a notification from the rights holder.

The bill would preempt any cause of action under state law for the protection of voice and visual likeness rights in connection with a digital replica in an expressive work, except for certain existing state statutes or common law and for state statutes regulating sexually explicit or election-related digital replicas.

On August 5, 2024, the US Patent & Trademark Office hosted [...]


AI Takeover: PTO Issues More Patent Eligibility Guidance for AI Inventions

The US Patent & Trademark Office (PTO) issued a 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence, which focuses on subject matter eligibility for artificial intelligence (AI)-based inventions. 89 Fed. Reg. 58128 (July 17, 2024).

The new guidance is part of the PTO’s ongoing efforts since 2019 to provide clarity on the issue of subject matter eligibility under 35 U.S.C. § 101 and to promote responsible innovation, competition and collaboration in AI technology development as espoused in the Biden administration’s Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The guidance follows on the heels of the PTO’s recently issued Guidance on Use of AI-Based Tools in Practice Before the PTO and Inventorship Guidance for AI-Assisted Inventions.

The new guidance aims to assist PTO examiners, patent practitioners and stakeholders in evaluating the subject matter eligibility of patent claims involving AI technology. The guidance includes three main sections:

  • Section I provides background on issues concerning patentability of AI inventions.
  • Section II provides a general overview of the PTO’s patent subject matter eligibility guidance developed over the past five years.
  • Section III provides an update to certain areas of the guidance applicable to AI inventions.

As in prior subject matter eligibility updates, the guidance document’s analysis focuses on the Alice two-step framework: first, an evaluation of whether a claim is directed to a judicial exception (i.e., an abstract idea, natural phenomenon or law of nature); and, if so, an evaluation of whether the claim as a whole integrates the judicial exception into a practical application and/or recites additional elements that amount to significantly more than the judicial exception itself. The guidance highlights several relevant recent Federal Circuit cases and is accompanied by three new examples with hypothetical patent claims to assist PTO examiners in analyzing patent claim eligibility under 35 U.S.C. § 101.

The PTO requests written comments on the guidance, which may be submitted through the Federal eRulemaking Portal by September 16, 2024. If anything can be gleaned from the guidance and the current state of patent eligibility for AI inventions, it is that the topic will remain highly controversial and heavily debated.





PTO Reopens Comment Period for AI Inventorship Guidance

The US Patent & Trademark Office (PTO) reopened and extended until June 20, 2024, the period for public comment on the guidance regarding inventorship in applications involving artificial intelligence (AI)-assisted inventions. The guidance was published on February 13, 2024, at 89 Fed. Reg. 10043. The PTO will also treat as timely any comments received between May 13, 2024, and the notice’s June 6, 2024, publication date.

Comments on the inventorship guidance must be submitted via the Federal eRulemaking Portal.

For more information, see our previous report on the February 13 PTO notice and related examination guidance.





Senate Policy Roadmap Steers Generative AI Toward Transparency

In May 2024, the Bipartisan Senate AI Working Group released a roadmap to guide artificial intelligence (AI) policy in several sectors of the US economy, including intellectual property (IP). The group, which includes Senate Majority Leader Chuck Schumer (D-NY) and Senators Mike Rounds (R-SD), Martin Heinrich (D-NM) and Todd Young (R-IN), acknowledged the competing interests of positioning the United States as a global leader in AI innovation while also protecting against copyright infringement and deepfake replicas. According to the Working Group, a careful balance can be achieved by establishing two requirements for generative AI systems: transparency and explainability.

Under the current regime, AI inventors may hesitate to reveal the datasets used to train their models or to explain the software behind their programs. Their reluctance stems from a desire to avoid potential liability for copyright infringement, which may arise when programmers train AI systems with copyrighted content (although courts have yet to determine whether doing so constitutes noninfringing fair use). Such secrecy leaves artists, musicians and authors without credit for their works and leaves inventors without open-source models for improving future AI inventions. The Working Group proposed shielding AI inventors from copyright infringement liability while simultaneously requiring them to disclose the material on which their generative models are trained. Such transparency would provide much-needed acknowledgment and credit to holders of copyrights on content used to train the generative AI models, according to the Working Group. Although attributing credit does not absolve an alleged infringer of liability under the current legal framework, such a disclosure (even without a legislative safe harbor) may promote a judicial finding of fair use. The Working Group also identified the potential for a compulsory licensing scheme to compensate those whose work is used to improve generative AI models.

The roadmap also recommended a mechanism for protecting against AI-generated deepfakes. Under the Lanham Act, people receive protection against the use of their name, image and likeness for false endorsement or sponsorship of goods and services. But deepfakes often escape liability because they misrepresent individuals in humorous or salacious ways without reference to goods or services. The Working Group advised Congress to consider legislation that protects against deepfakes in a manner consistent with the First Amendment. Deepfake categories of particular concern included “non-consensual distribution of intimate images,” fraud and other deepfakes with decidedly “negative” outcomes for the person being mimicked.

If Congress legislates in accordance with the roadmap, the transparency and explainability requirements for generative AI could affect IP law by creating a safe harbor from copyright infringement liability. Similarly, an individual’s name, image, likeness and voice could emerge as a new form of IP protectable against deepfakes.

Nick DiRoberto, a summer associate in the Washington, DC, office, also contributed to this blog post.





New Guidance Addresses Use of AI Systems, Tools in Practice Before the PTO

The US Patent & Trademark Office (PTO) issued new guidance on the use of artificial intelligence (AI) tools in practice before the PTO. The new guidance is designed to promote responsible use of AI tools and provide suggestions for protecting practitioners and clients from misuse or harm resulting from their use. This guidance comes on the heels of a recent memorandum to both the Trademark Trial & Appeal Board and the Patent Trial & Appeal Board concerning the applicability of existing regulations addressing potential misuse of AI, and recent guidance addressing the use of AI in the context of inventorship.

Patent practitioners are increasingly using AI-based systems and tools to research prior art, automate patent application review, assist with claim charting and document review, and gain insight into examiner behavior. The PTO’s support for AI use is reflected in patent examiners’ utilization of several different AI-enabled tools for conducting prior art searches. However, because AI tools are not perfect, their use can lead patent practitioners into inadvertent misuse or misconduct. Therefore, the PTO’s new guidance discusses the legal and ethical implications of AI use in the patent system and provides guidelines for mitigating the risks presented by AI tools.

The guidance discusses the PTO’s existing rules and policies for consideration when applying AI tools, including duty of candor, signature requirement and corresponding certifications, confidentiality of information, foreign filing licenses and export regulations, electronic systems’ policies and duties owed to clients. The guidance also discusses the applicability of these rules and policies with respect to the use of AI tools in the context of document drafting, submissions, and correspondence with the PTO; filing documents with the PTO; accessing PTO IT systems; confidentiality and national security; and fraud and intentional misconduct.

AI tools have been developed for the intellectual property industry to facilitate drafting technical specifications, generating responses to PTO office actions, writing and responding to briefs, and drafting patent claims. While the use of these tools is not prohibited and there is no obligation to disclose their use unless specifically requested, the guidance emphasizes the need for patent practitioners to carefully review any AI-generated output before signing off on any documents or statements made to the PTO. For example, when using AI tools, practitioners should make a reasonable inquiry to confirm that all facts presented have evidentiary support, that all citations to case law and other references are accurately presented, and that all arguments are legally warranted. Any errors or omissions generated by AI in a document must be corrected. Likewise, trademark and Board submissions generated or assisted by AI must be reviewed to ensure that all facts and statements are accurate and have evidentiary support.

While AI tools can be used to assist or automate the preparation and filing of documents with the PTO, care must be taken to ensure that no PTO rules or policies are violated and that documents are reviewed and signed by a person, not an AI tool or non-natural person. AI [...]


PTO on AI Inventorship: Will the Real Natural Human Inventors Please Stand Up?

On February 13, 2024, the US Patent & Trademark Office (PTO) issued a notice with examination guidance and request for comment regarding inventorship in applications involving artificial intelligence (AI)-assisted inventions. The guidance reinforces the patentability of AI-assisted inventions and sets forth preliminary guidelines for determining inventorship with a focus on human contributions in this process.

The PTO released the guidance in response to President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 30, 2023). The executive order mandated that the PTO, within 120 days, present “guidance to USPTO patent examiners and applicants addressing inventorship and the use of AI, including generative AI, in the inventive process, including illustrative examples in which AI systems play different roles in inventive processes and how, in each example, inventorship issues ought to be analyzed.”

As in any inventorship determination for non-AI-generated inventions, “AI-assisted inventions must name the natural person(s) who significantly contributed to the invention as the inventor or joint inventor, even if an AI system may have been instrumental in the creation of the claimed invention.” As is the case for all inventions, the threshold question for inventorship in AI-assisted inventions is who made a “significant contribution” to the conception of at least one claim of the patent. For this evaluation, the inventorship factors articulated by the US Court of Appeals for the Federal Circuit in Pannu v. Iolab Corp. (Fed. Cir. 1998) (the Pannu factors) should be considered.

With specific reference to AI-assisted inventions, the notice provides a non-exhaustive list of principles based on the Pannu factors for AI inventorship determinations:

  • Use of AI systems is not a barrier to inventorship. Use of an AI system does not negate a natural person making a significant contribution to an AI-assisted invention. To be an inventor, a natural person must have significantly contributed to each claim in a patent application or patent.
  • Recognizing a problem or obtaining a solution may be insufficient. Merely recognizing a problem, having a general goal or plan to pursue, or simply obtaining a solution from an AI system’s output does not rise to the level of conception. However, the way in which a person constructs a prompt in view of the problem to elicit a particular solution may be important in qualifying that person as an inventor.
  • Reduction to practice alone is insufficient. Reducing an invention to practice alone does not constitute a “significant contribution,” nor does the mere recognition and appreciation of the AI system output rise to the level of inventorship, especially where the output would be apparent to those of ordinary skill. By contrast, a significant contribution may exist where a person makes a significant contribution to the output or conducts a successful experiment from the output to create an invention.
  • Developing an essential building block of an AI system may be sufficient. A person developing an essential building block of an AI system to address a specific problem, where the building block is instrumental in eliciting a solution from the output, may be a proper inventor.
  • “Intellectual domination” over [...]


Deception Inspection: Attorney Faces Discipline for Citing Fake Law

The US Court of Appeals for the Second Circuit referred an attorney for potential further disciplinary measures after the attorney cited a nonexistent case created by ChatGPT. Park v. Kim, Case No. 22-2057 (2d Cir. Jan. 30, 2024) (Parker, Nathan, Merriam, JJ.) (per curiam).

Minhye Park sued David Dennis Kim in an action related to a wage dispute. During the district court proceedings, Park continually and willfully failed to respond to and comply with the district court’s discovery orders. Kim eventually moved to dismiss based on Park’s failure to comply with court orders and discovery obligations. Park opposed. After weighing the requirements of Federal Rules of Civil Procedure 37 and 41(b), the district court concluded that dismissal was appropriate. Park appealed.

The Second Circuit affirmed the dismissal, concluding that Park’s noncompliance amounted to “sustained and willful intransigence in the face of repeated and explicit warnings from the court that the refusal to comply with court orders . . . would result in the dismissal of [the] action.”

Separately, the Second Circuit addressed the conduct of Park’s attorney during the appeal, including a citation to a nonexistent case that was generated using the artificial intelligence (AI) tool ChatGPT. After receiving Park’s reply brief, the Court ordered Park to submit a copy of one of the cited decisions. Park’s attorney responded that she was “unable to furnish a copy of the decision,” explaining that she had difficulty locating a relevant case through traditional legal research tools and therefore used ChatGPT to provide the case caption ultimately cited in the brief.

The Second Circuit found that citation to a nonexistent case suggests conduct that falls below the basic obligations of counsel, and thus referred the attorney to the Court’s Grievance Panel for further investigation and consideration of a referral to the Court’s admission committee. The Court explained that any attorney appearing before it is bound to exercise professional judgment and responsibility, which impose a duty to certify that any papers filed with the court are well grounded in fact and legally tenable. Recognizing that ChatGPT is a significant technological advancement, the Court explained that the use of such tools does not excuse an attorney from separately ensuring that submissions to the Court are accurate or legally tenable. The Court concluded that referral to the Grievance Panel was warranted because the brief presented a false statement of law and the attorney made no inquiry at all, let alone a reasonable inquiry into the validity of the arguments presented. The Court also ordered the attorney to provide a copy of the ruling to her client.

Practice Note: The Second Circuit noted that several courts around the United States have proposed or enacted rules addressing the use of AI tools before a court but explained that such rules are unnecessary to inform attorneys that court submissions should be accurate.





Artificial Inspiration? Style Execution by AI Obviates Human Authorship

The US Copyright Office Review Board rejected a request to register artwork made using an artificial intelligence (AI) painting application, finding that the applicant “exerted insufficient creative control” over the application’s creation of the work. Second Request for Reconsideration for Refusal to Register SURYAST (Copyright Review Board, Dec. 11, 2023) (Wilson, Gen. Counsel; Strong, Associate Reg. of Copyrights; Gray, Asst. Gen. Counsel).

Ankit Sahni filed an application to register a claim for a two-dimensional artwork titled “SURYAST.” The work was generated by inputting a photograph Sahni had taken into an AI painting app called “RAGHAV.” Sahni describes RAGHAV as an “AI-powered tool” that uses machine learning to “generate an image with the same content as a base image, but with the style of a chosen picture.” In this case, Sahni took a photograph of a sunset and applied the style of Vincent van Gogh’s The Starry Night to generate the image at issue:

In the application, Sahni listed himself as the author of “photograph, 2-D artwork” and RAGHAV as the author of “2-D artwork.” Because the application identified an AI app as an author, the Copyright Office registration specialist assigned to the application requested additional information about Sahni’s use of RAGHAV in the creation of the work. After considering the additional information, the Copyright Office refused to register the work because it “lack[ed] the human authorship necessary to support a copyright claim.”

Sahni requested that the Copyright Office reconsider its initial refusal to register the work, arguing that “the human authorship requirement does not and cannot mean a work must be created entirely by a human author.” Sahni noted that in this case, the AI required several human inputs such as selecting and creating the base image, selecting the style image and selecting a variable value that determined the strength of the style transfer. He argued that the decisions he made in generating SURYAST were sufficient to make him the author of the work, which meant that the work was the product of human authorship and therefore eligible for copyright protection. Sahni minimized the role of RAGHAV, calling it an “assistive tool” that merely “mechanically” applies “colors, shapes and styles, as directed.”
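For readers unfamiliar with style-transfer tools, the sketch below illustrates, in simplified form, how the three human inputs Sahni describes (a base image, a style image and a strength value) typically enter a neural style-transfer pipeline of the kind popularized by Gatys et al. It is a generic, assumption-laden sketch, not the RAGHAV app’s actual implementation; the VGG-19 model, layer choices and optimizer settings shown are all assumptions.

    # Simplified Gatys-style neural style transfer -- NOT the RAGHAV app's code.
    # The VGG-19 backbone, layer indices and optimizer settings are assumptions.
    import torch
    import torch.nn.functional as F
    from PIL import Image
    from torchvision import models, transforms

    device = "cuda" if torch.cuda.is_available() else "cpu"
    vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    to_tensor = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])

    def load(path: str) -> torch.Tensor:
        return to_tensor(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

    def features(x: torch.Tensor, layers=(1, 6, 11, 20, 29)) -> list[torch.Tensor]:
        feats = []
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in layers:
                feats.append(x)
        return feats

    def gram(f: torch.Tensor) -> torch.Tensor:
        _, c, h, w = f.shape
        f = f.view(c, h * w)
        return (f @ f.t()) / (c * h * w)

    def stylize(base_path: str, style_path: str, style_strength: float, steps: int = 200) -> torch.Tensor:
        """Three human inputs: a base image, a style image and a style-strength value."""
        content, style = load(base_path), load(style_path)
        content_targets = [f.detach() for f in features(content)]
        style_targets = [gram(f).detach() for f in features(style)]
        output = content.clone().requires_grad_(True)
        optimizer = torch.optim.Adam([output], lr=0.02)
        for _ in range(steps):
            feats = features(output)
            content_loss = F.mse_loss(feats[-1], content_targets[-1])
            style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(feats, style_targets))
            # The strength value weights how strongly the style image dominates the result.
            loss = content_loss + style_strength * style_loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        return output.detach().clamp(0, 1)

For example, calling stylize("sunset.jpg", "starry_night.jpg", style_strength=1e4) would nudge a sunset photograph toward the palette and brushwork of the chosen style image (the file names and weight here are hypothetical).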

The Board disagreed, finding that Sahni’s input to RAGHAV was insufficient to make SURYAST a product of human authorship. The Board reasoned that while Sahni did provide the original image and selected the style and a “variable value determining the amount of style transfer,” Sahni was not actually responsible for “determining how to interpolate the base and style images in accordance with the style transfer value.” Furthermore, Sahni did not control where the stylistic elements would be placed, what elements of the input image would appear in the output or what colors would be applied. The Board [...]


Tragic Ending: Award-Winning AI Artwork Refused Copyright Registration

The US Copyright Office (CO) Review Board rejected a request to register artwork partially generated by artificial intelligence (AI) because the work contains more than a de minimis amount of content generated by AI and the applicant was unwilling to disclaim the AI-generated material. Second Request for Reconsideration for Refusal to Register Théâtre D’opéra Spatial (Copyright Review Board, Sept. 5, 2023) (S. Wilson, Gen. Counsel; M. Strong, Associate Reg. of Copyrights; J. Rubel, Asst. Gen. Counsel).

In 2022, Jason Allen filed an application to register a copyright for a work named “Théâtre D’opéra Spatial,” reproduced below.

The artwork garnered national attention in 2022 for being the first AI-generated image to win the Colorado State Fair’s annual fine art competition. The examiner assigned to the application requested information about Allen’s use of Midjourney, a text-to-picture AI service, in the creation of the work. Allen explained that he “input numerous revisions and text prompts at least 624 times to arrive at the initial version of the image.” He went on to state that after Midjourney created the initial version of the work, he used Adobe Photoshop to remove flaws and create new visual content and used Gigapixel AI to “upscale” the image, increasing its resolution and size. As a result of these disclosures, the examiner requested that the features of the work generated by Midjourney be excluded from the copyright claim. Allen declined to exclude the AI-generated portions. The CO therefore refused to register the claim because the deposit for the work did not “fix only [Mr. Allen’s] alleged authorship” but instead included “inextricably merged, inseparable contributions” from both Allen and Midjourney. Allen asked the CO to reconsider the denial.

The CO upheld the denial of registration, finding that the work contained more than a de minimis amount of AI-generated content, which must be disclaimed in a registration application. The CO explained that when analyzing AI-generated material, it must determine when a human user can be considered the “creator” of AI-generated output. If all of a work’s “traditional elements of authorship” were produced by a machine, the work lacks human authorship and the CO will not register it. If, however, a work containing AI-generated material also contains sufficient human authorship to support a claim to copyright, then the CO will register the human’s contributions.

Applying these principles to the work, the CO analyzed the circumstances of its creation, including Allen’s use of an AI tool. Allen argued that his use of Midjourney allowed him to claim authorship of the image generated by the service because he provided “creative input” when he “entered a series of prompts, adjusted the scene, selected portions to focus on, and dictated the tone of the image.” The CO disagreed, finding that these actions do not make Allen the author of the Midjourney-created image because his sole contribution was inputting the text prompt that produced it.

The CO [...]

