Written by Raquel Acosta
Edited by Adam Lewin
Artificial intelligence (“AI”) is, simply put, “the science and engineering of making intelligent machines.” Quintessential examples of artificially intelligent machines include HAL 9000 from 2001: A Space Odyssey or the robots from Isaac Asimov’s I, Robot collection of short stories. Many of the things we think of when we think of true artificial intelligence — such as understanding nuanced language, solving novel problems, or learning through experience — are only just becoming real phenomena. While self-aware robots remain within the realm of fiction, developments in the field of artificial intelligence are advancing our understanding of what computers are and what they are capable of being.
Increasingly sophisticated computer programs call into question some of the foundational assumptions within the intellectual property (“IP”) regime by autonomously producing works which, if executed by a human author, would qualify for copyright protection. Copyright is intended to “promote the Progress of Science and useful Arts” and grants a limited monopoly to authors over the production and dissemination of their creative expression, with the aim of incentivizing more creative work than the monopoly inhibits by locking down creative capital. Machines have no intention of creating novel works, nor do they consider incentives as such. With our current technology, only humans can make genuinely creative choices. It remains an open question as to whom, if anyone, would get the rights if all the innovative or novel contributions were the work of a machine. This Comment discusses innovations in AI technology that possess a high enough degree of autonomous computational creativity to require re-examination of copyright standards.
II. A Brief History of AI
There have been different philosophies on what true artificial intelligence would be, yet only recently has advanced AI technology begun to call legal assumptions regarding human authorship into question. Early research into AI encountered difficulties that arose partly due to the implicit notion that to be “artificially intelligent” a program must process information such that the result parallels how an intelligent person would respond to similar input. Due to this reliance on producing “human-like” results, many early AI projects aimed to produce machines that could perform tasks requiring human-like creativity. However, artificial intelligence researchers have different perspectives on what it means for a machine to be “creative.” In many ways, computational creativity involves a machine’s ability to take in input and process it in a way that results in a novel combination of pre-existing ideas and information.
It is important to differentiate between strong AI — which requires innovative thinking and logical reasoning abilities — and weak AI, which merely creates a program tailored to the narrow function required. These different traditions have different legal implications. Weak AI merely requires that a machine act human, so a programmer would have direct control over the heuristics governing the form of the machine’s output. While the programmers or users of weak AI machines use the machine as a tool, strong AI aims to get a machine to think for itself. Randomness, autonomy, and machine learning are built into strong AI systems, so the human connection is much more attenuated. As such, only the underlying software, rather than the output, is the result of human ingenuity and would be protectable under traditional copyright law.
a. The Turing Tradition and Weak AI
In 1950, Alan Turing — perhaps the most prominent figure in the history of AI — proposed what became known as the “Turing test” to evaluate a machine’s ability to appear human. Participants would converse with either a machine or a human in a text-only format, and would then indicate whether they believed they were communicating with a human or with a machine. Turing theorized that an AI machine could be considered “intelligent” if it generated responses indistinguishable from a real human’s. Turing’s functionalist approach triggered a series of “chatterbots,” or programs designed to interact with humans in a realistic way. Chatterbots track innovations in natural language processing (“NLP”), and while many of the earlier chatterbots were in the tradition of weak AI, recent examples often incorporate machine learning (“ML”) techniques.
IBM’s Watson is, at present, the most highly evolved AI developed from the Turing tradition. Watson took advantage of cutting-edge NLP technology to win Jeopardy! against two reigning champions. Watson utilized ML techniques but only innovated along constricted parameters to achieve a narrowly-defined goal. Each question triggered a massive amount of parallel computing as Watson sorted through 500 gigabytes (or about a million books) of content per second. While this is an impressive technological feat, the nuances of human culture have as yet evaded quantification — when Watson was off in its answers, it tended to be drastically off. So for all Watson’s massive computational ability, it was still in the tradition of weak AI and specifically tailored to perform the task at hand.
b. Machine Learning and Strong AI
A key development within AI programs is the incorporation of dynamic processes we associate with intelligent life. In a shift away from weak AI, which focused on producing human-like output, some projects have begun programming in elements inspired by biological functions. Particularly salient are algorithms inspired by genetics and network structures based on neurological connections. Evolutionary algorithms, of which genetic algorithms are a subset, generate solutions to optimization problems using strategies such as reproduction, mutation, and inheritance.
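By way of illustration, the strategies just described — reproduction, mutation, and inheritance — can be sketched in a few lines of code. The following is a minimal, illustrative genetic algorithm that evolves bit-strings toward a toy objective (maximizing the number of 1-bits); the problem, parameter values, and function names are assumptions chosen for clarity, not any particular researcher’s design.

```python
import random

def fitness(bits):
    # Toy objective: count of 1-bits. A real system would score a
    # candidate design, melody, molecule, etc.
    return sum(bits)

def evolve(pop_size=20, length=16, generations=50, mutation_rate=0.05):
    # Initial population of random bit-string "candidates."
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives to reproduce (inheritance).
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]  # reproduction via crossover
            # Mutation: each bit occasionally flips at random.
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Even this trivial example exhibits the quality that matters for the legal analysis below: the specific solution the program converges on is determined by random variation and selection, not by any creative choice of the programmer.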
Artificial neural networks were inspired by the inner workings of the brain and are often adaptive systems that change their structure in response to the information flowing through them. Neural networks are generally “trained” by being provided with paradigmatic examples from the domain of interest — such as art, science, or technology. The network can learn by increasing or decreasing the dominance of any given neural node depending on the desirability or correctness of its output, just as neurons within a human brain reinforce commonly used neurological pathways but prune undesirable connections.
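The training dynamic just described, strengthening useful connections and weakening undesirable ones, can be illustrated with a single artificial neuron (a perceptron), the simplest building block of a neural network. In this sketch the “paradigmatic examples” are the truth table of the logical AND function; the learning rate, epoch count, and names are illustrative assumptions.

```python
import random

def train_neuron(examples, epochs=100, lr=0.1):
    # A single artificial neuron: weighted inputs summed against a threshold.
    # Training nudges each connection weight up or down depending on whether
    # the neuron's output was correct, loosely analogous to reinforcing or
    # pruning neural pathways.
    n = len(examples[0][0])
    weights = [random.uniform(-0.5, 0.5) for _ in range(n)]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            # Correct output (error 0) leaves the weights alone; otherwise
            # each contributing connection is strengthened or weakened.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Paradigmatic training examples: the logical AND function.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_neuron(data)
```

After training, the final weights were “learned” from the examples rather than written by the programmer — the same attenuation of human control that, scaled up, raises the authorship questions discussed below.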
Using neural networks, Stephen Thaler built a “Creativity Machine” in 1994 that autonomously produced patentable inventions and composed music. The Creativity Machine consisted of two interconnected neural networks. One network had bits of information it had learned during training randomly deleted to generate some internal static, or “noise.” The noise allowed it to generate novel output by filling in the missing information with patterns it extrapolated from training data. The other network was used to analyze the output and adjust the parameters of the first network to optimize performance. If the first network was too noisy, then it would generate output of dubious usefulness, yet if it was too constrained, it would not generate much at all.
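The two-network loop can be sketched schematically: a “generator” whose learned associations are partially deleted at random so that it recombines what survives into novel candidates, and a “critic” that scores the candidates and dials the noise level up or down. This is a conceptual sketch only — the memory contents, scoring rule, and noise adjustment are illustrative assumptions, not Thaler’s actual architecture.

```python
import random

def generate(memory, noise_level):
    # Randomly "delete" learned items (internal static, or noise) and fill
    # the gap by recombining the patterns that survive.
    kept = [m for m in memory if random.random() > noise_level]
    if len(kept) < 2:
        return None  # too noisy: nothing coherent survives
    a, b = random.sample(kept, 2)
    return a + "-" + b  # novel output extrapolated from surviving patterns

def critic(candidate, seen):
    # Reward novelty (output not already known) and reject degenerate output.
    return candidate is not None and candidate not in seen

memory = ["alpha", "beta", "gamma", "delta"]  # stand-in for trained knowledge
seen = set(memory)
noise = 0.5
novel = []
for _ in range(100):
    out = generate(memory, noise)
    if critic(out, seen):
        novel.append(out)
        seen.add(out)
    else:
        # Output was unusable, so the critic constrains the generator.
        noise = max(0.1, noise - 0.05)
```

The sketch captures the trade-off described above: too much noise yields nothing usable, while too little yields nothing new.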
Early generations of the Creativity Machine created novel chemical patents and poetry. More recently, creativity machines have been used by the US military to design new weapons. The latest versions have incorporated self-training artificial neural network objects that essentially allow the machines to “dream” in a virtual reality, run simulations, and exercise crucial skills that they can perfect in an ongoing bootstrapping cycle. While early creativity machines involved a high degree of tailored training, more recent examples can learn and train themselves with little to no human input beyond the initial engineering. As such, there are instances when there are no creative human choices directly involved in the “creative” output of a fully autonomous machine, even if humans built the machine itself.
III. Origin of Creativity – User, Programmer, or Machine
The crucial question appears to be whether the “work” is basically one of human authorship, with the computer merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by a man but by a machine.
In works produced in a mechanical medium, “there is broad scope for copyright . . . because ‘a very modest expression of personality will constitute originality.’” While some traditional AI frameworks, such as those following the Turing tradition, result in machines which are little more than tools or puppets, dynamic and self-regulating systems are arising which can operate without the need for human intervention. To qualify for copyright protection, a work must possess a “modicum of creativity” and be an “original work of authorship.” Ideas are held to be beyond the scope of copyright, as are works which result from random or mechanical processes.
In 1986, computers were officially determined by the Congressional Office of Technology Assessment (“OTA”) to be more than “inert tools of creation.” Yet difficulties arise when attempting to determine the boundary line between mechanical or random processes and instances in which the slight intervention of a human agent results in the production of a copyrightable work. The OTA posited that the question is open as to whether computers are unlike other tools of creation in that they are capable of being co-creators. Some degree of intentionality is necessary to trace a product to its human author, and, when computational creativity is involved, the structure of the underlying software programming determines how attenuated that chain of causality is.
Different techniques for generating a desired output have different copyright implications. A programmer who creates and trains a machine receives copyright protection in the underlying software, yet the extent to which a programmer must contribute to mechanically-produced output in order to claim copyright protection is unclear. For tools stemming from weak AI traditions, such as computer aided design, authorship is clear. Computer programmers design the software for a specific purpose and receive copyright protection in the program itself. If they made some sort of novel invention in the creation of the program, such as developing a predictive algorithm that allows a two-dimensional image to be easily converted to a 3D image, they may apply for patent protection of that algorithm. The end user purchases the software to use as a tool, and receives copyright in the works created with it. Were this not so, Microsoft could claim copyright in works produced in Word, Adobe in Photoshop, and so on. Projects stemming from strong AI types of endeavors are more ambiguous, and the programmer is more likely to claim a proprietary interest. For example, the AI machine RACTER allegedly wrote the book The Policeman’s Beard Is Half Constructed, yet with no intervening user, William Chamberlain — the researcher who programmed and trained RACTER — has claimed copyright in it.
Advances in AI technologies are making machine authorship a reality, yet the legal standards that govern creative innovation do not take into account non-human innovation. Autonomous systems and learning neural networks do not resemble the self-aware robots that were predicted at the genesis of the AI movement. Research into AI has led to machine learning techniques and autonomous computing systems where human authorship becomes attenuated or nonexistent. Thaler’s Creativity Machine, for example, is capable of independently learning fields and generating novel ideas. Yet copyright law excludes works that result from purely mechanized or random processes, so some of the output of computer programs will necessarily straddle the boundary between what is copyrightable and what is not.
Courts have developed legal tests for examining various aspects of a work to determine what is copyrightable. The end content must be disentangled from all these independently protectable components to see what creative content is left, and whether authorship rights are warranted. Even the most advanced AI program can be reduced to its underlying software and hardware components, both of which have independent claims to IP protection. The 1992 Second Circuit case Computer Associates International, Inc. v. Altai, Inc. is informative on this matter. The Altai court addressed the issue of whether copyright law protects non-literal elements of software and used the Abstraction-Filtration-Comparison (“AFC”) test to determine whether infringement had occurred. The AFC test lays out the steps to follow when extricating copyrightable expression from uncopyrightable elements of the same work. The abstraction step addresses the idea/expression dichotomy by abstracting the program into separate functional layers and excluding aspects of the programs which are uncopyrightable “ideas” rather than protected expression. The filtration step filters out: 1) elements dictated by efficiency, where, as there may only be a limited number of ways of expressing an idea, protection would be functionally equivalent to granting a monopoly over an idea; 2) elements dictated by external factors, including elements necessary or standard to the expression; and 3) elements taken from the public domain. After all uncopyrightable elements have been removed, the comparison step compares the remaining content to the defendant’s work to see what potentially infringing content is left.
It would be useful to have a modified Abstraction-Filtration-Comparison test to extract copyrightable content from uncopyrightable, purely-mechanized works. Expression remaining after the abstraction step would be filtered with special attention paid to elements which directly and necessarily result from the structure of the AI machine. For example, the work product of an autonomously functioning Creativity Machine which innovated using an open, Internet-based body of knowledge would be subject to a higher degree of scrutiny than a closed system, such as RACTER, where there is a programmer-user who narrowly tailors the machine’s output. In the first case, the creative output is a product of the nature of the machine, so copyrighting the content would be like claiming a proprietary interest in the information it gleaned from sifting through the Internet. One may claim that a creative choice was made by some human when the machine was initially pointed toward a certain subject area, but data miners similarly make decisions when directing their web crawlers to gather a certain data set. Information, or data, is not copyrightable. In a more constrained case, such as that of RACTER, the programmer-user utilizes the machine as a complex tool and makes many creative decisions during its training. Lastly, the modified AFC test would check any remaining content for evidence of direct human authorship. If the creative contribution from any human author was de minimis, the work would default into the public domain.
Works owing their origins to the machine, where all originality results from the machine’s computational creativity, essentially have no human author. Granting copyright privileges where none are warranted creates unjustifiable barriers to access. In cases where there is no identifiable user, the law must balance the incentives of the programmer or creator against the benefits that the public would derive from being able to freely use the end product. If allowing AI developers to claim copyrights in their machines’ output incentivizes more creative production, legislators should codify this copyright grant in the law. Conversely, if the protection of the machine or its code itself is incentive enough, then works produced by a creative machine ought to flow into the public domain and be fortified against proprietary claims.
 John McCarthy, Basic Questions, What is Artificial Intelligence?, Stanford U., http://www-formal.stanford.edu/jmc/whatisai/ (revised Nov. 12, 2007).
 Practical applications of artificial intelligence techniques include data mining, automated bots, self-managing systems, as well as computer aided design (“CAD”) or video games.
 U.S. Const. art. I, § 8, cl. 8.
 Dennis S. Karjala, Copyright and Creativity, 15 UCLA Ent. L. Rev. 169, 172–73 (2008).
 See, e.g., Arthur R. Miller, Copyright Protection for Computer Programs, Databases, and Computer-Generated Works: Is Anything New Since CONTU?, 106 Harv. L. Rev. 977, 1073 (1993).
 Bert-Jaap Koops et al., Bridging the Accountability Gap: Rights for New Entities in the Information Society?, 11 Minn. J.L. Sci. & Tech. 497, 549–50 (2010).
 William T. Ralston, Copyright in Computer-Composed Music: Hal Meets Handel, 52 J. Copyright Soc’y U.S.A. 281, 292–93 (2005).
 The ultimate pursuit of strong AI endeavors is to produce a type of “seed AI” that would be able to increase its own intelligence exponentially by redesigning itself. John O. McGinnis, Accelerating AI, 104 Nw. U. L. Rev. 1253, 1256 (2010).
 Alan Turing, Computing Machinery and Intelligence, 59 Mind 433, 433–60 (1950).
 For example, the chatterbot Jabberwacky, which has won multiple prizes over more than a decade, utilized ML techniques to fine-tune responses. icogno, http://www.jabberwacky.com (last visited Feb. 7, 2012).
 In 2011, Watson beat the record-holder for number of wins, and the record-holder for amount of money won. IBM, Watson, http://www-03.ibm.com/innovation/us/watson/index.html (last visited Feb. 7, 2012).
 IBM, Watson and Jeopardy!, http://www.research.ibm.com/deepqa/faq.shtml#20 (last visited Feb. 7, 2012).
 See Jeff Hawkins & Sandra Blakeslee, On Intelligence, 207–10 (2004).
 Simple neural networks consist of three layers — input, hidden, and output — each composed of highly interconnected nodes. Somewhat problematically, the nodes are vertically interconnected but not laterally: input comes in and is processed by the “input neurons,” which filter the information through one or more “hidden unit” neurons; the result bounces against the “output neurons,” which process everything and broadcast it back. More complicated neural networks attempt to allow for more human-like functions by being able to extrapolate from part to whole and consider input over time. Id. at 25–26.
 Neural networks are used in many types of data processing and classification. For example, geneticists train neural networks to predict which genetic sequences are likely to code for proteins, and some spam filters utilize neural networks to maximize accuracy and efficiency. See Sean R. Eddy, What is a hidden Markov Model?, 22 Nature Biotechnology 1315, 1315–16 (2004).
 Ralph D. Clifford, Intellectual Property in the Era of the Creative Computer Program: Will the True Creator Please Stand Up?, 71 Tul. L. Rev. 1675, 1678–79 (1997).
 Imagination Engines, Inc., Robotic Simulation Environments, http://imagination-engines.com/iei_virtual_creative_robots.htm (last visited Feb. 7, 2012).
 Miller, supra note 5, at 1044.
 Bridgeman Art Library, Ltd. v. Corel Corp., 36 F. Supp. 2d 191, 196 (S.D.N.Y. 1999).
 See, e.g., IBM, Autonomic Computing, http://www.research.ibm.com/autonomic/ (last visited Feb. 7, 2012).
 Feist Publ’ns., Inc. v. Rural Tel. Serv. Co., 499 U.S. 340, 346 (1991).
 17 U.S.C. § 102(a).
 17 U.S.C. § 102(b).
 Compendium II of Copyright Office Practices §503.03(a), “Works produced by mechanical processes or random selection without any contribution by a human author are not registrable. . . . Similarly, a work owing its form to the forces of nature and lacking human authorship is not registrable; thus, for example, a piece of driftwood even if polished and mounted is not registrable.”
 U.S. Congress, Office of Technology Assessment, Intellectual Property Rights in an Age of Electronics and Information, OTA-CIT-302 72 (Washington, DC: U.S. Government Printing office, April 1986).
 Andrew J. Wu, From Video Games to Artificial Intelligence: Assigning Copyright Ownership to Works Generated by Increasingly Sophisticated Computer Programs, 25 AIPLA Q.J. 131, 156 (1997).
 Miller, supra note 5, at 1059.
 See Ralston, supra note 7, at 292–93.
 982 F.2d 693 (2d Cir. 1992).
 The test was refined from a Second Circuit decision written by Learned Hand: “Upon any work . . . a great number of patterns of increasing generality will fit equally well, as more and more of the incident is left out. . . . [B]ut there is a point in this series of abstractions where they are no longer protected, since otherwise the author could prevent the use of his ‘ideas,’ to which, apart from their expression, his property is never extended.” Nichols v. Universal Pictures Corp., 45 F.2d 119, 121 (2d Cir. 1930).
 This is due to the merger doctrine which originates in the idea/expression dichotomy established in Baker v. Selden, 101 U.S. 99 (1879); see also CDN Inc. v. Kapes, 197 F.3d 1256, 1261–62 (9th Cir. 1999).
 The scènes à faire doctrine excludes elements standard to a given genre or context, which for computers may consist of such things as standard programming techniques. See Softel, Inc. v. Dragon Medical and Scientific Communications, Inc., 118 F.3d 955 (2d Cir. 1997).
 Int’l News Serv. v. Associated Press, 248 U.S. 215, 250 (1918).
 Feist, 499 U.S. at 363–64.
 Robert C. Matz, Bridgeman Art Library, Ltd. v. Corel Corp., 15 Berkeley Tech. L.J. 3, 22 (2000).