
Towards Principle-Based Policy for U.S. Child Privacy Regulation

Commentary

Nanda Min Htin is an LL.M. candidate in Tech Law and Policy at Georgetown University Law Center. He is also an International Association of Privacy Professionals (IAPP) Young Privacy Professional and a recipient of the IAPP Westin Scholar Award. He holds an LL.B. from Singapore Management University.


Introduction

The Senate Judiciary Committee kicked off 2024 with a bipartisan interrogation of five top executives of the nation’s biggest tech companies on the battleground of child online safety and privacy. The Senators categorically chastised the companies’ leaders for “willfully ignoring the harmful content against children on their platforms” and failing to implement protective measures online.[1] But besides providing a venue for the companies’ claims of having invested billions into safety mechanisms and Mark Zuckerberg’s spectacular apology to parents of online child sexual exploitation victims,[2] the hearing also revealed that the legislators themselves have work to do in pursuit of a clear and effective regulatory approach.

Pertinently, child privacy laws in the U.S. revolve around variations of two controversial requirements: parental consent and age verification.[3] First, prescribing parental consent by default could undermine the child’s entitlement to exercise his or her own rights. Second, blunt age verification laws threaten to chill free speech and anonymity online. Notwithstanding the merits of parental consent and age verification, this article is primarily concerned with how these requirements, and more importantly the underlying policy outcomes, should be communicated to industry. Regulators should focus on principle-based regulation instead of measures that merely “check the box.” This entails respecting children as rights-bearers and facilitating the development of technologies that incorporate privacy by design.[4]

Problems with Parental Consent and Age Verification

Parental Consent Undermines the Need to Treat Children as Rights-Bearers

Relying on parental consent by default for handling children’s data fails to recognize that children are entitled to exercise their own rights. It presumes that children are incapable of doing so and in need of protection by a more “qualified” adult. The Children’s Online Privacy Protection Act (COPPA) has been criticized for enabling paternalistic and authoritarian intervention over the flow of information about children to the public.[5] COPPA’s paternalism stems from its assumption that children are incapable of determining how to safely control their own data.[6] Meanwhile, its authoritarianism stems from the fact that the government authorizes parents to stand between the child and certain information, even when it would be objectively desirable for the child to access that information.[7]

Parental consent has not always been a reliable means of protecting children’s rights. Courts have sometimes refused to recognize consent and liability waivers given by parents in tort cases concerning children’s physical injuries during recreational activities in school.[8] Courts’ reasoning in these cases tends to revolve around “guard[ing] minors against improvident parents”[9] and a reluctance to shield commercial endeavors from liability. In EdTech, these principles apply a fortiori. EdTech services are underpinned by commercial contracts, which necessarily imply a distant relationship between the service providers and the consumers (children and parents).[10] Just as with the familiar recreational activity waivers between parents and schools, parental consent to businesses that harness children’s data should also be questioned.

Parents may not always make decisions in the best interests of their children. They may be the ones exposing their children to online risks in the first place. The social media account “Mom.Uncharted” received viral attention for highlighting parental oversharing, child exploitation, and minor-safety concerns on social media.[11] The movement behind the account calls out parents who post sensational videos of their young children and maintain entire social media accounts on their behalf. These children do not have the means or capacity to consent, and later struggle to grow out of an “invisible audience” phenomenon that makes them feel like others are monitoring and scrutinizing them.[12] In this kind of content, the child becomes an extension of the parent, not an autonomous individual.[13] This is not well-reflected in regulatory design because, as a matter of principle, U.S. laws on child protection in the digital space assume that the child initiates every phase of his or her online activity. But in reality, parents or guardians have a consequential ability to manipulate their child’s information without the child’s consent or knowledge.

Lawmakers need to appreciate that parental consent means little when the parent makes the decision about the child’s online footprint to begin with.

Unregulated Age Verification Inhibits Free Speech and Anonymity

Unlike in the physical world, where a minor is unlikely to get past a bouncer with a parent’s ID, it is far more difficult to reliably authenticate age in cyberspace. Legal requirements for age verification have therefore surfaced as a means of policing how children portray themselves online. However, the lack of principled guidance on how to conduct age verification threatens children’s rights to free speech and digital anonymity.

Restricting Speech

The additional step of determining the age of online users, when poorly implemented, chills online speech by imposing burdensome protocols between the speaker and the act of publication. While age verification sounds like an innocuous mechanical procedure, there is legal recognition that “people may fear to transmit their personal information, and may also fear that their personal, identifying information will be collected and stored in the records of various Web sites.”[14]

In 1996, the Communications Decency Act (CDA) prohibited businesses from distributing “indecent” material to minors under 18 and provided for content access restrictions.[15] The Supreme Court struck down this provision of the CDA as an unconstitutional intrusion on protected speech, especially given how there was “no effective way to determine the identity or the age of a user who is accessing material through e-mail, mail exploders, newsgroups or chat rooms.”[16] Later, in 1998, the Child Online Protection Act (COPA) criminalized the distribution of “material that is harmful to minors.”[17] But the Third Circuit held that COPA’s prescribed age verification was “effectively unavailable” because existing technologies were not sufficiently reliable, all while “[deterring] users from visiting implicated web sites.”[18]

Even today, technology has not evolved to a point that renders these earlier arguments moot. Current age verification techniques still slow down access to websites and result in higher “bounce” rates among children seeking to express themselves on their preferred platforms.[19] If required to implement age verification, newer websites and apps that are less established than the widely used incumbents would find it even harder to break into the market and offer alternative venues where a wider variety of speech can flourish.[20]

Preventing Anonymity and Promoting Surveillance

The right to anonymity is enshrined in the First Amendment, and the Supreme Court has repeatedly held that the government should not restrict speech for minors more than it does for adults.[21] Courts point to the “chilling effect” on speech of stripping anonymity via age verification.[22] Today, children and teens play a critical role in publicizing and disseminating content, including around sensitive issues such as the #MeToo movement, and anonymity is a key ingredient of that engagement and needs to be protected.[23]

Besides explicit age declaration, some technologies estimate age based on user behavior and invasive tracking.[24] This resembles a Kafkaesque world, where instead of knowing that a “Big Brother” is watching over us, we now never quite know who is using our information.[25] Unlike in the physical world, where surveillance measures like CCTV cameras are often visible and conspicuous, online data surveillance occurs at a far more subliminal level.[26] With immersive technologies such as the metaverse, children and adults alike are likely to anthropomorphize virtual agents and exchange information more freely without appreciating how this data could be exposed downstream.[27] Absent targeted regulation, the way EdTech and entertainment apps are purchased and “invited” into children’s homes and schools likely suffices to establish legal consent to data surveillance. This would frustrate many privacy claims against non-explicit surveillance flowing from the use of those services.

Regulation Focused on Articulating Policy Objectives

This article promotes principle-based regulation in response to the two issues above. In essence, regulators should encourage companies to design services and products in the interests of children, instead of merely complying with detailed rules that may be counterproductive to certain rights. Regulators ought to determine and articulate their underlying policy goals rather than pen broad-brush provisions that simply restate these two measures without sufficient guidance for industry.

It is worthwhile to reference how Dr. Abby Everett Jaques approaches policy design concerning automated vehicles (AVs).[28] The “trolley problem” is widely associated with AV policy design: it asks which option, A or B, the driver should pick in a given situation, and uses that “right” choice to inform a rule for AVs. Jaques seeks to replace this method of generalizing rules from individual choices or “transactions”: “[t]he right question isn’t what would I do if I were forced to choose between swerving and going straight. The right question is what kind of world will I be creating if this is the rule. What will the patterns of advantage and disadvantage be?”[29] This is a form of structural analysis which may not provide the “right” answer as immediately as simply choosing between A or B, but it enables us to look at the overall policy outcome we seek to create after aggregating the individual choices or “transactions.”

In concrete terms, a rule that always spares the pedestrian crossing with the light (option A) at the expense of the jaywalker (option B) amounts, in the aggregate, to implementing the death penalty for jaywalking. This raises the question of whether that is the policy outcome society wants. This perspective also removes us from the individual transaction and elevates us to the bigger picture, where social and environmental factors fall under our control. For example, by installing guardrails that prevent pedestrians from crossing anywhere but at crosswalks, the “transaction” is no longer limited to an A/B scenario.

It is unnecessary and indeed reductionist to ask whether we should do away with parental consent or age verification. Consider the “transaction” where a child seeks to access an EdTech tool on a multi-function platform such as Facebook. Instead of prescribing check-the-box privacy notices, regulations should compel companies to assess policy questions such as: (1) whether the parent or the child is better positioned to consent to a particular service; and (2) whether the additional consent procedure creates a meaningful barrier to accessing essential content in a particular target community. With regard to age verification, what industry (and children) need are not blunt rules on how to interfere at specific stages of the data cycle; instruments that articulate specific policy goals to preserve some threshold of anonymity or curtail surveillance would be more appropriate.

Accordingly, the following two proposals outline intended policy outcomes and the considerations required to drive compliance through understanding.

Respecting Children as Rights Bearers

The right to privacy should encompass privacy from parents and family amidst a growing global effort to empower the child. The child’s right to informational self-determination, and the need to “[accommodate] their emerging autonomy,” is explicitly recognized in the Global Privacy Assembly’s (GPA) Resolution on Children’s Digital Rights.[30] Australia expressly refuses to specify a minimum age for legal consent and prioritizes capacity instead.[31] Anyone under 18 has the presumed capacity to provide consent unless he or she lacks “maturity,” at which point the parent/guardian may be involved.[32] Notably, even if the child lacks capacity or maturity, he or she should be involved in the consent decision “as far as practical.”[33] This is a pronouncement that the power over privacy should be left to the individual, away from undue paternalistic influence. It remains to be seen if the United States might enshrine a similar position.

The right to freedom of expression and the need to respect the views of children are closely related to the child’s right to privacy. These should be fully integrated as part of the child’s First Amendment rights by providing avenues for children to voice their opinions, especially on matters concerning their own privacy. Pertinently, policymakers should consult children and make laws with them instead of at them. The UNCRC itself, which enshrines the right of the child to have his or her best interests taken as a primary consideration,[34] relied on the input of young people from drafting through dissemination. In a similar spirit, the State of Victoria (Australia) established the Youth Advisory Group (YAG) to seek advice from young people between the ages of 15 and 22 regarding privacy issues that impact them.[35] U.S. lawmaking needs a similar consultative process to reflect the digital autonomy of children, not of their parents.

Age Verification — Regulations to Facilitate Technological Solutions

Professor Lessig propounds that one avenue by which cyberspace can be regulated is to look to its “architectural” components.[36] In our scenario, this entails identifying which aspects of age verification technology are amenable to regulatory guidance. This goes beyond simply prescribing a requirement for age verification without more or dictating specific methods where no technological silver bullet exists.

Fundamentally, privacy by design should be a core policy objective. Companies must “proactively consider privacy issues at every stage of product development by their entire workforce.”[37] Notwithstanding the additional costs, this must be encouraged as a matter of principle, and enforced via ex post penalties which would justify the initial investment. Newer legislation such as the California Age-Appropriate Design Code (“CA AADC”) has spelled out notions of this by calling for age assurance methods to be “proportionate to the risks that arise from the data management practices of the business, privacy protective, and minimally invasive.”[38] Such statements at least lay the foundation for future guidance on what technologies fall within these aims.

Utah State University researchers propose assessing policy options based on three criteria: (1) the balance between policy objectives and tradeoffs; (2) the specificity of guidance provided to users and companies; and (3) the understanding of how age assurance mechanisms perform in practice.[39] To promote specificity, they propose that agencies such as the National Institute of Standards and Technology (“NIST”) should release guidance on the risk profiles of different online product features to facilitate the development of age verification methods.[40] To promote understanding, they urge state and federal regulators to involve companies in age assurance sandboxes, akin to how euCONSENT has helped data minimization efforts for a pilot age assurance system in the EU.[41]

Such multifactorial frameworks for thinking about how to regulate age verification are invaluable for upcoming privacy-preserving age verification efforts. Third-party verification, for instance, shifts the task of verifying age from the company providing the requested service (e.g., a game or EdTech platform) to an independent company. The latter is contracted to provide an assurance about the user’s age without sharing the underlying age-related data with the first-party company. Louisiana has already implemented this in collaboration with LA Wallet, the nation’s first mobile platform that can carry a legal digital version of a resident’s driver’s license or state ID to provide real-time age verification.[42]
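To make the underlying data flow concrete, the short Python sketch below illustrates one way a third-party age assurance token could be structured: the verifier sees the date of birth and returns only a signed claim that the user meets an age threshold, which the requesting service validates without ever receiving the date of birth itself. This is a minimal illustration, not a description of LA Wallet’s actual protocol; the shared-secret signing, function names, and threshold are assumptions made for the example, and a real deployment would likely rely on asymmetric credentials and audited infrastructure.

# Hypothetical sketch of a third-party age assurance flow (Python standard library only).
# The shared secret, function names, and claim format are illustrative assumptions,
# not the protocol used by any real verifier such as LA Wallet.
import hmac
import hashlib
import json
import time
from datetime import date

SHARED_SECRET = b"demo-secret-shared-out-of-band"  # assumed trust anchor for the example

def issue_assurance(birthdate: date, threshold: int = 18) -> dict:
    """Run by the third-party verifier: it sees the birthdate but emits only a
    signed yes/no claim that the user meets the age threshold."""
    today = date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day))
    claim = {
        "meets_threshold": age >= threshold,
        "threshold": threshold,
        "issued_at": int(time.time()),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def accept_user(token: dict) -> bool:
    """Run by the first-party service: it checks the signature and reads the
    boolean claim; the date of birth is never transmitted to it."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"]) and token["claim"]["meets_threshold"]

# Example: a user born in 2010 does not meet an 18+ threshold, so access is denied
# even though the service never learns the underlying birthdate.
token = issue_assurance(date(2010, 5, 1), threshold=18)
print(accept_user(token))  # False

The design point worth noting is data minimization: the first-party service learns a single yes-or-no answer rather than the identity document behind it.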

Besides regulating through architecture, Lessig calls for an optimal mix of four modalities (i.e., law, social norms, markets, and architecture).[43] The interaction between law and architecture is pertinent here. Facilitating the implementation of technologies (e.g., third-party age assurance) should be accompanied by a system for legal enforcement. This could involve audit requirements to ensure accountability by the third-party companies and a certification system to signal reliability. A private right of action could be particularly helpful against harmful age verification practices. Louisiana House Bill 142 recognizes this right against any party conducting age verification that “retain[s] any identifying information of the individual after access has been granted to the material.”[44] In comparison, the CA AADC does not provide a private right of action and only authorizes the state’s Attorney General to issue civil penalties. With the proliferation of unprecedented harms to children, we must not tolerate under-enforcement due to reasons such as a lack of agency capacity.

Conclusion

While this article focuses on privacy, child safety is a far broader issue, as evidenced by the January 2024 Senate hearing concerning topics such as reporting child sexual abuse material.[45] However, turning such efforts into law has proved challenging. By recommending principle-based regulation anchored in clear policy goals over blunt rules, this Commentary seeks to facilitate the formulation of more practical laws, for privacy and beyond, that may unite more stakeholders towards the common goal of protecting the next generation.



[1] Cecilia Kang & David McCabe, ‘Your Product Is Killing People’: Tech Leaders Denounced Over Child Safety, N.Y. Times (Jan. 31, 2024).

[2] Justin Hendrix, Prithvi Iyer & Gabby Miller, Transcript: US Senate Judiciary Committee Hearing on "Big Tech and the Online Child Sexual Exploitation Crisis", TechPolicy.Press (Jan. 31, 2024).

[3] At the federal level, the Children’s Online Privacy Protection Act of 1998, 15 U.S.C. §§ 6501–6506 (Supp. V 2000) (“COPPA”), imposes certain requirements on operators of websites or online services directed to children under 13 years of age, or those with actual knowledge that they are collecting personal information from that demographic. At the state level, the California Age-Appropriate Design Code, A.B. 2273, Chap. 320 (Cal. 2022) (“CA AADC”), imposes even wider obligations than COPPA. Businesses that do not specifically target children, but provide any “online service, product, or feature… likely to be accessed” by users under 18 must, among other things, conduct some form of age estimation (not exact verification) of their users and configure a “high level of privacy” by default for children.

[4] There is no universal age threshold for children across the world. Hence, this article adopts an inclusive approach by referring to children broadly as anyone under 18, the upper bound set by the United Nations Convention on the Rights of the Child (“UNCRC”), opened for signature Nov. 20, 1989, 1577 U.N.T.S. 3 (entered into force Sept. 2, 1990), and most state online safety laws in the U.S. While the U.S. has not yet ratified the UNCRC, it is a signatory and is expected to make laws with serious consideration of these rights. This is exemplified by the CA AADC in its express reference to the UNCRC.

[5] Anita L. Allen, Minor Distractions: Children, Privacy, and E-Commerce, 38 Houston L. Rev. 751, 761 (2001).

[6] Id.

[7] Id.

[8] Zahra Takhshid, Children’s Digital Privacy and the Case Against Parental Consent, 101 Tex. L. Rev. 1417, 1455 (2023).

[9] Brendan Sullivan, Kentucky’s Stance on Non-Profit, Parental Liability Waivers: How Everyone Can Profit from Their Enforceability, 47 N. Kentucky L. Rev. 75, 78 (2020).

[10] Takhshid, supra note 8, at 1455.

[11] Mom.Uncharted (last visited Dec. 3, 2023).

[12] Morgan Sung, Their Children Went Viral. Now They Wish They Could Wipe Them From The Internet, NBC (Nov. 3, 2022).

[13] EJ Dickson, A Toddler on TikTok is Spawning a Massive Mom-Led Movement, Yahoo Fin., (Jul. 20, 2022).

[14] Am. C.L. Union v. Ashcroft, 322 F.3d 240 (3d Cir. 2003).

[15] Title V of the Telecommunications Act of 1996, Pub. L. No. 104-104, 110 Stat. 56. Content access was restricted “by requiring the use of a verified credit card, debit account, adult access code, or adult personal identification number.”

[16] Reno v. Am. C.L. Union, 521 U.S. 844, 855 (1997).

[17] Child Online Protection Act of 1998, Pub. L. No. 105-277, 112 Stat. 2681. COPA similarly provided for age verification measures including but not limited to the “use of a credit card, debit account, adult access code, or adult personal identification number” or a digital age certification.

[18] Am. C.L. Union v. Mukasey, 534 F.3d 181 (3d Cir. 2008).

[19] Eric Goldman & Adrian Moore, California’s Age-Appropriate Design Code Act Threatens the Foundational Principle of the Internet, Reason Foundation (Sept. 7, 2023).

[20] Id.

[21] Shoshana Weissman, Age-Verification Methods, in Their Current Forms, Threaten Our First Amendment Right to Anonymity, R St. (June 1, 2023); Erznoznik v. City of Jacksonville, 422 U.S. 205 (1975).

[22] See, e.g., Am. C.L. Union v. Gonzales, 478 F. Supp. 2d 775 (E.D. Pa. 2007) (noting that consumers are unwilling to “reveal personal and financial information in order to access content”).

[23] Yousra Elbagir, Anonymity Helps #MeToo Extend Its Reach into New Communities, Fin. Times (July 27, 2020).

[24] CNIL, Online Age Verification: Balancing Privacy and the Protection of Minors (Sept. 22, 2022).

[25] Daniel Solove, Privacy and Power: Computer Databases and Metaphors for Information Privacy, 53 Stan. L. Rev. 1393, 1419 (2001).

[26] Lawrence Lessig, The Law of the Horse: What Cyberlaw Might Teach, 113 Harv. L. Rev. 501, 503 (1999).

[27] Brittany “Straithe” Postnikoff, When Robots Are Everywhere, What Happens to the Data They Collect?, Brookings (Feb. 15, 2022).

[28] Abby Everett Jaques, Why the Moral Machine is a Monster, We Robot Conf. (2019).

[29] Id.

[30] 43rd Closed Session of the Global Privacy Assembly, Adopted Resolution on Children’s Digital Rights (Oct. 2021).

[31] Office of the Australian Information Commissioner, Children and Young People, https://www.oaic.gov.au/privac... (last visited Dec. 3, 2023).

[32] Id.

[33] Id.

[34] See UNCRC art. 3(1); see also UNHCR Guidelines on Determining the Best Interests of the Child (May 2008).

[35] Office of the Victorian Information Commissioner, Children and Young People, Youth Advisory Grp. (last visited Dec. 4, 2023).

[36] Lessig, supra note 26, at 503.

[37] Mark Mikhael, COPPA: The Privacy Law That Wasn’t, Seton Hall Univ. Student Works, 1176 (2021).

[38] The California Age-Appropriate Design Code, A.B. 2273, Chap. 320 (Cal. 2022).

[39] Scott Brennen & Matt Perault, Keeping Kids Safe Online: How Should Policymakers Approach Age Verification?, Ctr. Growth & Opportunity Utah St. Univ. (June 21, 2023).

[40] Id.

[41] euCONSENT, Electronic Identification and Trust Services for Children in Europe (last visited Dec. 11, 2023).

[42] LA Wallet (last visited Dec. 11, 2023).

[43] Lessig, supra note 26, at 511.

[44] 2022 La. Acts No. 440 (H.B. 142).

[45] David McCabe & Cecilia Kang, Will Lawmakers Really Act to Protect Children Online? Some Say Yes., N.Y. Times (Feb. 1, 2024).