CyberLaw, 25 years later: Innovation, transformation, and an emerging backlash
By Jonathan Rosenoer, Executive and Senior Inventor at IBM
A recognized expert and thought leader at the intersection of technology, risk, and regulation, Jonathan Rosenoer authored the first book on Internet law, CyberLaw: The Law of the Internet (Springer-Verlag, 1997). He served as an expert to the Executive Office of the President on the 1997 U.S. Framework for Global Electronic Commerce. In a 2002 Harvard Business Review article, Jonathan forecast, and proposed technology solutions for, the cyber risks that would be realized with the rise of WikiLeaks.
At IBM, Jonathan is an executive and Senior Inventor with the firm’s new Industry Platforms business, where he focuses on advanced artificial intelligence and blockchain. Prior to IBM, he served as a senior operational risk management executive at two of the world’s largest banks, JPMorgan Chase and BNP Paribas. He started his career as an attorney in Silicon Valley at Fenwick & West. Jonathan is a member of the State Bar of California and the District of Columbia Bar.
Twenty-five years ago, while working as a Silicon Valley attorney, I coined the term “CyberLaw” (and later wrote the book) to describe a new legal domain that would emerge to regulate and control technological deployment and processes in Cyberspace.[1] That same year, prohibitions on commercial use of the Internet were dropped. Broadly recognized as an engine of transformation, the Internet promised to empower citizens, democratize societies, and change business and economic models. To nurture its growth and future success, the US government urged a market-oriented approach, encouraging self-regulation and preempting potentially “harmful” regulation. But this approach has fueled a classic market failure, and there is strong evidence of a mounting backlash.
Funded by the National Science Foundation, the Internet was initially intended for non-commercial government and research use. Commercial access began in 1992, starting with email services. Three years later, the Netscape IPO opened the floodgates to continuous digitization and disruptive innovation. (Although Netscape was then only fifteen months old with no profits, it was worth $2.2 billion at the end of its first day of trading.) The government quickly recognized the commercial Internet as an opportunity to create tremendous economic value and wealth. In 1997, President Clinton announced a “Framework for Global Electronic Commerce,” and his administration warned government officials not to take actions that might inhibit the new digital marketplace. Since then, myriad connected products and services have been introduced. They include smartphones that connect us, social networks viewed as essential utilities, and apps to pay bills, hail rides across town, rent apartments, and file taxes.
But alarm has been raised about an ever-evolving and escalating risk landscape, particularly as the threat surface swells from digital ecosystems into the physical world. The Internet has been (mis)used as a tool of crime, terrorism, asymmetric warfare, and suppression of democratic institutions. The rise of the Internet of Things has driven anxiety about the impact of security gaps in products ranging from connected cars to pacemakers and dolls. Similarly, there is growing concern over the potential adverse impact of artificial intelligence and robotics.
These concerns have produced an unmistakable and growing demand for accountability and responsibility, which is best understood as a reaction to market failure. Much innovation has been driven by a systemic failure to price the true cost of risk, ignoring worst-case potential impacts, and by regulatory arbitrage: building lucrative opportunities by “figur[ing] out ways to avoid taxes or safety regulations or insurance costs that their old-economy, non-‘sharing’ competitors are stuck with.” The resulting (mis)allocation of rewards, and the accumulation of assets that have proven particularly sensitive to high-severity loss events, signal that the market is ripe for intervention.
Market failures in cyberspace are plentiful, but one key failure lies in the shadow world of technical debt. Technical debt describes the costs of unfinished or haphazard technical work: a company underinvests in its technology to preserve capital, delay development expenses, and, critically, enter the market faster. Because the true cost of the job is deferred, the job cannot be considered complete. Our legal system allows firms to accumulate technical debt without oversight. Statutory loopholes explicitly encourage it by, for example, allowing software application vendors and service providers to expressly disclaim that their goods or services meet minimum quality standards. And while accumulating technical debt allows companies to minimize development expenses, the same debt imposes a substantial burden on consumers.
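To make the debt metaphor concrete, consider a minimal back-of-the-envelope sketch (the growth rate and figures below are illustrative assumptions, not data from any study) that treats deferred engineering work as a loan that compounds:

\[
C_{\text{remediate}}(t) = C_0\,(1+r)^{t}
\]

where \(C_0\) is the cost of doing the work properly today, \(r\) is the annual rate at which shortcuts compound (workarounds layered on workarounds, unpatched components, lost institutional knowledge), and \(t\) is the years of deferral. Assuming, say, \(C_0 = \$1\text{M}\) and \(r = 25\%\), five years of deferral yields a remediation bill of roughly \(\$1\text{M} \times 1.25^{5} \approx \$3\text{M}\), much of it borne by customers rather than the vendor.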
For cyber, the critical area of unfinished business is security. The Internet does not have a secure technical foundation, as it was built to support communications, not commercial transactions. Attackers need only uncover a single gap or flaw that can be exploited across millions of machines. If a substantial up-front investment is needed to break security, an attacker will identify an opportunity to profit large enough to justify the cost (a phenomenon dubbed the “inverse CSI effect”).
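The economics can be sketched with a simple expected-value inequality (my own illustrative framing; the symbols below are assumptions, not terms from the security literature). A rational attacker invests in breaking a defense whenever

\[
p \cdot g \cdot N > C
\]

where \(C\) is the up-front cost of developing the exploit, \(N\) is the number of machines sharing the flaw, \(p\) is the probability that any one of them is successfully compromised, and \(g\) is the average gain per compromised machine. Because \(N\) can run into the millions while \(C\) is paid once, even modest per-machine gains clear the bar: at an assumed \(g = \$1\), \(p = 0.01\), and \(N = 10\) million, the expected payoff is \$100,000, enough to justify a substantial development effort.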
Significantly, exploitable security vulnerabilities are not isolated to the Internet platform. They are also found within even loosely connected systems, where vulnerabilities can accumulate between versions of installed software, with updated releases, and through business partners. The 2013 Target breach, for example, began with the hijacking of an HVAC company’s external access to Target’s network. Obsolete software is a known challenge for systems ranging from modern cash registers to missile-carrying nuclear submarines.
To scale cybercrime to epic proportions, criminals have subverted advanced networking, communication, and payment technologies. Cybercrime forums operate broadly as hidden services on the dark web. Encryption software is widely used on smartphones and in communication applications to block law enforcement access. Virtual cryptocurrencies, such as Bitcoin, enable payments to flow around money-laundering controls with the promise of anonymity. The result is a staggering cost shouldered by consumers. In 2012 (prior to the breaches at Yahoo, Target, and Home Depot), US consumers suffered nearly twice the losses from identity theft as from all other property crimes combined.
Generally, consumers can be compensated via the judicial system when companies cut corners. However, U.S. consumers seeking to hold companies directly accountable in court for data breaches have met strong resistance, for reasons including a fear of opening the door to hundreds of thousands of lawsuits each year. Instead, aggrieved consumers have turned to consumer protection laws to safeguard personal data, an effort beginning to bear fruit as courts continue chipping away at traditional barriers to class actions: the requirements of standing and actual harm.
Despite these real costs, US lawmakers and regulators have proven reluctant to compel corporate accountability. This reluctance stands in marked contrast to European attitudes. In Europe, for example, the protection of personal information is viewed as a human right, and measures to secure it are amplified by distrust of corporations. The extraterritorial reach of the EU General Data Protection Regulation, fueled by the Snowden revelations, and its potential fines of up to 4% of global turnover will reallocate the economic consequences of a data breach and will likely drive significant change in US companies’ data protection practices.
Data protection is not, however, the only area where pressure for regulatory change is building. High-profile Internet businesses have flourished by ducking the regulatory standards and oversight applied to traditional businesses engaged in similar activities. They arbitrage regulation by restructuring an activity so as to eliminate or reduce the cost of compliance, capturing the savings as revenue. Promoters claim that innovative new firms overcome market imperfections and that public policy should adapt to accommodate them. They argue that the platforms they build are best placed to set standards once established by regulators, who, they claim, cannot monitor compliance given the scale and complexity the platforms embody.
Critics warn that lawmakers abdicate responsibility if they do not appreciate the full impact of new technology and uphold the sovereignty and legitimacy of the state and its interests. They point to monopolistic behaviors, as well as unfair labor practices and threats to public safety. These claims have begun to have an effect, driving regulators to assert control in areas ranging from cryptocurrencies to the sharing economy.
As untoward technology-driven events continue to materialize and reach deeper into the lives of a broader range of citizens, they are driving calls to identify, define, and impose societal and technical requirements to rein in current and emerging risks. Artificial intelligence, for example, is being met with concern about the impact of algorithmic bias and has been described by a leading technologist, Elon Musk, as humanity’s top existential risk. Similarly, the rise of robotics has led Bill Gates to suggest taxing companies to fund jobs programs for displaced workers. Fears of killer robots have led to calls for the U.N. to ban their use internationally.
Twenty-five years after the debut of “CyberLaw,” the evidence clearly establishes a tight coupling between law and technology. Recognition is growing that innovation comes with risks whose costs need to be acknowledged and allocated appropriately. The public, including individual lawmakers and judges, has been broadly victimized by a cascade of cybercrime and has learned from recent events that threats extend beyond individuals to critical infrastructure and democratic institutions. The mass media, noting a backdrop of outsized profits and market capitalizations among the leading technology companies, raises issues of fairness and asks who in society should shoulder the externalities. It also points to a lack of corporate accountability as a root cause, asking whether that lack results from undisclosed funding and “capture” of academics who may have influenced policymakers. The likely result is that notorious events that touch all of us, such as the Equifax hack, will bring Congress, judges, and other members of society to the table to rebalance the division of incentives and costs through increased regulation, stronger enforcement, and relaxed barriers to class actions.
[1] Cyberspace is a term coined by William Gibson that first appeared in his short story “Burning Chrome.”