Previously In Parts One and Two…
This is the last article in a three-part series on AI law. In part one, I discussed AI as a tortfeasor and how traditional tort theories may apply to semi-autonomous AI; in part two, I discussed how state and foreign regulators are taking AI bias and the protection of consumers seriously. Federal regulators are beginning to grapple with these issues as well. Our discussion of AI Law now turns to robo-advisors, AI speech, and AI legislation before Congress.
SEC and Robo-Advisors
Chatbots and voice bots are AIs that interact with people through the spoken and written word. They are used for everything from home assistance to advice about a person's investments and wealth. The SEC has regulated virtual robots that give investment advice ("robo-advisors") as investment advisers under the Investment Advisers Act of 1940. The SEC's guidance focuses on three distinct areas identified by the Staff, listed below, and provides suggestions on how robo-advisers may address them:
• The substance and presentation of disclosures to clients about the robo-adviser and the investment advisory services it offers;
• The obligation to obtain information from clients to support the robo-adviser’s duty to provide suitable advice; and
• The adoption and implementation of effective compliance programs reasonably designed to address particular concerns relevant to providing automated advice.
Setting aside the implication that robo-advisors may themselves have agency and fiduciary duties, holding the acts of robo-advisors to a standard similar to that applied to human investment advisors is not a large leap.
AI Speech Issues
The law related to AI speech includes First and Fourth Amendment issues, where the government may wish to restrict the speech of people using their AIs, or to discover the communications between bots and people. Commercial AI speech, such as that of robo-advisors, can clearly be restricted under First Amendment jurisprudence. But for non-commercial AI speech, the question is murkier. Recently, in a murder investigation in Arkansas, police sought certain records of interactions with a homeowner's bot, Alexa, which were stored on Amazon's servers. State of Arkansas v. Bates, Case No. 04CR-16-370 (Circuit Court of Benton County, Ark. 2016). The case involved Fourth Amendment and First Amendment arguments for the protection of bot speech.
In the state’s search warrant, the police sought “certain records, namely electronic data in the form of audio recordings, transcribed records, or other text records related to communications and transactions between an Amazon Echo device … that was located at [defendant’s] residence … and Amazon.com’s services and other computer hardware maintained by Amazon.com.” Amazon moved to quash the search warrant, arguing among other things that it sought “to protect the privacy rights of its customers when the government is seeking their data from Amazon, especially when that data may include expressive content protected by the First Amendment.” Amazon further argued that “the Alexa Voice Service response conveying the information it determines would be most responsive to the user’s query [is] … protected speech under the First Amendment.” The prosecutor eventually dropped the case, but it may be a prelude to future arguments for the protection of AI speech.
Another instance where AI and speech law intersect is speech-based torts against public figures by reporters, and the New York Times actual malice standard that protects reporters. This matters because automatically generated reporting, or robo-reporting, is growing fast. Should knowledge or recklessness be imputed to the human reporter if the AI itself knows that statements are false, or acts with reckless disregard for their truth or falsity? How this plays out is an open question. In any case, as a practical matter, the reporter should monitor what the AI reports and require its AI vendors to provide the public a way to object to false content.
Congress may also regulate bot speech more directly – in this case, regulating fake news bots. Senator Feinstein has introduced the proposed Bot Disclosure and Accountability Act of 2018 (S. 3127), which would amend the Federal Election Campaign Act of 1971 to prohibit certain automated software programs intended to impersonate or replicate human activity for online political advertising. The proposed legislation would also permit the FTC to promulgate regulations requiring certain public disclosures of software programs intended to impersonate or replicate human activity.
AI Legislation Before Congress
There are also several proposed AI laws before Congress. The most substantive is the SELF DRIVE Act (H.R. 3388), passed by the House and now before the Senate in the form of the AV START Act. Among other things, the proposed legislation prohibits a state from issuing licenses for dedicated highly automated vehicles (DHAVs) (level 4 or 5 automated vehicles) in a way that discriminates against people with disabilities. The proposed AV laws may become a model for other federal AI laws, balancing innovation against protection of the public and providing liability-shifting models for AI. Other proposed laws before Congress seek to establish studies of AI, such as the National Security Commission Artificial Intelligence Act of 2018 (H.R. 5356); the Artificial Intelligence Job Opportunity and Background Summary Act of 2018 (H.R. 4829); and the Fundamentally Understanding the Usability and Realistic Evolution of Artificial Intelligence Act of 2017 (H.R. 4625). And most recently, Senator Feinstein introduced the proposed Bot Disclosure and Accountability Act of 2018, as discussed above. Whether these proposed laws will pass, and if passed will withstand First Amendment scrutiny, only time will tell.
Many open questions remain about how to regulate AI, such as how we might craft laws and policies that protect the public while promoting innovation. Will we need to change burden-of-proof standards when the evidence of a misbehaving AI might not be readily recorded or understandable? Should we have new standards for AI transparency to prevent bias, or are the current regimes and norms enough? If Congress does not act on AI laws, will state legislators and regulators step in to fill the gap? Whatever answers we come to, it is clear that AI Law is here, and here to stay. The advice I can give to the law or computer science student today in this fast-changing arena is to be part of the debate over where AI Law should go, not just focus on the technology. Also, get out of the library and the lab and listen to more music.
Squire Patton Boggs partner Huu Nguyen is a deal lawyer with a strong technical background. He focuses his practice on commercial and corporate transactions in the technology and venture space. This article is based on talks Huu gave with colleagues at Squire Patton Boggs, including Zachary Adams, Corrine Irish, Michael Kelly, Franklin G. Monsour Jr. and Stephanie Niehaus, and he thanks them for all their support.
See New York Times Co. v. Sullivan, 376 U.S. 254 (1964).