AI will revolutionise financial advice

Aligning AI advisers with the ancient laws of agency

Recent developments in AI threaten to revolutionise the way we organise many high-skill industries. Financial advice is already being shaped by this emergent technology, but when money is involved, the potential fallout from bad-faith actors and inept regulation is heightened. Jack Solowey shows why we need to look to the agency law of the past to understand the relationship between client, software provider and AI.

 

Large language models (LLMs), the hot new thing in artificial intelligence (AI), display uncanny talents for tasks traditionally requiring human minds. These artificial neural networks, trained on massive datasets to predict which bits of text flow best in a sequence, can excel at functions as diverse as computer programming and, seemingly, rapping. They have passed the bar and medical licensing exams and, according to some physicians, are better clinicians than their human colleagues. Contrary to earlier fears regarding automation, LLMs may make high-paying white-collar jobs the most vulnerable to disruption.


Unsurprisingly, individual users and the biggest banks are turning to these intelligent tools with the hope of gaining an edge in the cognitively demanding and often lucrative field of finance. And while mileage may vary, by some measures LLMs have already been found more helpful than human experts in answering financial questions. When eerily potent technology enters such a heavily regulated sector, many ask how policymakers ought to respond. Yet policy leaders’ current track record on novel financial technology does not inspire confidence in the wisdom of future comprehensive regulatory responses to AI, where the stakes of getting it wrong have been described as potentially unbounded: losses from existential risk on the one hand, and the opportunity cost of missing out on massive benefits on the other.

Counterintuitively, despite the advanced nature of LLMs, the Common Law of agency, a relatively archaic body of law covering the duties owed when people act on one another’s behalf, may offer one of the most helpful guides for grappling with the rise of AI advisers. Agency law has for centuries allocated responsibility and remedied breaches among parties. As software itself gains autonomy, agency law can provide a legal framework for AI financial advisers that is suitably adaptable to rapidly changing technology.

“Robo-advisers” are not new. In 2017, the U.S. Securities and Exchange Commission (SEC) issued guidance regarding these vaguely described “automated digital investment advisory program[s],” indicating in broad strokes that the securities laws imposing fiduciary duties (the legal duty to act in a client’s best interest) on investment advisers also applied to their “robot” colleagues. These original robo-advisers, however, were powered by algorithms far more limited than today’s LLMs.

Although the first robo-advisers—such as Betterment and Wealthfront—delivered innovations like automated portfolio allocation, spreading investments according to an investor’s desired risk profile, they were constrained by scripted user interfaces and pre-programmed investment frameworks (like Modern Portfolio Theory).
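To make the contrast concrete, the logic behind that first generation of tools can be caricatured in a few lines. The following is a minimal, hypothetical sketch (the function name and the weighting rule are illustrative assumptions, not any provider's actual code) in which a fixed rule maps a risk-questionnaire score to a stock/bond split:

```python
# Hypothetical sketch of pre-programmed robo-adviser logic: a fixed rule maps a
# client's risk-questionnaire score to a stock/bond split. No open-ended reasoning.

def allocate_portfolio(risk_score: int) -> dict:
    """Map a 1-10 risk score to a simple two-asset allocation (illustrative only)."""
    if not 1 <= risk_score <= 10:
        raise ValueError("risk_score must be between 1 and 10")
    stock_weight = 0.2 + 0.06 * (risk_score - 1)  # 20% stocks at lowest risk, 74% at highest
    return {"stocks": round(stock_weight, 2), "bonds": round(1 - stock_weight, 2)}

print(allocate_portfolio(3))  # {'stocks': 0.32, 'bonds': 0.68}
```

However simplified, the point stands: every output such a system can produce is enumerable in advance by the provider that wrote the rules.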


But whereas working with early robo-advisers resembled reading a Choose Your Own Adventure novel, chatting with today’s LLMs can feel like speaking to a polymathic genie. LLMs’ text-based interfaces take inputs as open-ended as a conversation, and, unlike robo-advisers, LLMs are not hardcoded to implement specific investment theses.

Rather, a plain-vanilla LLM typically is trained on vast swaths of text, from webpages to books. Depending on the prompt, an LLM likely could expound on a given investment framework, like Modern Portfolio Theory, or even potentially leverage facets of the theory to synthesize an individualized strategy constituting investment advice (as opposed to merely general financial education). However, the same LLM also likely would have been exposed to alternative theories, like behavioral finance, not to mention tracts in related fields like macroeconomics, economic history, and investor psychology. The scope of an LLM’s potential outputs is therefore vast. Moreover, those outputs are co-determined by the creativity of the user, not merely the designs of the programmer. While some LLMs are trained on finance-focused datasets (like BloombergGPT) and can be fine-tuned to hew more closely to particular schools of thought, LLMs, unlike robo-advisers, improvise on the fly, and the variety of outputs they can produce is far greater.

As LLM applications have advanced in recent weeks, they have come to resemble autonomous agents more than pre-programmed automatons. A flurry of new “AutoGPT” and “BabyAGI” applications—many of them open source—aims not only to answer users’ questions but also to accomplish tasks independently.

In short, AutoGPTs allow users to input a goal and the instruction that “your first task is to create your next.” This self-prompting software runs in a loop, recursively regenerating a ranked list of subtasks for execution. Combined with the capacity to search the Internet and write its own code, the software’s “execution agent” has the potential to act in the digital world and even upgrade itself.
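In rough pseudocode, that loop can be sketched as follows. This is a hypothetical illustration, not any project's actual implementation: `ask_llm` and `execute` are stand-in stubs, and real AutoGPT-style agents layer memory, task ranking, and tool integrations on top of this skeleton.

```python
from collections import deque

def ask_llm(prompt: str) -> list[str]:
    """Stub for a language-model call that returns a ranked list of subtasks."""
    return []  # a real agent would parse the model's text output here

def execute(task: str) -> str:
    """Stub for the 'execution agent' (web search, code generation, API calls, etc.)."""
    return f"result of: {task}"

def auto_agent(goal: str, max_steps: int = 10) -> None:
    # Seed the task queue with the self-prompting instruction.
    tasks = deque([f"Create the next tasks needed to achieve: {goal}"])
    for _ in range(max_steps):
        if not tasks:
            break
        task = tasks.popleft()      # take the current highest-ranked task
        result = execute(task)      # act in the digital world
        # Feed the result back to the model and regenerate the ranked task list.
        new_tasks = ask_llm(
            f"Goal: {goal}\nCompleted: {task}\nResult: {result}\n"
            "List and rank the next subtasks."
        )
        tasks = deque(new_tasks + list(tasks))
```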

Users already are experimenting with AutoGPTs to identify sales prospects, conduct market research, build software, and, yes, analyze investment data. This progress has further stoked the raging debate over AI safety and how to regulate LLMs writ large. Society has more experience with financial regulation than it does with preventing an artificial superintelligence from turning everyone into paperclips. So, in the interest of starting small, how should we think about regulating AI investment advisers?


In 2021, the SEC requested information on “digital engagement practices,” including the role of deep learning AI and chatbots in investment advising. In the time since, consumer AI has undergone multiple paradigm shifts: ChatGPT was released on November 30, 2022, GPT-4 (OpenAI’s latest LLM) on March 14, 2023, and Auto-GPT on March 30, 2023. Herein lies a fundamental challenge for AI regulation: the rate of technological change can significantly outpace regulation. While regulators’ benign neglect often is a blessing for initial innovation, legacy regulations can eventually, and recurringly, become constraints when regulators apply outdated frameworks to novel technologies.

Indeed, robo-adviser regulation is a poor fit for LLMs. Traditional robo-advisers generally are directly traceable to providers that hold themselves and their apps out, and register, as investment advisers. In other words, there is a readily identifiable party behind the code, taking on both specific activities and the specialized legal responsibilities that follow. While LLM-powered apps can be deployed this way—which would make existing regulations more relevant—LLMs tend to resist such discrete cabining for several reasons. One, LLMs are pluripotent, with use cases typically determined by users on the fly.

Two, AutoGPTs turbocharge this pluripotency while also blurring the lines between where one application ends and another begins. With an AutoGPT-powered personal assistant, financial advice could become a mere subtask of the prompt “help me get my house in order.” That subtask, in turn, could be further distilled by leveraging separate instances of an LLM for processes like data collection and analysis. Even if identifying and registering the discrete parts of this software hivemind as investment advisers were conceptually possible, it likely would be impractical.

Three, LLMs themselves will soon be ubiquitous. As OpenAI CEO Sam Altman said in a recent interview with MIT Research Scientist Lex Fridman, “At this point, it is a certainty there are soon going to be a lot of capable open-sourced LLMs with very few to no safety controls on them.” Where machine intelligence is commodified and unrestricted by intellectual property laws, seeking to register ex ante every instance of it with the potential to give investment advice may quickly devolve into a losing battle of regulatory whack-a-mole.


A nimbler framework is needed. The 20th-century model of legislation empowering expert regulators to impose upfront licensing regimes has never been the law’s only means of imposing duties and remedying breaches. Centuries before specialized securities statutes imposed fiduciary duties on investment advisers, the common law of agency has identified when autonomous agents owe others fiduciary duties. As an evolutionary body of law that iteratively adapts historic principles to novel circumstances, the ancient but flexible common law may be the best framework for handling rapidly progressing autonomous AI.

Agency law asks when and how one person (the agent) owes a fiduciary duty to another (the principal) on whose behalf she acts. When applying this doctrine to AI, the first question is whether an AI program can be a legal “person.” The latest Restatement of the Law of Agency says inanimate or nonhuman objects, including computer programs, cannot be principals or agents. However, the legal reality is more complicated because the law readily ascribes legal personhood to non-human entities like corporations, and some case law suggests that computers can be agents or legal equivalents for certain purposes. Furthermore, as Samir Chopra and Laurence F. White argued in A Legal Theory for Autonomous Artificial Agents, “legal history suggests inanimate objects can be wrongdoers.” Specifically, nineteenth-century admiralty courts routinely allowed actions against ships themselves, which the Restatement acknowledges, noting that maritime law effectively ascribed legal personality to ships out of practical necessity. And while applying agency law to AI would require adaptation and gap filling, evolving is something the common law does well.

Ascribing agent capacity to autonomous AI can help solve the practical problem of determining liability in an age of LLM investment advisers. Specifically, when should a programmer be liable for the harm an LLM causes to a user based on, for example, an incompetent interpretation of the user’s investment objectives? Or when should a programmer, or user herself, be liable for the harm an LLM causes to a third party by, for instance, engaging in market manipulation? Agency law has answers.

In general, a principal’s liability for the harm caused by her agent hinges on whether the agent was acting within the scope of her delegated authority. This is determined by looking at the principal’s words or conduct. These “manifestations” can be assessed from the agent’s perspective—i.e., would a reasonable agent understand herself to be authorized to act on behalf of the principal? Or from the third party’s perspective—i.e., would a reasonable third party understand the agent to be authorized to act on behalf of the principal based on the principal’s manifestations? If so, the principal is liable (directly in the first case, vicariously in the second) to a third party harmed by the agent’s actions.


Applying these principles to LLMs, which depending on the circumstances could be considered agents of developers and users, is fruitful. For example, when determining whether the programmer of a generic LLM ought to be liable for harm to the user stemming from the LLM’s failure to give investment advice in line with the user’s best interest, one can ask whether a reasonable user would have considered the programmer’s words and conduct to indicate that offering investment advice was within the scope of the LLM’s “authority.” Did the programmer indicate, for instance, in marketing materials or a README file on GitHub that the LLM is useful as an investment adviser? If so, programmer liability for resulting harm would appear more appropriate. Less so if the programmer expressly indicated that the LLM was merely an experimental natural language processor not to be trusted for investment advice.

In the latter case, though, would applying agency law to the LLM leave the user high and dry? Not necessarily. Where the user’s prompt indicated that she gave investment-adviser authority to the LLM and the LLM assented (e.g., responded with a personalized portfolio strategy), the LLM itself would owe the user a fiduciary duty not unlike that owed to a client by a registered investment adviser under the securities laws. This duty includes a general duty of loyalty, as well as “performance” duties of care, competence, and diligence. Where the agent breaches her duties, the common law affords the principal remedies. Agency law thus provides an intuitive frame for both users and developers to understand when working with LLMs carries legal responsibilities and consequences.

So how could a user remedy a breach by an LLM application? Today, recovering money from the application itself would be all but impossible. But given the rapid pace of application development, it’s far from inconceivable that an LLM-powered application will someday be tied to a treasury running a surplus. For example, an AutoGPT may be designed to keep a cut of commissions for successful real-world task completion (e.g., booking reservations or purchasing stocks) as a form of reinforcement learning before kicking proceeds up to a community of open-source contributors. But even without venturing too far into speculative territory, the common law affords other remedies. Among the most important is the ability, under certain circumstances, to rescind a contract the agent makes with a third party on behalf of the principal. For example, where an AutoGPT in breach of a fiduciary duty executed trades through a third-party trading platform, the user could have grounds to undo those contracts and claw back funds.

In addition, there would be other routes to recovering from the programmer where the LLM caused harm outside the scope of its delegated authority. Where harm is “caused by the principal’s negligence in selecting, training, retaining, supervising, or otherwise controlling the agent,” the principal can be held liable. There are clear parallels between a programmer training an LLM and this facet of agency law.

Notably, because this is a negligence standard, developers exercising the level of care of a reasonable programmer could release LLMs without incurring undue liability. Here, the common law displays commonsense flexibility. The developer of a ChaosGPT bent on world destruction would rightfully find it hard to escape liability. But responsible developers exercising good judgment in line with reasonable industry best practices would have a defense available.


Through such mechanisms, the common law incentivizes quality-control standards high enough to mitigate risks but not so onerous that they become obstacles to deploying reasonably trained LLMs and realizing their benefits. Similarly, applying common law agency duties to LLMs themselves need not subject the software to impossibly high standards, as the performance duties outlined above are to be carried out as “normally exercised by agents in similar circumstances.” There is therefore room to assess an AI application’s performance as a financial adviser based on what is technologically attainable—an LLM chatbot with a 2021 knowledge cutoff, for example, may be evaluated differently from an integrated AutoGPT with access to the Internet—rather than on utopian aspirations that counterproductively foreclose the possibility of iterative improvements. Indeed, many users may be willing to employ experimental LLMs for investment advice notwithstanding the risks.

The time-tested wisdom of agency law could help us navigate the uncertain future of AI advisers. Whether AI systems can ever truly be “aligned” with humanity’s best interests, or even an investment advisee’s, is a heavily debated problem, and solving it from a technical standpoint is vital. Yet part and parcel of any alignment problem is determining what duties and remedies are owed in specific circumstances. By incentivizing reasonable standards of care from developers, users, and AI agents in the emerging AI economy, we can potentially leverage economic self-interest to promote human behavior and technological innovation that nudge the ecosystem toward alignment goals. The baby step of applying agency principles to AI investment advisers affords a practical and adaptive legal framework, one that could co-evolve with rapidly advancing AI and strike the delicate balance between improving our lot through innovation and protecting it through risk mitigation.
