Artificial Financial Intelligence and the Future of Finance

William J. Magnuson is an Associate Professor at Texas A&M University School of Law 

Recent advances in the field of artificial intelligence have revived long-standing debates about what happens when robots become smarter than humans.  Will they destroy us?  Will they put us all out of work?  Will they lead to a world of techno-savvy haves and techno-ignorant have-nots?  

These debates have found particular resonance in finance, where computers already play a dominant role.  The rise of high-frequency traders, quant hedge funds, and robo-advisors is a testament to the impact that artificial intelligence is having on the field. The policy community is only beginning to grapple with the consequences.

But despite the proliferation of predictions about the coming age of robot overlords, the primary danger of artificial intelligence in finance is not so much that it will surpass human intelligence, as that it will exacerbate human error.  It does so in three ways.  

First, because the artificial intelligence techniques in vogue today rely heavily on identifying patterns in historical data, their use tends to produce results that perpetuate the status quo (a status quo that exhibits all the features and failings of the data itself). Machine learning, the strategy that many AI-based systems rely on, attempts to derive rules from large data sets rather than apply pre-set rules established by a programmer. In deep learning, a subset of machine learning, neural networks identify patterns in data, transform those patterns into outputs, and pass those outputs along to additional units.  The first “layer” of units might identify patterns in the data, the next layer might identify patterns of those patterns, and so on.  Deep learning techniques have proven remarkably effective at improving the accuracy and predictive power of machines.  But because they depend on patterns in historical data, they are only as good as the quality and representativeness of the data they are trained on.  If the data used to train an algorithm is flawed, whether through poor selection methods or problems in the underlying market itself, the resulting outputs will reflect (and, indeed, strengthen) those flaws.
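
To make the point concrete, here is a minimal sketch, in Python, of how a model trained on flawed historical data reproduces the flaw. The lending data, the feature names, and the libraries used (NumPy and scikit-learn) are illustrative assumptions, not a description of any actual lender's system.

```python
# A minimal sketch (hypothetical data and features) of how a model trained on
# flawed historical data reproduces the flaw. Assumes NumPy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "historical" lending records: income and a neighborhood indicator.
income = rng.normal(50, 15, n)          # applicant income (thousands)
neighborhood = rng.integers(0, 2, n)    # 0 or 1, a proxy attribute

# Suppose past approvals tracked income but were also systematically lower
# in neighborhood 1, regardless of creditworthiness.
approved = ((income > 45) & ~((neighborhood == 1) & (rng.random(n) < 0.5))).astype(int)

X = np.column_stack([income, neighborhood])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Two applicants with identical incomes, differing only in neighborhood:
applicants = np.array([[50, 0], [50, 1]])
print(model.predict_proba(applicants)[:, 1])  # the historical approval gap persists
```

The model has done nothing wrong in a technical sense; it has simply learned the pattern it was shown, which is precisely the problem.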

Second, because some of the most “accurate” artificial intelligence strategies are the least transparent or explainable ones, decisionmakers may well give the results of these algorithms more weight than they are due.  One of the problems with machine learning-produced results is that they are often difficult to interpret in easily understood language.  Imagine, for example, asking a bank why it turned you down for a loan and receiving, in reply, the source code of its machine learning algorithm and the data set used to train it. This might be an accurate description of what the bank had in fact done to reach its decision, but it would not help you understand the reasoning behind it, or what you might do to change the outcome.  Given the difficulty of explaining and understanding machine learning algorithms and the outputs they generate, financial decisionmakers, including consumers, might simply default, without deliberation or debate, to accepting the conclusions or recommendations these algorithms produce.  Even if decisionmakers are aware of the limitations of machine learning, in the absence of clear methods for refuting or disproving an artificial intelligence outcome, artificial intelligence may take on undue weight in the structure of the financial industry.
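
The transparency problem can be sketched in a few lines as well. In the hypothetical example below (synthetic data, an illustrative scikit-learn network, invented feature labels), the most complete “explanation” the system can offer for a denial is its stack of learned weight matrices, which tells the applicant nothing actionable.

```python
# A minimal sketch of why "here are the model's parameters" is not an explanation.
# Hypothetical data; assumes NumPy and scikit-learn are available.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))   # five anonymous applicant features
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(size=1000) > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000).fit(X, y)

applicant = X[0:1]
print("decision:", model.predict(applicant))            # e.g., 0 might mean "denied"
# The fullest account of *why* is a set of weight matrices, one per layer:
print("parameters:", [w.shape for w in model.coefs_])
# Nothing in those matrices tells the applicant what to change to get a different result.
```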

Finally, because much of the financial industry depends not just on predicting what will happen in the world, but also on predicting what other people will predict will happen in the world, small errors in applying artificial intelligence (whether in data, programming, or execution) are likely to have outsized effects on markets.  Artificial financial intelligence may also produce unexpected feedback effects between competing artificial intelligence systems.  One concern is that decisions made by artificial financial intelligence systems may not truly be independent of one another: if financial institutions all deploy similar machine learning algorithms on similar data, they will tend to reach similar results. When humans make decisions, they are at least nominally doing so on their own (even if their decisions may be affected by the decisions of their peers).  Not so for artificial intelligence.  It is much simpler to copy an algorithm than it is to copy a human brain. If two competitors base their artificial intelligence strategies on the same algorithms, they will likely reach the same conclusions about the same problems.  This may well amplify both the speed and the size of market swings, as financial institutions increasingly adopt broadly consistent views of the market.
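
The correlation concern lends itself to a simple sketch too. In the hypothetical example below (synthetic prices, imaginary firms, a deliberately crude trading rule), five institutions run the same momentum algorithm on overlapping slices of the same market data and, unsurprisingly, issue the same order at the same moment.

```python
# A minimal sketch (synthetic prices, hypothetical firms) of how identical
# algorithms applied to similar data produce correlated, self-reinforcing decisions.
import numpy as np

rng = np.random.default_rng(2)
prices = 100 + np.cumsum(rng.normal(0, 1, 250))   # one synthetic price history

def momentum_signal(p, window=20):
    """The same simple rule deployed by every firm: sell if price < its moving average."""
    return "SELL" if p[-1] < p[-window:].mean() else "BUY"

# Each firm sees a slightly different, but overlapping, slice of the same market data.
firm_views = [prices[-(200 + rng.integers(0, 10)):] for _ in range(5)]
decisions = [momentum_signal(view) for view in firm_views]

print(decisions)  # identical across all five firms
# If all five sell at once, the very price move that triggered the signal is amplified.
```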

So what does this all mean for fintech policy?  Clearly, it does not mean that artificial intelligence has no place in the financial industry.  Artificial intelligence is here to stay and, what’s more, has much to offer in terms of efficiency, speed, and cost.  Financial institutions would be foolish to ignore the technology, and financial regulators would be similarly foolish to squash it.  But the short history of artificial intelligence has already highlighted the imperfections of the technology: not only does it at times magnify human error, at others it introduces new errors all its own.  As a result, regulatory frameworks will have to be more than just conceptual; they will also have to address how artificial intelligence is operationalized. How AI is deployed, and the context in which it is deployed, should attract as much scrutiny as first-order questions about the efficiency gains tied to its use.  And such scrutiny, while optimistic about the transformative potential of the technology, should be clear-eyed about its real-world limitations.  Iterative back-testing should be required of new technological innovations, just as more flexible administrative procedures should be available on the front end to help develop and explore new applications of artificial intelligence in the financial marketplace.  Only then can we hope to have a balanced approach to regulating an artificial intelligence industry that is still very much on the proverbial drawing board.

This post is based on a presentation by William Magnuson at Scalia Law School’s Program on Financial Regulation and Technology.
