
Artificial intelligence won’t save banks from short-sightedness

Didier Sornette holds the Chair of Entrepreneurial Risks at the Department of Management, Technology and Economics, Swiss Federal Institute of Technology ETH Zurich. Elisabeth Real / Keystone

Banks like Credit Suisse use sophisticated models to analyse and predict risks, but too often they are ignored or bypassed by humans, says risk management expert Didier Sornette.

The collapse of Credit Suisse has once again exposed the high-stakes risk culture in the financial sector. The many sophisticated artificial intelligence (AI) tools used by the banking system to predict and manage risks aren’t enough to save banks from failure.

According to Didier Sornette, honorary professor of entrepreneurial risks at the federal technology institute ETH Zurich, the tools aren’t the problem but rather the short-sightedness of bank executives who prioritise profits.

SWI swissinfo.ch: Banks use AI models to predict risks and evaluate the performance of their investments, yet these models couldn’t save Credit Suisse or Silicon Valley Bank from collapse. Why didn’t they act on the predictions? And why didn’t decision-makers intervene earlier?

Didier Sornette: I have made so many successful predictions in the past that were systematically ignored by managers and decision-makers. Why? Because it is so much easier to say that the crisis is an “act of God” and could not have been foreseen, and to wash your hands of any responsibility.

Acting on predictions means to “stop the dance”, in other words to take painful measures. This is why policymakers are essentially reactive, always behind the curve. It is political suicide to impose pain in order to confront a problem and solve it before it explodes in your face. This is the fundamental problem of risk control.


Credit Suisse had very weak risk controls and culture for decades. Instead, business units were always left to decide what to do and therefore inevitably accumulated a portfolio of latent risks – or I’d say lots of far out-of-the-money put options [when an option has no intrinsic value]. Then, when a handful of random events occurred that were symptomatic of the fundamental lack of controls, people started to get worried. When a large US bank [Silicon Valley Bank] with $220 billion (CHF202 billion) of assets quickly went insolvent, people started to reassess their willingness to leave uninsured deposits at any poorly run bank – and voilà.
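For readers unfamiliar with the metaphor, here is a minimal sketch of what a “far out-of-the-money put” position looks like to its seller: a small premium is collected in normal times, while a large loss stays latent until a rare crash pushes the price below the strike. The function and all figures below are invented for illustration and are not part of Sornette’s analysis or any bank’s actual book.

```python
# Illustration of the "far out-of-the-money put" metaphor: a position that books
# small, steady gains in normal times but hides a large latent loss that only
# materialises in a tail event. All numbers are invented for illustration.

def short_put_pnl(strike: float, premium: float, spot_at_expiry: float) -> float:
    """Profit or loss, per option, for the seller of a put held to expiry."""
    intrinsic_value = max(strike - spot_at_expiry, 0.0)  # payout owed by the seller
    return premium - intrinsic_value

strike, premium = 70.0, 0.50  # strike far below today's price of 100, tiny premium

for spot in (105.0, 95.0, 80.0, 40.0):  # the last value is a crash scenario
    print(f"spot at expiry {spot:6.1f}: P&L {short_put_pnl(strike, premium, spot):+7.2f}")

# Typical output: +0.50 in every normal scenario, then -29.50 in the crash,
# i.e. years of quiet premium income wiped out by a single tail event.
```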

SWI: This means that risk prediction and management won’t work if the problem is not solved at the systemic level?

D.S.: The policy of zero or negative interest rates is the root cause of all this. It has led these banks into positions that are vulnerable to rising rates. The huge debts of countries have also made them vulnerable. We live in a world that has become very vulnerable because of the short-sighted and irresponsible policies of the big central banks, which have not considered the long-term consequences of their “firefighting” interventions.

The shock is a systemic one, starting from Silicon Valley Bank, Signature Bank etc., with Credit Suisse being only an episode revealing the major problem of the system: the consequences of the catastrophic policies of the central banks since 2008, which flooded the markets with easy money and led to huge excesses in financial institutions. We are now seeing some of the consequences. 
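A minimal numerical sketch of the interest-rate mechanism Sornette points to, with illustrative figures only (not data from Credit Suisse, Silicon Valley Bank or any other institution): a long-dated, low-coupon bond bought when yields were near zero loses a large share of its market value when yields rise, which is what left some banks sitting on big unrealised losses.

```python
# Illustrative only: market value of a long-dated, low-coupon bond before and
# after a rise in yields. Figures are invented, not taken from any bank.

def bond_price(face: float, coupon_rate: float, yield_rate: float, years: int) -> float:
    """Present value of a fixed-coupon bond with annual coupon payments."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + yield_rate) ** years
    return pv_coupons + pv_face

# A 10-year bond bought at par when yields were about 0.5%...
price_at_purchase = bond_price(100, 0.005, 0.005, 10)  # ~100.0
# ...then marked to market after yields rise to 4%.
price_after_rise = bond_price(100, 0.005, 0.04, 10)    # ~71.6

print(f"price at purchase:         {price_at_purchase:6.1f}")
print(f"price after yields hit 4%: {price_after_rise:6.1f}")
print(f"unrealised loss per 100:   {price_at_purchase - price_after_rise:6.1f}")
```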

SWI: What role can AI-based risk prediction play, for example, in the case of the surviving giant UBS?

D.S.: AI and mathematical models are irrelevant in the sense that (risk control) tools are useful only if there is a will to use them!

When there is a problem, many people always blame the models, the risk methods and so on. This is wrong. The problems lie with humans who simply ignore models and bypass them. There have been so many instances in the last 20 years. Again and again, the same kind of story repeats itself, with nobody learning the lessons. So AI can’t do much, because the problem is not about more “intelligence” but about greed and short-sightedness.

Despite the apparent financial gains, this is probably a bad and dangerous deal for UBS. The reason is that it takes decades to create the right risk culture, and they are now likely to do huge damage to morale through the big headcount reductions. Additionally, no regulator will give them an indemnity for regulatory or client anti-money-laundering violations inherited from the Credit Suisse side, which we know had very weak compliance. They will have to deal with surprising problems there for years.

SWI: Could we envision a more rigorous form of oversight of the banking system by governments – or even taxpayers – using data collected by AI systems?

D.S.: Collecting data is not the purview of AI systems. Collecting clean and relevant data is the most difficult challenge, much more difficult than machine learning and AI techniques. Most data is noisy, incomplete, inconsistent, very costly to obtain and to manage. This requires huge investments and a long-term view that is almost always missing. Hence crises occur every five years or so.

SWI: Lately, we’ve been hearing more and more about behavioural finance. Is there more psychology and irrationality in the financial system than we think?

D.S.: There is greed, fear, hope and… sex. Joking aside, people in banking and finance are in general super-rational when it comes to optimising their goals and getting rich. It is not irrationality; it is betting and taking big risks where the gains are privatised and the losses are socialised.

Strong regulations need to be imposed. In a sense, we need to make “banking boring” to tame the beasts that tend to destabilise the financial system by construction.

SWI: Is there a future in which machine learning can prevent the failure of “too big to fail” banks like Credit Suisse, or is that pure science fiction?

D.S.: Yes, an AI can prevent a future failure if the AI takes power and enslaves humans to follow its risk management prescriptions, with incentives dictated by the AI, as in many scenarios depicting the dangers of superintelligent AI. I am not kidding.

The interview was conducted in writing. It has been edited for clarity and brevity.



If you want to start a conversation about a topic raised in this article or want to report factual errors, email us at english@swissinfo.ch.

SWI swissinfo.ch - a branch of Swiss Broadcasting Corporation SRG SSR
