Data Science in Economic and Finance for Decision Makers with David Bolder, Director of Model Development and Economic Capital at Nordic Investment Bank 

In this latest episode of the FNA Talks podcast series, David Bolder and FNA’s Central Bank and Academia Programme Manager, Will Towning, discuss one of the most anticipated Fintech publications of 2021: Data Science in Economics and Finance for Decision Makers, edited and co-authored by the ECB’s Per Nymand-Andersen.

David, who co-authored the book, and Will review the key themes of David’s contributions, highlighting the practical implications and dispelling common misconceptions about where data science fits into a decision maker’s toolkit.

The book, published in March this year, provides an overview of how digital transformation and data science can support decision making and offers a variety of perspectives on managing digital data.

Contributions from FNA’ers Kimmo Soramäki, Ivana Ruffini, Mikhail Oet, Tuomas Takko and Adam Csabay also feature in the book. The chapter FNA authored, Prudential Stress Testing in Financial Networks, provides a taxonomy of organizational problems facing firms operating in financial markets and offers a comprehensive approach to information system design that addresses stress testing in networks.

The FNA Team’s work sits amongst contributions from over 20 global experts from both the private and public sectors.

Listen and subscribe via:

Spotify

iTunes

Podcast Transcript: 

Will Towning: 

Good day, listeners and guests, and welcome to FNA Talks, a technology update with FNA’ers and friends. My name is Will Towning and it is my pleasure to be your host for today’s episode.

 

At FNA Talks, we are drawing on the experience of key fintech, regtech and suptech authorities to discuss trends and developments which are defining the technology and innovation landscapes. 

 

Today, I am joined by David Bolder from the Nordic Investment Bank. David is a co-author of a recently published and important book titled Data Science in Economics and Finance for Decision Makers, which will be our focus for today’s episode.

 

Welcome David, and thank you for joining us at FNA Talks. 

 

David Bolder: 

Thank you. 

 

Will Towning: 

To kick things off, please can you briefly introduce yourself and tell us a little bit more about your background.

 

David Bolder: 

Well, my name is David Bolder. I’m a quantitative analyst; that’s how I would identify myself. My educational background is a mix of things: I’ve studied mathematics, finance, management science and operational research in business at graduate level, and I’ve worked in quite a range of quantitative roles over the last 25 years or so in a number of different places.

 

For the first decade of my career, I was a researcher at the Canadian central bank; that’s where I started out. I spent eight years at the Bank for International Settlements in Switzerland, five years at the World Bank in DC, and currently I’m Director of Model Development and Economic Capital at the Nordic Investment Bank. I’ve got a few dozen publications and have written a couple of books on various topics. I’ve got another book, just by way of self-advertisement, coming out next year on economic capital. And I guess, to summarize, what I’m saying is I’m not a data scientist, but I’ve used a lot of these techniques in practical ways, along with a lot of other technical techniques, working on quantitative problems over the years.

 

Will Towning: 

Okay, thank you David. And turning to your contribution to the book, please could you share with us what your chapter covers, and why this is important for decision makers? 

 

David Bolder: 

Well, a few years ago, Per, who’s the editor of the book, asked me to speak at a risk conference, and the idea was basically to run some kind of presentation about machine learning. At the time, I was living in the United States, and machine learning had really captured the popular imagination. It was all over the media; it felt like you couldn’t open a newspaper or magazine without coming across something about machine learning. And as I said, although I’m not a data scientist, I’ve used these techniques fairly extensively over the years. And I have to be honest, I was a bit dismayed, and often a little disappointed, about how these ideas were being represented and discussed. So, given the general confusion, I had this idea: maybe what I should do is take it back to something that people understand a bit better: classical statistics.

 

Admittedly, not everyone is entirely comfortable with statistics either, but most business, economics, and finance students have taken at least one or two courses in statistics over the years. Some of them have worked very hard to forget everything they learned in those courses, but it’s still back there in the back of the mind.

 

Another thing I think is important is that people understand statistics a bit better, and people don’t wax poetic about it, particularly in the media; quite the contrary. So, the main idea of the chapter is really to compare, more or less, the conceptual principles of the fields of data science and classical statistics.

And you might ask, well, why is that important? Well, it turns out these two fields have common origins, and they’re really attempting to solve the same underlying problem; they just do it with different strategies. Machine learning is a really powerful and useful collection of methods, but it’s still just a technique. It is not a panacea for all our problems, which is the way it is sometimes portrayed in the media. And to me, demystifying it somewhat, understanding its structure, its strengths and weaknesses, is the first step in being able to effectively use these really very useful tools. So, I would say that’s the main point of the chapter.

 

Will Towning: 

Okay, thank you. Yeah, that’s really interesting. I guess, building on this, what would you argue are the main insights for demystifying data science?

 

David Bolder:

Well, let me start out. An important part of the paper is borrowed from some work by a man named Leo Breiman, who was a trained statistician and an innovator in the field of machine learning. About 20 years ago, he wrote a really nice paper in which he introduced this idea of the two cultures of statistics and data science. I think it really helps us get to the point in terms of demystifying data science. The starting point is the common problem that both statistics and data science face:

 

You’ve got some quantity, let’s call it Y. It could be an oil price, it could be an exchange rate, it could be anything. And one is seeking to describe that quantity Y through some set of explanatory terms, let’s call them X, just for lack of originality. And this could be a whole bunch of different things, right? It could be supply factors, financial factors, economic variables; there’s a whole range of things this could be.

 

But the basic problem is that we want to explain Y in terms of X. This is the common problem that both are facing. And in between Y and X is the real world, which we don’t know and will never completely understand. The basic idea is that statisticians essentially try to replace the real world with a mathematical model. Of course, this is an artificial simplification, but it has some benefits, not least of which is the capacity to perform statistical inference. The data scientists, however, take a very different strategy. They don’t place a model in between the two; they basically take a detour around it and use an algorithm instead. And the way to determine which algorithm is best is to ask which one does the best job of predicting Y given X. That’s the idea.
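The two cultures David describes can be sketched in a few lines of Python. This is a toy illustration, not taken from the book or the chapter: the statistician replaces the real world between X and Y with a linear model, while the data scientist detours around it with an algorithm, here a simple nearest-neighbour rule, judged only by how well it predicts Y given X.

```python
import random

# Toy data: Y depends on X through an unknown "real world" process.
random.seed(42)
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [2.0 * x + random.gauss(0, 1) for x in xs]

# Statistician's culture: replace the real world with a model (here,
# a simple linear regression fitted by ordinary least squares).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
alpha = my - beta * mx

def predict_linear(x):
    return alpha + beta * x

# Data scientist's culture: skip the model and use an algorithm
# (1-nearest neighbour), chosen purely for how well it predicts Y given X.
def predict_knn(x):
    return min(zip(xs, ys), key=lambda p: abs(p[0] - x))[1]

# Both should land near the true value 2 * 5 = 10, by different routes.
print(predict_linear(5.0), predict_knn(5.0))
```

The point of the sketch is that neither route is "the" answer: the linear fit yields interpretable parameters (alpha, beta), while the nearest-neighbour rule yields only predictions.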

 

Now, both are really smart, very clever ideas, but they have very different fundamental implications. The statistical model basically trades away some predictive accuracy and flexibility for the ability to make statements about model and parameter uncertainty. And that’s the idea of statistical inference.

 

Data science models are much more flexible, and they’re much stronger in prediction, but often they lose something in terms of interpretability. And this requires a bit of caution to avoid tripping over the incremental flexibility that comes with these models.

 

The important point, though, I think, is this central idea: it’s not really a binary thing. We can think of statistical and data science models as lying on some kind of spectrum, and they complement one another. So those would be, I think, the two points that are important for demystifying the field.

 

Will Towning: 

Thank you, David. Elaborating on what we’ve just touched on, what advice or recommendations would you offer to decision makers starting out in data science?

 

David Bolder: 

Well, the first thing I would say is: don’t delay. The field of data science is pretty big, and it’s getting bigger all the time, and it’s not a particularly trivial thing for a decision maker to get a handle on all the ideas. You’ve got whole collections of algorithms, and they’ve got crazy names: penalized regressions, tree-based methods, support vector machines, neural networks, etc. There’s a lot of complexity there. There are different types of learning and different ensemble techniques: boosting, bagging, stacking. There’s a lot to get your head around. You can go out as a decision maker and hire a very qualified data scientist, but you are the bottleneck as the person making the decision, and you need to gain some familiarity and comfort with these techniques, because otherwise the whole thing essentially becomes a black box for you. And a black box, something you don’t fully understand but that is giving you insights into a process, is, in my view, a recipe for making suboptimal decisions, or even getting things wrong. So, it’s better to start now and gain concrete experience.

 

And the second piece of advice I would give is to embrace the idea that statistical learning techniques, or machine learning techniques, aren’t automatically the best solution to a problem. This is partly technical, but primarily conceptual. The question is: what kind of decision are you taking? For some decisions you need to form a high-quality prediction, and that’s what’s really important to you; that might push you towards a machine learning technique. But you might also be interested in understanding the nature of various parameters, understanding various relationships and how strong those relationships are, and in that case you might be more likely to use a statistical technique.

 

The important point here, I think, is that you need to invest time to understand what your problem is and which technique is most appropriate. Think about it in the context of a workman who comes to your house when something is broken. They don’t reach into their toolbox and pull out a tool before they even understand what problem they’re facing. That’s not going to work, right? In our case, it’s exactly the same. You want to identify the problem, understand the problem, and then open up your toolbox and say, okay, which tool should I pull out to solve it? So, to summarize these two points: first, don’t delay, get into it as quickly as possible, there’s a lot to learn; and second, understand that just because you’re learning and adding these machine learning techniques to your toolbox, it doesn’t necessarily mean any one of them is always the right one.

 

Will Towning:  

Thank you, David. Some really interesting insights and things to think about when starting out in data science. So, returning to the book: if you boil it down, what are the core lessons you think decision makers should draw from your chapter?

 

David Bolder: 

Well, I think there are two main points. Let’s come back to this idea of placing statistical models and data science techniques on a spectrum; I think that would really help us understand this. Some models are decidedly about inference, say a generalised linear model, of which the very well-known linear regression is a member. Let’s put them on the left-hand side of the spectrum. Other methods, such as neural networks, which you’ve probably heard of, are relatively complicated techniques, not so easy to understand, and very focused on prediction. Let’s put them on the right-hand side of our spectrum. In between these two extremes, we’ve got the whole field of data science and statistics, and we can order all of our techniques from left to right. What’s important to understand is that we can’t just slice down the middle and say, okay, everything to the left is statistics and everything to the right is data science. There is some black and white, like I said, the linear regression and the neural network, but there is also a lot of grey. I don’t think this is a particularly controversial idea for most practitioners, but it’s not generally well understood by non-experts. Often you hear statements like, okay, data science is entirely independent of statistics, it’s another field; or you hear, well, data science is nothing but glorified statistics. You hear both, and of course they’re going in different directions, and both are basically wrong.

 

The other point in my chapter that I think is important is related to this: it’s what’s called the bias-variance trade-off. Again, this is very well known to practitioners, but maybe not so well appreciated by your typical decision maker. The idea is that there’s a notion of bias and a notion of variance, and you can’t make both of them small at the same time, which is not particularly appealing. In the statistical setting, we often accept a certain amount of bias. To come back to linear regression: we use the assumption of linearity, and not everything is linear; we often use linearity even when non-linearities are present. Now, this is not always a great assumption, but it has one very important benefit: it’s stable, meaning you can reshuffle the data set, or collect another data set from your population, and you’re going to get very similar results. So there’s some bias, but low variance. This is a high-bias, low-variance solution.

 

Data scientists go the other way: they typically have much lower bias. Their models are very flexible and have very rich forms that capture a lot of complexity. But the flip side is that sometimes they’re too flexible. They’re so flexible that they can over-describe a particular pattern, and so, when you reshuffle or collect a new data sample, the results can change in an important way. This is what’s called an overfitting problem. Data science basically follows a low-bias, high-variance strategy, as opposed to statisticians, who follow a high-bias, low-variance strategy; low bias and low variance together is not easy to accomplish. The point, I think, of the chapter and these two ideas is that you’ve got all these pieces that need to be worried about. You need to think about the notion of inference, the notion of prediction, bias, variance, and all of these are elements within the statistics and data science world that need to be managed and understood.
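The trade-off David describes can be made concrete with a small simulation, again a hypothetical sketch rather than anything from the chapter: refitting on reshuffled samples barely moves a linear fit's prediction, while a 1-nearest-neighbour predictor, flexible enough to memorise every training point, swings noticeably with each new sample.

```python
import random

random.seed(0)

def make_sample(n=50):
    # Fresh draw from the same population: Y = 2X plus noise.
    xs = [random.uniform(0, 10) for _ in range(n)]
    ys = [2.0 * x + random.gauss(0, 1) for x in xs]
    return xs, ys

def fit_linear(xs, ys):
    # High bias (assumes linearity), low variance: ordinary least squares.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def fit_1nn(xs, ys):
    # Low bias, high variance: memorises the sample, so it fits the noise too.
    return lambda x: min(zip(xs, ys), key=lambda p: abs(p[0] - x))[1]

# Refit each model on many reshuffled samples and watch the prediction at x = 5.
linear_preds, knn_preds = [], []
for _ in range(200):
    xs, ys = make_sample()
    linear_preds.append(fit_linear(xs, ys)(5.0))
    knn_preds.append(fit_1nn(xs, ys)(5.0))

def spread(vals):
    # Variance of the predictions across resamples.
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

# The 1-NN predictions scatter far more across resamples than the linear ones.
print(spread(linear_preds), spread(knn_preds))
```

The linear fit is biased towards straight lines but changes very little from sample to sample; the nearest-neighbour rule tracks each sample's noise, which is exactly the overfitting behaviour described above.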

 

Will Towning: 

Thank you very much, David. It’s really interesting. And before we conclude, I’d like to spend a little time thinking about the future. Could you please share with us your vision for Data Science and how you see it evolving over the coming years? 

 

David Bolder: 

Sure. 25 years ago, at the beginning of my career, I think I can say quite honestly that data science wasn’t really a field yet. There were some ideas and techniques that were often called data mining or knowledge discovery, and I would say that a lot of those ideas were viewed with a certain amount of disdain by the statistical community, and even by the practitioner community. It’s quite clear that over the course of the last quarter of a century this has really changed. But what’s interesting and important to understand, I think, is that the two fields still remain surprisingly distant.

 

Data science is now unequivocally its own field. If you don’t believe me, just look at the number of publications, the number of data science journals, the number of master’s degrees out there offering an opportunity to learn about this area. So, it is its own field. But despite the fact that they have common origins and are facing similar problems, statistics and machine learning have surprisingly little interaction. Why that is, is not one hundred percent clear to me. It perhaps has something to do with academic structures, I don’t know. I think there’s a little bit of misunderstanding, and maybe even a slight amount of animosity, between the two fields because of the way they approach the problems.

 

But, as a practitioner, I see this very differently. My day job is to solve nasty, thorny problems, and what I need to do that are tools. When I look at this, I view it as a spectrum, as a set of tools: in some cases I could use a data science technique, sometimes I could use statistical inference techniques, and sometimes I could use both. And I would argue that there’s a lot of value in integrating these two fields more; I think both sides can benefit from that interaction and mutual understanding. So, that would be my view and vision for data science. It may be a little bit selfish, but that’s certainly my perspective.

 

Will Towning: 

Thank you, David. It’s been a pleasure speaking with you, and we’re very grateful for the insights and lessons you have shared with us today. So, I’m now very much looking forward to reading your chapter. 

 

David Bolder: 

Thank you very much, it’s been a real pleasure. Thanks for inviting me.

Will Towning: 

And as always, many thanks to our listeners for your attention. 

 

If you have any questions or comments for David or myself, please do not hesitate to reach out on social media or write to us at Will@FNA.fi.

 

 
