The rise of fintech services and cryptocurrencies has changed modern banking in several ways, and banks face an increasing number of challenges as various third-party payment processors come between financial institutions and their traditional customers. Credit scoring systems widely used in the United States and Europe are based on so-called “hard” information: bill payments, pay stubs, and how much of your current credit limit you are using.
The researchers point out that so-called “hard” credit scores have two important problems. First, banks tend to reduce the availability of credit during a recession – precisely when people need help the most. Second, it can be difficult for businesses and individuals without a credit history to start building one. There is a catch-22 in the system: what you need to persuade an institution to lend you money is a credit history, which you don’t have because no one will lend you money.
Having identified two flaws in the existing system, the authors write:
The rise of the Internet enables the use of new types of non-financial customer data, such as browsing histories and the online shopping behavior of individuals, or customer reviews for online sellers.
The literature suggests that this non-financial data is valuable for financial decision making. Berg et al. (2019) show that easy-to-collect information such as the ‘digital footprint’ (email provider, mobile operator, operating system, etc.) performs as well as traditional credit scores in assessing borrower risk. In addition, there are complementarities between financial and non-financial data: combining credit scores and the digital footprint further improves default predictions. Consequently, the incorporation of non-financial data can lead to significant efficiency gains in financial intermediation.
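The “complementarity” claim is easy to illustrate. Below is a minimal, self-contained sketch on synthetic data — the variable names, effect sizes, and the simple averaged score are my own invented stand-ins, not Berg et al.’s actual features or model — showing how a rank-based AUC improves when a traditional score and a footprint-style signal are combined:

```python
import random

random.seed(0)

def auc(scores, labels):
    """Rank-based AUC: the probability that a randomly chosen defaulter
    receives a higher (riskier) score than a randomly chosen non-defaulter."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic borrowers: default risk driven by two partially independent
# signals -- a traditional "hard" score and a digital-footprint signal.
# Both signals and their weights are hypothetical.
n = 2000
hard = [random.gauss(0, 1) for _ in range(n)]       # traditional credit-score signal
footprint = [random.gauss(0, 1) for _ in range(n)]  # e.g., device/email-provider signal
labels = [1 if (0.6 * h + 0.6 * f + random.gauss(0, 1)) > 1.0 else 0
          for h, f in zip(hard, footprint)]

combined = [0.5 * h + 0.5 * f for h, f in zip(hard, footprint)]
print("hard score alone: AUC = %.3f" % auc(hard, labels))
print("footprint alone:  AUC = %.3f" % auc(footprint, labels))
print("combined:         AUC = %.3f" % auc(combined, labels))
```

On data generated this way, the combined score ranks defaulters above non-defaulters more often than either signal alone, which is the kind of efficiency gain the quoted passage describes — though on real lending data the size of that gain is an empirical question.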
In a blog post on the IMF website, the authors also write: “Recent research papers show that, when powered by artificial intelligence and machine learning, these alternative data sources are often superior to traditional credit-scoring methods.”
While the authors of this paper are clearly familiar with banking systems and finance, they are just as clearly unaware of the latest research in AI. This is a bad idea in general, and a truly terrible idea right now.
The first major problem with this proposal is that there is no evidence AI is capable of this task, or that it will be anytime soon. In an interview with The Guardian in early summer, Microsoft AI researcher Kate Crawford made harsh remarks about the current reality of artificial intelligence, despite working for one of the leaders in the field: “AI is neither artificial nor intelligent. It is made from natural resources, and it is people performing tasks that make the systems appear autonomous.”
Asked about the specific problem of bias in AI, Crawford said:
Time and time again, we see these systems producing errors – women offered less credit by creditworthiness algorithms, Black faces mislabeled – and the response has been: “We just need more data.” But I’ve tried to look at these deeper logics of classification, and you start to see forms of discrimination not just when the systems are applied, but in how they are built and trained to see the world. Training data sets used for machine learning software casually categorize people into just one of two genders, label people according to their skin color into one of five racial categories, and attempt, based on how people look, to assign moral or ethical character. The idea that you can make these determinations based on appearance has a dark past, and unfortunately the politics of classification has become baked into the substrates of AI.
This is not just one person’s opinion. Gartner previously predicted that 85% of AI projects through 2022 “will produce erroneous results due to bias in the data, algorithms, or the teams responsible for managing them.” A recent Twitter hackathon found evidence that the site’s photo-cropping algorithm was implicitly biased against the elderly, the disabled, Black people, and Muslims, and that it frequently cropped them out of photographs. Twitter has since stopped using the algorithm, because these kinds of bias problems are in no one’s best interest.
While my own research is a long way from fintech, I’ve spent the last 18 months experimenting with AI-powered upscaling tools, as regular ExtremeTech readers know. I’ve used Topaz Video Enhance AI extensively and have experimented with other neural networks as well. While these tools are capable of remarkable improvements, it’s a rare video that can simply be dropped into TVEAI with gold coming out the other side.
Here is frame 8829 of the Star Trek: Deep Space Nine episode “Defiant.” The frame quality is reasonable considering the source material, but there is a glaring error on Jadzia Dax. This is the output of a single model; I am blending the output of multiple models to improve the early seasons of DS9, and in this case every model I tried stumbled on this scene in one way or another. Shown here is the medium-quality Artemis output.
This specific distortion occurs once in the entire episode. Most Topaz models (and every non-Topaz model I have tested) exhibit the problem, and it has proven resistant to repair. There are not many pixels representing her face, and the original MPEG-2 source quality is poor. I have yet to find an AI model that correctly handles an entire episode from Seasons 1–3, but this is by far the worst distortion in the episode. It is also only on screen for a few seconds before she moves and the situation improves.
The best repair output I’ve managed looks like this, using TVEAI’s Proteus model:
There’s a reason I’m using video upscaling to talk about fintech: AI is still far from perfect, in every field of study. The “fix” above is flawed, and it took hours of careful testing to achieve. Behind the scenes of what various companies smugly call “AI,” there are a lot of humans doing an awful lot of work. That’s not to say there isn’t real progress, but these systems are nowhere near as foolproof as the hype cycle would have you believe.
Right now, we are at a point where applications can produce amazing results, even to the point of making real scientific discoveries. Humans, however, are still deeply involved in every step of the process. Even then, there are mistakes. Correcting this particular error requires substituting the output of an entirely different model for the duration of the scene. If I hadn’t watched the episode carefully, I might have missed the problem entirely. AI in general has a similar problem. The companies that have battled bias in their AI networks never intended to put it there; it arose from bias in the underlying data sets themselves. And the problem with those data sets is that if you don’t look at them closely, you may come away believing your output is all images like the one below, as opposed to the damaged scene above:
Even if the AI component of this equation were ready to build on, privacy is another major concern. Companies may be experimenting with tracking various aspects of “soft” consumer behavior, but the idea of tying your credit score to your web history is uncomfortably similar to the social credit score China now assigns to every citizen. In that country, saying the wrong things or visiting the wrong websites can result in family members being denied loans or access to certain social events. Even if the system envisioned here is not so draconian, it is still a step in the wrong direction.
The United States does not have the legal framework that would be necessary to deploy a credit monitoring system like this. Any bank or financial institution that wishes to use AI to judge an applicant’s creditworthiness based on browser and purchase history should be regularly audited for bias against any group. The researchers who wrote this paper for the IMF talk about vacuuming up people’s purchase histories without considering that many people use the internet to buy things they are too embarrassed to walk into a store and buy. Who decides which stores and suppliers matter and which don’t? Who monitors the data to ensure that deeply embarrassing information is not disclosed, whether deliberately or by hackers?
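To sketch what the simplest form of such an audit could look like: compute the model’s approval rate for each group and compare the lowest rate to the highest, using the “four-fifths rule” (a disparate-impact threshold borrowed from US employment guidelines) as a rough pass/fail line. The group labels and numbers below are invented for illustration:

```python
from collections import defaultdict

def disparate_impact(decisions):
    """decisions: list of (group, approved) pairs, approved in {0, 1}.
    Returns per-group approval rates and the ratio of the lowest rate to
    the highest -- the 'four-fifths rule' flags ratios below 0.8."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    rates = {g: approved[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical model decisions: group A approved 70 of 100 applicants,
# group B approved 50 of 100.
decisions = ([("A", 1)] * 70 + [("A", 0)] * 30 +
             [("B", 1)] * 50 + [("B", 0)] * 50)
rates, ratio = disparate_impact(decisions)
print(rates)            # {'A': 0.7, 'B': 0.5}
print(round(ratio, 3))  # 0.714 -- below 0.8, so this model would fail the audit
```

A real audit would of course go far beyond approval rates — error rates, feature provenance, and proxy variables all matter — but even this trivial check requires something the proposal lacks: a regulator empowered to demand the decisions and the group labels.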
The fact that non-bank financial institutions want to use some of this data (or already use it) is not a reason to allow it. It is a reason to stay as far away from these organizations as possible. AI is not ready for this. Our privacy laws are not ready for this. The consistent message from reputable, sober researchers working in the field is that we are nowhere near ready to hand these vital decisions over to a black box. The authors of this paper may be absolute wizards of banking, but their optimism about the near-term state of AI networks is misplaced.
Few things are more important in modern life than credit and financial history, and that’s reason enough to move exceptionally slowly when it comes to AI. Give it a decade or two and check back then, or we’ll spend the next few decades cleaning up the injustices inflicted on various people literally through no fault of their own.