Brave Integrates IPFS to Enable Users to Seamlessly Browse The Decentralized Web

Advancing the transition to a decentralized Web, IPFS integration on Brave’s desktop browser increases content availability and Internet resilience 

IPFS, the peer-to-peer hypermedia protocol designed to make the Web faster, safer, and more open, has been integrated into Brave, the fast, privacy-oriented browser, reinventing the Web for users, publishers and advertisers. 

With today’s Brave desktop browser update (version 1.19), Brave’s 24 million monthly active users can now access content directly from IPFS by resolving ipfs:// URIs via a gateway or by installing a full IPFS node in one click. Users who install a full node can load content over IPFS’ p2p network, hosted on their own node. Integrating IPFS provides Brave users with a significantly enhanced browsing experience, increasing the availability of content, offloading server costs from the content publisher, and improving the overall resilience of the Internet.
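
To make the two access modes concrete, here is a minimal sketch in Python (assuming the widely used requests package; the CID is a hypothetical placeholder, and the URLs are the public ipfs.io gateway and the default local-node gateway port): the same content identifier can be fetched through a public HTTP gateway, or from a locally installed full node that retrieves it over the p2p network.

```python
# Minimal sketch of the two IPFS access modes (CID is a placeholder).
import requests

CID = "bafy...placeholder"  # hypothetical content identifier

def fetch_via_public_gateway(cid: str) -> bytes:
    # A gateway resolves ipfs://<cid> on the user's behalf over HTTP.
    return requests.get(f"https://ipfs.io/ipfs/{cid}", timeout=30).content

def fetch_via_local_node(cid: str) -> bytes:
    # With a full node installed, content is fetched over the p2p network
    # and served from the node's own local gateway (port 8080 by default).
    return requests.get(f"http://127.0.0.1:8080/ipfs/{cid}", timeout=30).content
```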

Molly Mackinlay, Project Lead at IPFS, said, “Bringing the benefits of the dWeb to Brave users, IPFS’ efforts to remove systemic data censorship by corporations and nation-states are now strengthened through the integration with Brave. Today, Web users across the world are unable to access restricted content, including, for example, parts of Wikipedia in Thailand, over 100,000 blocked websites in Turkey, and critical access to COVID-19 information in China. Now anyone with an internet connection can access this critical information through IPFS on the Brave browser.”

In a further aspect of the integration, projects building on IPFS, such as the app development platforms Textile and Fleek, will automatically enable anyone to deploy a website or dApp accessible in Brave.

Brian Bondy, CTO and co-founder of Brave, said, “We’re thrilled to be the first browser to offer a native IPFS integration with today’s Brave desktop browser release. Providing Brave’s 1 million+ verified content creators with the power to seamlessly serve content to millions of new users across the globe via a new and secure protocol, IPFS gives users a solution to the problem of centralized servers creating a central point of failure for content access. IPFS’ innovative content addressing uses Content Identifiers (CIDs) to form an address based on the content itself as opposed to locating data based on the address of a server. Integrating the IPFS open-source network is a key milestone in making the Web more transparent, decentralized, and resilient.” 
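
The content-addressing idea in that quote can be illustrated with a toy example. This is a simplification, not the real CID algorithm (actual CIDs add chunking, multihash, and multibase encoding on top), but it shows why an address derived from the data itself is location-independent:

```python
# Toy illustration of content addressing (not the real CID format).
import hashlib

def toy_address(content: bytes) -> str:
    # The "address" is a hash of the bytes themselves.
    return hashlib.sha256(content).hexdigest()

# The same content yields the same address no matter who hosts it,
# so no single server acts as a central point of failure.
assert toy_address(b"hello dweb") == toy_address(b"hello dweb")
```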

This is the initial implementation of IPFS in Brave, backed by a budding community of over four thousand IPFS contributors around the world. Future collaborations, striving to give users full control of their online experience, will facilitate automatic redirects from DNSLink websites to the native IPFS version, the ability to “co-host” a website, features to easily publish to IPFS, and much more.

About IPFS

IPFS is a peer-to-peer network and protocol designed to make the web faster, safer, and more open. IPFS upgrades the web to work peer to peer, addressing data by what it is instead of where it’s located on the network, or who is hosting it.

About Brave

Brave Software’s fast, privacy-oriented browser, combined with its blockchain-based digital advertising platform, is reinventing the Web for users, publishers, and advertisers. Users get a private, speedier web experience with much longer battery life, publishers increase their revenue share, and advertisers achieve better conversion. Users can opt into privacy-respecting ads that reward them with a frequent flyer-like token they can redeem or use to tip or contribute to publishers and other content creators. The Brave solution is a win-win for everyone who has a stake in the open Web and who is weary of giving up privacy and revenue to the ad-tech intermediaries. Brave currently has over 24 million monthly active users and over 1 million Verified Publishers. Brave Software was co-founded by Brendan Eich, creator of JavaScript and co-founder of Mozilla (Firefox), and Brian Bondy, formerly of Khan Academy and Mozilla.

MAS Enhances Guidelines to Combat Heightened Cyber Risks

The Monetary Authority of Singapore (MAS) today issued revised Technology Risk Management Guidelines (Guidelines) to keep pace with emerging technologies and shifts in the cyber threat landscape.

2     The revised Guidelines focus on addressing technology and cyber risks in an environment of growing use by financial institutions (FIs) of cloud technologies, application programming interfaces, and rapid software development. The Guidelines reinforce the importance of incorporating security controls as part of FIs’ technology development and delivery lifecycle, as well as in the deployment of emerging technologies. 

3     The recent spate of cyber attacks on supply chains, which targeted multiple IT service providers through the exploitation of widely-used network management software, is a clear indication of a worsening cyber threat environment. The revised Guidelines set out the following enhanced risk mitigation strategies for FIs –

  • to establish a robust process for the timely analysis and sharing of cyber threat intelligence within the financial ecosystem; and
  • to conduct cyber exercises to allow FIs to stress test their cyber defences by simulating the attack tactics, techniques, and procedures used by real-world attackers.

4     In light of FIs’ growing reliance on third party service providers, the revised Guidelines set out the expectation for FIs to exercise strong oversight of arrangements with third party service providers, to ensure system resilience as well as maintain data confidentiality and integrity.

5     The revised Guidelines provide additional guidance on the roles and responsibilities of the board of directors and senior management –

  • the board and senior management should ensure that a Chief Information Officer and a Chief Information Security Officer, with the requisite experience and expertise, are appointed and accountable for managing technology and cyber risks; and
  • the board should include members with the relevant knowledge to provide effective oversight of technology and cyber risks.

6     The revised Guidelines have incorporated feedback received from the public consultation conducted in 2019, MAS’ engagement with the industry, and MAS’ Cyber Security Advisory Panel (CSAP). [1]  MAS thanks all respondents for the invaluable suggestions in shaping the Guidelines.

7     Mr Tan Yeow Seng, Chief Cyber Security Officer, MAS, said, “Technology now underpins most aspects of financial services. Not only are financial institutions adopting new technologies, they are also increasingly reliant on third party service providers. The revised Guidelines set out MAS’ higher expectations in the areas of technology risk governance and security controls in financial institutions.”

***

Additional information

The Technology Risk Management Guidelines are a set of best practices that provide FIs with guidance on the oversight of technology risk management, practices and controls to address technology and cyber risks. MAS expects FIs to observe the guidelines as this will be considered in MAS’ risk assessment of the FIs.

The Guidelines should be read with the Notice on Technology Risk Management and Notice on Cyber Hygiene.

  [1] The CSAP, which was formed in 2017, comprises leading cyber security experts and thought leaders from around the world. The panel advises MAS on strategies to enhance cyber resilience in the financial system.

Reserve Bank of New Zealand committed to action as it responds to data breach

The Governor of the Reserve Bank of New Zealand, Adrian Orr, says the recent malicious and illegal breach of a file sharing application used by the Bank is significant and has the Bank’s full attention.

Mr Orr says New Zealand’s financial system and institutions remain sound, and Te Pūtea Matua is open for business. The standalone File Transfer Application system that was breached has been secured and closed.

“We apologise unreservedly to all of those impacted by the breach. Personally, I own this issue and I am disappointed and sorry,” Mr Orr says.

“Our investigation makes it clear we are dealing with a significant data breach. While a malicious third party has committed the crime, and we believe service provisions have fallen short of our agreement, the Bank has also fallen short of the standards expected by our stakeholders.”

A detailed forensic cyber investigation is underway and RBNZ is working directly with affected stakeholders whose information may have been breached.

“We recognise the public interest in this incident and we acknowledge there are serious questions that need to be answered about how this incident occurred and how to strengthen our systems and processes,” says Mr Orr.

“In addition to the forensic cyber investigation currently underway, we have appointed an independent third party to undertake a comprehensive general review of this incident. We will be as transparent and clear as possible as this progresses, and will release the review’s terms of reference shortly.”

“Our immediate focus is on working directly with system users and those who may have had their information compromised. It is a complex process and accuracy and security are important. As our investigations progress, we are prioritising direct engagement with institutions and individuals affected. We thank stakeholders for their patience and understanding.

“Be assured, we are taking action. We are working closely with public authorities and utilising international experts as we respond. We are doing so in a whole of Government framework, utilising the National Security System.”

“We are not in a position to provide further details on the investigation at this time as it could adversely affect the investigation and the steps being taken to mitigate the breach,” says Mr Orr.

Ongoing updates on the investigation process will be provided via the Reserve Bank Data Breach Response page and email service.

The final countdown: completing sterling LIBOR transition by end-2021

After many years of preparation, 2021 is the critical year for firms to complete their transition away from LIBOR.

The LIBOR administrator, ICE Benchmark Administration, is consulting on ceasing publication of all sterling LIBOR settings at the end of 2021, leaving just one year for firms to remove their remaining reliance on these benchmarks.

This issue touches numerous parts of the economy. LIBOR has been embedded in the financial system for many years, used to calculate interest in everything from corporate borrowing and intra-group transfers, to complex derivatives. It is also utilised in accounting practices, system infrastructure and other supporting functions. All of these will need to be ready to use alternative reference rates, such as SONIA, by the end of this year.

The Bank of England and the Financial Conduct Authority (FCA) have set out clear expectations for regulated firms to remove their reliance on LIBOR in all new business and in legacy contracts, where feasible. The primary way for market participants to have certainty over the economic terms of their contracts is to actively transition them away from LIBOR.

In support of this, the Working Group on Sterling Risk-Free Reference Rates (the Working Group) has published an update to its priorities and roadmap for the final year of transition to help businesses to finish planning the steps they will need to take in the coming months.

The Working Group’s top priority is for markets and their users to be fully prepared for the end of sterling LIBOR by the end of 2021. In particular the Working Group has recommended that, from the end of March 2021, sterling LIBOR is no longer used in any new lending or other cash products that mature after the end of 2021. All businesses with existing loans in sterling should already have heard from their lenders about the transition, and those seeking a new or refinanced loan today should be offered a non-LIBOR alternative. Throughout the remainder of the year, existing contracts linked to sterling LIBOR should be actively transitioned where possible.

In addition, the Working Group has recommended that firms no longer initiate new linear derivatives linked to sterling LIBOR after the end of March 2021, other than for risk management of existing positions or where they mature before the end of 2021.

The Working Group, the Bank of England, and the FCA have made clear that, in future, they anticipate that the large majority of sterling markets will be based on SONIA compounded in arrears, to provide the most robust foundation for the overall market structure. However, in certain specific parts of the market, participants may need access to alternative rates. In this context, the Working Group welcomes the development of term SONIA reference rates (TSRRs), which are beginning to be made available by various providers. Alongside this, the Working Group has engaged closely with the FICC Markets Standards Board (FMSB) to support development of a market standard for appropriately limited use of TSRRs, consistent with the Working Group’s objectives and existing recommendations on use cases of benchmark rates. The proposed FMSB standard is under review by key stakeholders during January and is expected to be released for public comment in February.
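
For readers unfamiliar with the mechanics, “compounded in arrears” means the rate for an interest period is only known at the period’s end, by compounding each day’s overnight fixing. A minimal sketch of the standard calculation, assuming an ACT/365 day count as is conventional for sterling and using made-up fixings:

```python
# Illustrative compounding in arrears of an overnight rate
# (ACT/365 day count; the fixings below are made up).

def compounded_in_arrears(fixings, days_applied, day_count=365):
    """fixings: daily overnight rates (decimals); days_applied: calendar
    days each fixing applies (e.g. Friday's fixing covers 3 days)."""
    growth = 1.0
    for rate, days in zip(fixings, days_applied):
        growth *= 1 + rate * days / day_count
    total_days = sum(days_applied)
    return (growth - 1) * day_count / total_days  # annualised period rate

# One business week: Friday's fixing rolls over the weekend.
rate = compounded_in_arrears([0.0005, 0.0005, 0.0006, 0.0006, 0.0005],
                             [1, 1, 1, 1, 3])
print(f"compounded period rate: {rate:.6%}")
```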

The Bank of England and the FCA continue to work closely with firms to secure a smooth transition. In particular, supervisors of regulated firms will continue to expect transition plans to be executed in line with industry-recommended timelines across sterling and other LIBOR currencies. Senior managers with responsibility for the transition should expect close supervisory engagement on how they are ensuring their firm’s progress relative to industry milestones.

Tushar Morzaria, Chair, Working Group on Sterling Risk-Free Reference Rates, commented: “In line with the Working Group’s milestones for Q3 2020, lenders should now be in a position to offer loans based on SONIA or other LIBOR alternatives. I encourage all end users to engage with their lenders and trade associations as early as possible to ensure a smooth transition.”

Andrew Hauser, Executive Director for Markets at the Bank of England commented: “As we move into the final year for sterling LIBOR transition, it is crucial that firms take action now to make certain they are prepared well in advance of the end of 2021.”

Edwin Schooling Latter, Director of Markets and Wholesale Policy at the FCA, commented: “The end-game for LIBOR is now increasingly clear. Firms should now have everything they need to shift new business to SONIA and to complete their plans for transition of legacy exposures. There is no longer any reason for delay.”

Supporting Responsible Use of AI and Equitable Outcomes in Financial Services

Governor Lael Brainard

At the AI Academic Symposium hosted by the Board of Governors of the Federal Reserve System, Washington, D.C. (Virtual Event)

Today’s symposium on the use of artificial intelligence (AI) in financial services is part of the Federal Reserve’s broader effort to understand AI’s application to financial services, assess methods for managing risks arising from this technology, and determine where banking regulators can support responsible use of AI and equitable outcomes by improving supervisory clarity.1

The potential scope of AI applications is wide ranging. For instance, researchers are turning to AI to help analyze climate change, one of the central challenges of our time. With nonlinearities and tipping points, climate change is highly complex, and quantification for risk assessments requires the analysis of vast amounts of data, a task for which the AI field of machine learning is particularly well-suited.2 The journal Nature recently reported the development of an AI network which could “vastly accelerate efforts to understand the building blocks of cells and enable quicker and more advanced drug discovery” by accurately predicting a protein’s 3-D shape from its amino acid sequence.3

Application of AI in Financial Services

In November 2018, I shared some early observations on the use of AI in financial services.4 Since then, the technology has advanced rapidly, and its potential implications have come into sharper focus. Financial firms are using or starting to use AI for operational risk management as well as for customer-facing applications. Interest is growing in AI to prevent fraud and increase security. Every year, consumers bear significant losses from frauds such as identity theft and imposter scams. According to the Federal Trade Commission, in 2019 alone, “people reported losing more than $1.9 billion to fraud,” which represents a mere fraction of all fraudulent activity banks encounter.5 AI-based tools may play an important role in monitoring, detecting, and preventing such fraud, particularly as financial services become more digitized and shift to web-based platforms. Machine learning-based fraud detection tools have the potential to parse through troves of data—both structured and unstructured—to identify suspicious activity with greater accuracy and speed, and potentially enable firms to respond in real time.
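
As a purely illustrative sketch of this kind of tool (not any particular bank’s system), an unsupervised anomaly detector can surface transactions that deviate from learned patterns; the example below uses scikit-learn’s IsolationForest on made-up transaction features.

```python
# Illustrative anomaly-based fraud flagging on synthetic data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per transaction: [amount, hour of day, merchant risk score].
normal = rng.normal(loc=[50.0, 14.0, 0.2], scale=[20.0, 4.0, 0.1],
                    size=(1000, 3))
suspicious = np.array([[5000.0, 3.0, 0.9]])  # large 3am transfer
transactions = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.001, random_state=0).fit(transactions)
flags = model.predict(transactions)  # -1 = anomalous, 1 = normal
print("flagged transaction indices:", np.where(flags == -1)[0])
```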

Machine learning models are being used to analyze traditional and alternative data in the areas of credit decisionmaking and credit risk analysis, in order to gain insights that may not be available from traditional credit assessment methods and to evaluate the creditworthiness of consumers who may lack traditional credit histories.6 The Consumer Financial Protection Bureau has found that approximately 26 million Americans are credit invisible, which means that they do not have a credit record, and another 19.4 million do not have sufficient recent credit data to generate a credit score. Black and Hispanic consumers are notably more likely to be credit invisible or to have an unscored record than White consumers.7 The Federal Reserve’s Federal Advisory Council, which includes a range of banking institutions from across the country, recently noted that nontraditional data and the application of AI have the potential “to improve the accuracy and fairness of credit decisions while also improving overall credit availability.”8

To harness the promise of machine learning to expand access to credit, especially to underserved consumers and businesses that may lack traditional credit histories, it is important to be keenly alert to potential risks around bias and inequitable outcomes. For example, if AI models are built on historical data that reflect racial bias or are optimized to replicate past decisions that may reflect bias, the models may amplify rather than ameliorate racial gaps in access to credit. Along those same lines, the opaque and complex data interactions relied upon by AI could result in discrimination by race, or even lead to digital redlining, if not intentionally designed to address this risk. It is our collective responsibility to ensure that as we innovate, we build appropriate guardrails and protections to prevent such bias and ensure that AI is designed to promote equitable outcomes. As Rayid Ghani notes, “…[A]ny AI (or otherwise developed) system that is affecting people’s lives has to be explicitly built to focus on increasing equity and not just optimizing for efficiency…[W]e need to make sure that we put guidelines in place to maximize the chances of the positive impact while protecting people who have been traditionally marginalized in society and may be affected negatively by the new AI systems.”9
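
One concrete guardrail, offered here only as a generic illustration rather than any regulatory standard, is to measure outcome disparities directly, for example by comparing a model’s approval rates across groups:

```python
# Generic disparity check on model decisions (illustrative only).
import numpy as np

def approval_rates(approved: np.ndarray, group: np.ndarray) -> dict:
    """approved: 0/1 model decisions; group: group label per applicant."""
    rates = {g: float(approved[group == g].mean()) for g in np.unique(group)}
    rates["max_gap"] = max(rates.values()) - min(rates.values())
    return rates

approved = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(approval_rates(approved, group))  # a large gap warrants investigation
```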

Black Box Problems

Recognizing the potential and the pitfalls of AI, let us turn to one of the central challenges to using AI in financial services—the lack of model transparency. Some of the more complex machine learning models, such as certain neural networks, operate at a level of complexity that offers limited or no insight into how the model works. This is often referred to as the “black box problem,” because we can observe the inputs the models take in, and examine the predictions or classifications the model makes based on those inputs, but the process for getting from inputs to outputs is obscured from view or very hard to understand.

There are generally two reasons machine learning models tend toward opacity. The first is that an algorithm rather than a human being “builds” the model. Developers write the initial algorithm and feed it with the relevant data, but do not specify how to solve the problem at hand. The algorithm uses the input data to estimate a potentially complex model specification, which in turn makes predictions or classifications. As Michael Tyka puts it, “[t]he problem is that the knowledge gets baked into the network, rather than into us. Have we really understood anything? Not really—the network has.”10 This is somewhat different from traditional econometric or other statistical models, which are designed and specified by humans.

The second is that some machine learning models can take into account more complex nonlinear interactions than most traditional models in ways that human beings would likely not be able to identify on their own.11 The ability to identify subtle and complex patterns is what makes machine learning such a powerful tool, but that complexity often makes the model inscrutable and unintuitive. Hod Lipson likens it to “meeting an intelligent species whose eyes have receptors [not] just for the primary colors red, green, and blue, but also for a fourth color. It would be very difficult for humans to understand how the alien sees the world, and for the alien to explain it to us.”12

The Importance of Context

While the black box problem is formidable, it is not, in many cases, insurmountable. The AI research community has made notable strides in explaining complex machine learning models—indeed, some of our symposium panelists have made major contributions to that effort. One important conclusion of that work is that there need not be a single principle or one-size-fits-all approach for explaining machine learning models. Explanations serve a variety of purposes, and what makes a good explanation depends on the context. In particular, for an explanation to “solve” the black box problem, it must take into account who is asking the question and what the model is predicting.

So what do banks need from machine learning explanations? The requisite level and type of explainability will depend, in part, on the role of the individual using the model. The bank employees that interact with machine learning models will naturally have varying roles and varying levels of technical knowledge. An explanation that requires the knowledge of a PhD in math or computer science may be suitable for model developers, but may be of little use to a compliance officer, who is responsible for overseeing risk management across a wide swath of bank operations.

The level and type of explainability also depends on the model’s use. In the consumer protection context, consumers’ needs and fairness may define the parameters of the explanation. Importantly, consumer protection laws require lenders who decline to offer a consumer credit—or offer credit on materially worse terms than offered to others—to provide the consumer with an explanation of the reasons for the decision. That explanation serves the important purposes of helping the consumer to understand the basis of the determination as well as the steps the consumer could take to improve his or her credit profile.13

Additionally, to ensure that the model comports with fair lending laws that prohibit discrimination, as well as the prohibition against unfair or deceptive practices, firms need to understand the basis on which a machine learning model determines creditworthiness. Unfortunately, we have seen the potential for AI models to operate in unanticipated ways and reflect or amplify bias in society. There have been several reported instances of AI models perpetuating biases in areas ranging from lending and hiring to facial recognition and even healthcare. For example, a 2019 study published in Science revealed that an AI risk-prediction model used by the U.S. healthcare system was fraught with racial bias. The model, designed to identify patients who would likely need high-risk care management in the future, used patients’ historical medical spending to determine future levels of medical needs. However, the historical spending data did not serve as a fair proxy, because “less money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients.”14 Thus, it is critical to be vigilant for the racial and other biases that may be embedded in data sources.

It is also possible for the complex data interactions that are emblematic of AI—a key strength when properly managed—to create proxies for race or other protected characteristics, leading to biased algorithms that discriminate. For example, when consumers obtain information about credit products online, the complex algorithms that target ads based on vast amounts of data, such as where one went to school, consumer likes, and online browsing habits, may be combined in ways that indicate race, gender, and other protected characteristics.15 Even after one online platform implemented new safeguards pursuant to a settlement to address the potential exclusion of consumers from seeing ads for credit products based on race, gender, or other protected characteristics, Professor Alan Mislove and his collaborators have found that the complex algorithms may still result in bias and exclusion.16 Therefore, it is important to understand how complex data interactions may skew the outcomes of algorithms in ways that undermine fairness and transparency.

Makada Henry-Nickie notes that “…[I]t is of paramount importance that policymakers, regulators, financial institutions, and technologists critically examine the benefits, risks, and limitations of AI and proactively design safeguards against algorithmic harm, in keeping with societal standards, expectations, and legal protections.”17 I am pleased that the symposium includes talks from scholars who are studying how we can design AI models that avoid bias and promote financial inclusion. No doubt everyone here today who is exploring AI wants to promote financial inclusion and more equitable outcomes and ensure that it complies with fair lending and other laws designed to protect consumers.

In the safety and soundness context, bank management needs to be able to rely on models’ predictions and classifications to manage risk. They need to have confidence that a model used for crucial tasks such as anticipating liquidity needs or trading opportunities is robust and will not suddenly become erratic. For example, they need to be sure that the model would not make grossly inaccurate predictions when it confronts inputs from the real world either that differ in some subtle way from the training data or that are based on a highly complex interaction of the data features. In short, they need to be able to have confidence that their models are robust. Explanations can be an important tool in providing that confidence.

Not all contexts require the same level of understanding of how machine learning models work. Users may, for example, have a much greater tolerance for opacity in a model that is used as a “challenger” to existing models and simply prompts additional questions for a bank employee to consider relative to a model that automatically triggers bank decisions. For instance, in liquidity or credit risk management, where AI may be used to test the outcomes of a traditional model, banks may appropriately opt to use less transparent machine learning systems.

Forms of Explanations

Researchers have developed various approaches to explaining machine learning models. Often, these approaches vary in terms of the type of information they can provide about a model. As banks contemplate using these tools, they should consider what they need to understand about their models relative to the context, in order to determine whether there is sufficient transparency in how the model works to properly manage the risk at issue.

Not all machine learning models are a black box. In fact, some machine learning models are fully “interpretable” and therefore may lend themselves to a broader array of use cases. By “interpretable” I mean that developers can “look under the hood” to see how those models make their predictions or classifications, similar to traditional models. They can examine how much weight the model gives to each data feature, and how it plays into a given result. Interpretable machine learning models are intrinsically explainable.
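
As a minimal sketch of what “looking under the hood” means in practice (made-up features, scikit-learn assumed), the weights of a logistic regression can be read off directly:

```python
# An intrinsically interpretable model: each coefficient maps directly
# to a feature's influence on the prediction (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))  # e.g. income, utilisation, tenure
y = (X[:, 0] - 2 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(["income", "utilisation", "tenure"], model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")  # sign and size are directly readable
```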

In the case of machine learning models that are opaque and not directly interpretable, researchers have developed techniques to probe these models’ decisions based on how they behave. These techniques are often referred to as model agnostic methods, because they can be used on any model, regardless of the level of explainability. Model agnostic methods do not access the inner workings of the AI model being explained. Instead, they derive their explanations post hoc based on the model’s behavior: essentially, they vary inputs to the AI model, and analyze how the changes affect the AI model’s outputs.18 In effect, a model agnostic method uses this testing as data to create a model of the AI model.19
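
Permutation importance is one widely used technique of this kind: hold the model fixed, scramble one input feature at a time, and measure how much predictive accuracy degrades. A minimal sketch (any fitted model exposing a score method would do, such as the classifier sketched above):

```python
# Model-agnostic probing: shuffle one feature at a time and observe
# how much the black-box model's accuracy drops.
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    baseline = model.score(X, y)
    drops = []
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature/label link
            scores.append(model.score(Xp, y))
        drops.append(baseline - np.mean(scores))
    return drops  # larger drop => the model leaned harder on that feature
```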

While post hoc explanations generated by model agnostic methods can allow inferences to be drawn in certain circumstances, they may not always be accurate or reliable, unlike the intrinsic explanations offered by interpretable models. Basing an explanation on a model’s behavior rather than its underlying logic in this way may raise questions about the explanation’s accuracy, as compared to the explanations of interpretable models. Still, such explanations may be suitable in certain contexts. Thus, one of the key questions banks will face is when a post hoc explanation of a “black box” model is acceptable versus when an interpretable model is necessary.

To be sure, having an accurate explanation for how a machine learning model works does not by itself guarantee that the model is reliable or fosters financial inclusion. Time and experience are also significant factors in determining whether models are fit to be used. The boom-bust cycle that has defined finance for centuries should make us cautious in relying fully for highly consequential decisions on any models that have not been tested over time or on source data with limited history, even if in the age of big data, these data sets are broad in scope.

Expectations for Banks

Recognizing that AI presents promise and pitfalls, as a banking regulator, the Federal Reserve is committed to supporting banks’ efforts to develop and use AI responsibly to promote a safe, fair, and transparent financial services marketplace. As regulators, we are also exploring and understanding the use of AI and machine learning for supervisory purposes, and therefore, we too need to understand the different forms of explainability tools that are available and their implications. To ensure that society benefits from the application of AI to financial services, we must understand the potential benefits and risks, and make clear our expectations for how the risks can be managed effectively by banks. Regulators must provide appropriate expectations and adjust those expectations as the use of AI in financial services and our understanding of its potential and risks evolve.20

To that end, we are exploring whether additional supervisory clarity is needed to facilitate responsible adoption of AI. It is important that we hear from a wide range of stakeholders—including financial services firms, technology companies, consumer advocates, civil rights groups, merchants and other businesses, and the public. The Federal Reserve has been working with the other banking agencies on a possible interagency request for information on the risk management of AI applications in financial services. Today’s symposium serves to introduce a period of seeking input and hearing feedback from a range of external stakeholders on this topic. It is appropriate to be starting with the academic community that has played a central role in developing and scrutinizing AI technologies. I look forward to hearing our distinguished speakers’ insights on how banks and regulators should think about the opportunities and challenges posed by AI.


1. I am grateful to Kavita Jain, Jeff Ernst, Carol Evans, and Molly Mahar of the Federal Reserve Board for their assistance in preparing this text. These remarks represent my own views, which do not necessarily represent those of the Federal Reserve Board or the Federal Open Market Committee.

2. David Rolnick et al., “Tackling Climate Change with Machine Learning (PDF)”; Sarah Castellanos, “Climate Researchers Enlist Big Cloud Providers for Big Data Challenges,” The Wall Street Journal, November 25, 2020, https://www.wsj.com/articles/climate-researchers-enlist-big-cloud-providers-for-big-data-challenges-11606300202.

3. Ewen Callaway, “‘It will change everything’: DeepMind’s AI makes gigantic leap in solving protein structures,” Nature 588 (November 30, 2020): 203–204, https://www.nature.com/articles/d41586-020-03348-4.

4. Lael Brainard, “What Are We Learning about Artificial Intelligence in Financial Services?” (remarks at Fintech and the New Financial Landscape, Philadelphia, Pennsylvania, November 13, 2018).

5. Federal Trade Commission, Consumer Sentinel Network Data Book 2019 (PDF) (Washington: Federal Trade Commission, January 2020).

6. See Board of Governors of the Federal Reserve System et al., “Interagency Statement on the Use of Alternative Data in Credit Underwriting (PDF).”

7. Kenneth P. Brevoort, Philipp Grimm, and Michelle Kambara, Data Point: Credit Invisibles (PDF) (Washington: Consumer Financial Protection Bureau, May 2015).

8. Federal Advisory Council (FAC) Record of Meeting (December 3, 2020) (PDF).

9. Rayid Ghani, “Equitable Algorithms: Examining Ways to Reduce AI Bias in Financial Services (PDF)” (testimony before the House Committee on Financial Services Task Force on Artificial Intelligence, February 12, 2020).

10. Davide Castelvecchi, “Can we open the black box of AI?” Nature 538 (October 5, 2016): 20–23, https://www.nature.com/news/can-we-open-the-black-box-of-ai-1.20731.

11. Cynthia Rudin, “Please Stop Explaining Black Box Models for High-Stakes Decisions (PDF)” (paper presented at the 32nd Conference on Neural Information Processing Systems, Montreal, Canada, November 2018).

12. Castelvecchi, “Can we open,” 20–23.

13. Among other things, the explanation can also make consumers aware of any erroneous information that drove the denial of credit.

14. Ziad Obermeyer et al., “Dissecting racial bias in an algorithm used to manage the health of populations,” Science 366 (October 25, 2019): 447–453, https://science.sciencemag.org/content/366/6464/447.

15. Carol A. Evans and Westra Miller, “From Catalogs to Clicks: The Fair Lending Implications of Targeted, Internet Marketing,” Consumer Compliance Outlook, third issue, 2019.

16. Piotr Sapiezynski et al., “Algorithms That ‘Don’t See Color’: Comparing Biases in Lookalike and Special Ad Audiences” (2019), https://sapiezynski.com/papers/sapiezynski2019algorithms.pdf; Till Speicher et al., “Potential for Discrimination in Online Targeted Advertising (PDF),” Proceedings of Machine Learning Research 81:1–15, 2018 Conference on Fairness, Accountability, and Transparency.

17. Makada Henry-Nickie, “Equitable Algorithms: Examining Ways to Reduce AI Bias in Financial Services (PDF)” (testimony before the House Committee on Financial Services Task Force on Artificial Intelligence, February 12, 2020).

18. See Marco Tulio Ribeiro et al., “Model-Agnostic Interpretability of Machine Learning” (presented at the 2016 ICML Workshop on Human Interpretability in Machine Learning, New York, New York, 2016); Zachary C. Lipton, “The Mythos of Model Interpretability” (presented at the 2016 ICML Workshop on Human Interpretability in Machine Learning, New York, New York, 2016).

19. See Cynthia Rudin, “Please Stop Explaining,” and Christoph Molnar, Interpretable Machine Learning: A Guide for Making Black Box Models Explainable.

20. The Federal Reserve’s Model Risk Management guidance (SR 11-7) establishes an expectation that models used in banking are conceptually sound or “fit for purpose.” SR 11-7 instructs that when evaluating a model, supervised institutions should consider “[t]he design, theory, and logic underlying the model.” The Model Risk Management guidance discusses in detail the tools banks rely on to help establish the soundness of their models, such as back-testing, benchmarking, and other outcomes-based tests.

RBNZ response to illegal breach of data system

The Reserve Bank of New Zealand (RBNZ) – Te Pūtea Matua continues to respond with urgency to a breach of a third party file sharing service used to share information with external stakeholders.

Governor Adrian Orr says the breach is contained, but it will take time to determine the impact. The analysis of the potentially affected information is being done with pace and care.

“We are actively working with domestic and international cyber security experts and other relevant authorities as part of our investigation. This includes the GCSB’s National Cyber Security Centre which has been notified and is providing guidance and advice.”

“We have been advised by the third party provider that this wasn’t a specific attack on the Reserve Bank, and other users of the file sharing application were also compromised.”

“We recognise the public interest in this incident however we are not in a position to provide further details at this time.”

Providing any further details at this early stage could adversely affect the investigation and the steps being taken to mitigate the breach. The Reserve Bank will continue to work with affected stakeholders directly.

“Our core functions and New Zealand’s financial system remain sound, and Te Pūtea Matua is open for business. This includes our markets operations and management of the cash and payments systems.”

We will provide further facts when it is appropriate to do so.

Key details of incident to date:

  • FTA (File Transfer Application), a third party file sharing service provided by Accellion and used by the Bank to share and store some sensitive information, was illegally accessed.
  • The system has been secured and taken offline while investigations are underway.
  • The Bank is communicating with system users about alternative ways to securely share data.
  • Work is continuing to confirm the nature and extent of information that has been potentially accessed. The compromised data may include some commercially and personally sensitive information.

ASIC approves variations to the Banking Code

ASIC has approved variations to the Banking Code of Practice (Code).

The variations, as proposed by the Australian Banking Association (ABA), do the following:

  • Amend the Code’s definition of ‘banking services’ to address an anomaly in the Code’s previous wording that had the unintended result of excluding certain types of small business banking customers who would otherwise meet the Code’s definition of ‘small business’.
  • Make some minor amendments to the Code’s definition of ‘small business’.
  • Extend the application of the Code’s COVID-19 Special Note, which allows for special application of specified Code provisions in light of the extraordinary external environment caused by COVID-19, for a further six months until 1 September 2021.
  • Specify situations in which banks may decline to continue dealing with a representative that a customer in financial difficulty has appointed, if the bank reasonably considers that representative is no longer able to act in the customer’s best interests.
  • Align the Code’s timeframes for responding to complaints with the updated timeframes in ASIC’s Regulatory Guide 271 Internal dispute resolution, which is due to commence on 5 October 2021.

Background

ASIC previously approved the Code, as a whole, in December 2019. That Code commenced on 1 March 2020. On 1 January 2021, as part of the Financial Sector Reform (Hayne Royal Commission Response) Act 2020, which received Royal Assent on 17 December 2020, a new framework commenced for ASIC’s approval of codes of conduct.

If an application is made to vary an approved code of conduct, ASIC may, by legislative instrument, approve the variation. In the approval, ASIC may identify a provision of the code of conduct as an ‘enforceable code provision’ if ASIC considers that the provision or provisions meet specific legislative criteria.

This approval does not identify any enforceable code provisions. The variations are a relatively narrow set of changes to existing Code provisions, and the ABA will commence its comprehensive triennial review of the Code later in 2021. The terms of reference for that review will specifically consider the enforceable code provisions framework.

The changes to the small business definition were recommended by Pottinger, the independent firm that reviewed the definition in September and October 2020. The review recommended that those changes be made now and that more comprehensive changes be considered as part of the Code’s triennial review.

Veritas Initiative Addresses Implementation Challenges in the Responsible Use of Artificial Intelligence and Data Analytics

The Monetary Authority of Singapore (MAS) today announced the successful conclusion of the first phase of the Veritas initiative, which saw the development of the fairness assessment methodology in credit risk scoring and customer marketing. [1] These are the first two use cases to help financial institutions validate the fairness of their Artificial Intelligence and Data Analytics (AIDA) solutions according to the Fairness, Ethics, Accountability and Transparency (FEAT) principles. The Veritas Consortium, [2] comprising MAS and industry partners, also published whitepapers on the fairness assessment methodology and the open source code of these two use cases.

2     The two whitepapers detailed a five-part methodology to assess the application of the FEAT fairness principles in the two use cases. The methodology addresses the implementation challenges in the responsible use of AIDA, and provides an actionable approach for financial institutions to validate their AIDA solutions. The open source code of the two use cases has been made publicly available to help the wider AIDA community in adopting the fairness assessment methodology and spur industry development. These will benefit customers by improving the fairness of financial services delivered by AIDA systems.

3     This development marks a milestone for the Veritas initiative and paves the way for the next phase of work. Phase Two will look into developing the Ethics, Accountability and Transparency assessment methodology for the two use cases in Phase One. Phase Two will also include use cases for the insurance industry.

4     For the insurance use cases, the Veritas consortium will focus on the fairness assessment methodology for predictive underwriting, and develop the ethics and accountability assessment methodology for fraud detection: 

  • Fairness is a key consideration in the course of underwriting for insurance companies. The Veritas consortium will focus on enhancing the fairness assessment methodology applicable to the predictive underwriting for life and health insurance products.
  • Fraud detection and identification of suspicious customer claims are key activities in claims processing by insurance companies. Traditional fraud detection is resource intensive and insurance companies can employ AIDA to enhance their fraud detection capabilities and efficiency.

5     Sopnendu Mohanty, Chief FinTech Officer, MAS, said, “Veritas Phase One enabled us to look into the fairness of artificial intelligence and data analytics systems in a more granular manner. It will improve the trustworthiness of AIDA significantly. We will continue our Veritas journey and aim to establish Singapore as a responsible artificial intelligence hub for financial services in the near future.”

***

  [1] Veritas, which is a part of Singapore’s National AI Strategy, aims to provide financial institutions with a verifiable way to incorporate the FEAT (Fairness, Ethics, Accountability and Transparency) principles into their AIDA solutions. Please see MAS’ media release on 13 November 2019 for details on Veritas, media release on 12 November 2018 on FEAT and media release on 28 May 2020 for details on the first phase of the Veritas initiative.
  [2] The Veritas consortium has 25 members. Please refer to media release on 28 May 2020 for the full list of members for Phase One of Veritas and the annex for the full list of members for Phase Two of Veritas.

Boston Dynamics: The Dancing Robots

Boston Dynamics robots can now dance too! We have seen them run, open doors, and go for walks, but in a new video released by the company the robots dance to the classic hit “Do You Love Me”.

Boston Dynamics’ two robots, Atlas and Spot, can do a lot of things: backflips, sprinting, opening doors, gymnastic routines, parkour, washing dishes, and performing actual jobs.

The new video presents the Waltham robotics company’s three robots — the humanoid Atlas, the dog-shaped Spot, and the box-juggling Handle — all coming together in a coordinated dance set.

The company has recently started selling the Spot robot for the considerable price of $74,500.

The Atlas and Handle robots featured in the video are still research prototypes.

Boston Dynamics was purchased by Hyundai from SoftBank in a $1.1 billion deal. Boston Dynamics was founded in 1992 in Waltham, MA, as a spin-off from the Massachusetts Institute of Technology (MIT), where it became known for its quadrupedal robots (the DARPA-funded BigDog, a precursor to the company’s first commercial robot, Spot). It was bought by Google (now Alphabet, GOOGL) in 2013, and then by SoftBank in 2017.

Brexit countdown for UK financial services sector

With one week to go until the end of the Brexit transition period, the FCA is urging financial services companies to ensure they are ready. Customers should also be aware of any changes that may apply to them. Irrespective of the outcome of the negotiations between the UK and the EU on a free trade agreement, firms will need to be prepared for the end of the transition period.

The Brexit transition period ends at 11pm on 31 December 2020. After this point EU law will no longer apply in the UK. For many financial services businesses, this will mean changes to existing systems and services. The FCA expects firms to have assessed the impact on them and their customers, and to have taken action so they are ready for the end of the transition period.

As part of its preparations, the FCA will be making changes to the Financial Services Register. This is to take account of the end of passporting for firms and funds and the start of the temporary permissions regime (TPR). To make these changes, the register will be unavailable from late evening 31 December to early morning 4 January.

More information about this can be found on the FCA website.

Nausicaa Delfas, Executive Director for International at the Financial Conduct Authority said:

‘With only a week to go, firms should have taken all the necessary steps to prepare for the end of the transition period.  At the FCA we have been working closely with other agencies in the UK and Europe, as well as with businesses, to ensure customers are protected and markets work well.’

Firm preparation

Passporting will no longer be possible after the end of the transition period. The FCA, working with other UK authorities, has introduced the temporary permissions regime (TPR). The TPR will allow EEA-based firms passporting into the UK to continue operating in the UK within the scope of their current permissions for a limited period, while they seek full FCA authorisation, if required. The deadline for EEA firms to notify the FCA that they want to enter the TPR is 30 December 2020.

The TPR will enable various EEA funds to continue to be marketed in the UK for a limited period provided the fund manager has notified the FCA before the window for notification closes on 30 December 2020.

In addition, the FCA and other UK authorities have also introduced the financial services contracts regime (FSCR). This will allow EEA passporting firms that do not enter the TPR to wind down their UK business in an orderly fashion.

When passporting ends at the end of the transition period, the extent to which UK firms can continue to provide services to customers in the EEA will be dependent on local law and local regulators’ expectations. The FCA expects UK firms to take the steps available to them to ensure they act consistently with these local laws and expectations. The FCA is clear that firms’ decisions need to be guided by obtaining appropriate outcomes for their customers, wherever they are based.

Firms should also be prepared for any regulatory changes that will come into force. To help firms adapt to some of the new rules the Treasury has given the FCA new powers to make transitional provisions, known as the temporary transitional powers (TTP). Whilst the FCA has applied the TTP on a broad basis, there are some key exceptions where firms will need to comply with the changed requirements from the end of the transition period. Firms should check the implications of these for their business.

Systems and operational changes

The FCA continues to urge firms to be fully prepared for the end of the transition period. Whilst much progress has been made on preparations, the FCA recognises the challenges for firms in making the systems and operational changes required. The FCA intends to take a pragmatic approach to any issues should they arise, where firms can demonstrate that they have taken all reasonable steps to prepare.

The FCA’s focus during this time will be on our strategic objective of ensuring markets function well. The FCA will continue to monitor both primary market and associated secondary market activities closely, including for any misconduct by market participants, throughout this period during which some market volatility could arise.

This monitoring will include order book reporting, suspicious transaction and order reporting, inside information disclosures, price movement monitoring, and reporting on net short positions, and the FCA will use its powers to request information and make enquiries where behaviour that may be abusive or creating a disorderly market is identified.

The FCA is calling on issuers to be particularly vigilant in ensuring procedures, systems and controls for the protection and disclosure of inside information are met and on market participants to be aware of the FCA’s significant detective capabilities.

Customers

For UK based customers, the FCA has put in place plans to ensure any possible disruption to UK financial services is minimised at the end of the transition period. Nevertheless, there will be some changes.  Customers should have been contacted by their firms if they will be impacted by any of these changes.

Customers living in the EEA could be affected if they have a UK provider who cannot continue to operate in the EEA in the same way after the transition period ends.  Many UK providers are planning to continue providing services. For example, some UK banks plan to continue providing services to customers resident in the EEA. If a bank is no longer able to provide services in the EEA, the FCA expects they should give customers sufficient notice that it plans to close an account. This will allow customers to look for alternative banking arrangements.

At the end of the transition period there may also be changes impacting travel insurance products that cover travel between the UK and the EEA. Customers who are likely to be impacted by this should visit the FCA website for information, and may want to check the position with their travel insurance provider in advance.

UK customers will still be able to make payments and cash withdrawals in the EEA after 31 December 2020, but these may be more expensive and could take longer. From 1 January 2021, banks and payment services providers will also have to provide additional information when making payments between the UK and the EEA, which includes the name and address of the payer and payee. If this information is not provided there could be disruption to payments. If customers have important payments, particularly direct debits, going out of their account to a European company, they may want to check these are going through as normal from 1 January 2021.

At the end of the transition period there will also be changes to some of the financial protections for customers. UK customers of a UK firm will continue to have the same access as currently to the Financial Ombudsman Service and the FSCS.

However, for UK customers of an EEA firm that’s operating in the UK under the TPR, or under the FSCR, protections may be different. Customers of EEA firms should also be aware that these firms will not have been authorised or otherwise assessed by UK regulatory authorities before entering into these temporary regimes.

Customers who are unsure whether they are covered by the FSCS or the Financial Ombudsman Service should get in touch with their financial services provider to find out, or find out more about the TPR and FSCR.

Customers are advised to speak to their financial services providers now about any concerns, and to check the FCA’s updated information, available at: fca.org.uk/consumers/how-brexit-could-affect-you. Information can also be obtained from the Money Advice Service.