Bots, chatbots, and their associated technologies are all the rage these days. Many institutions and businesses have begun to use bots to supplement and enhance their operations, and financial institutions in particular have been integrating this technology into their practices to great success. As beneficial as this technology can be, it nonetheless poses unique regulatory and compliance challenges, particularly within the realm of consumer finance. This article serves as a primer on the basics of bot technology as used by financial institutions and the implications of such use under federal consumer finance laws.

What is a Bot?

Bots are computer programs that use various systems to perform tasks such as simulating human conversation (chatbots) or automating functions previously performed by humans. Broadly speaking, financial institutions utilize two types of bots: (1) bots that operate within a rule-based framework and (2) those that utilize an artificial intelligence (AI) paradigm.

Rule-Based: Bots that utilize a rule-based system architecture follow a predefined series of rules and can only respond with fixed outputs that have been hardcoded into them. They are often cheaper and easier to integrate into existing business infrastructure. Financial institutions might utilize this type of bot as an "FAQ" resource or as a funnel to route customers to the appropriate human agents.
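To make the "hardcoded outputs" idea concrete, the following is a minimal sketch of a rule-based FAQ bot. The intents, keywords, and canned responses are invented for illustration and are not drawn from any real institution's system:

```python
# Illustrative rule-based chatbot: each rule is a set of keywords mapped
# to a fixed, hardcoded answer. Anything unmatched falls back to a human.

FAQ_RULES = {
    ("hours", "open"): "Our branches are open 9am to 5pm, Monday through Friday.",
    ("routing", "number"): "Your routing number appears on your checks and in online banking.",
}
FALLBACK = "Let me connect you with a representative who can help."

def respond(message: str) -> str:
    """Return the fixed answer whose keywords all appear in the message,
    or fall back to a human handoff."""
    words = message.lower().split()
    for keywords, answer in FAQ_RULES.items():
        if all(k in words for k in keywords):
            return answer
    return FALLBACK
```

Because every output is predetermined, this style of bot is easy to audit for compliance purposes; its weakness is that it cannot handle anything outside its rule set, which is why the fallback to a human agent matters.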

Artificial Intelligence: Bots in this category utilize a variety of sophisticated algorithms, such as machine learning, deep learning, and advanced natural language processing, to learn, adapt, and improve based on new informational inputs. These bots are often employed by large financial institutions that collect and process large amounts of data. However, due to the complexity and sometimes opacity of these systems, they often pose greater regulatory and compliance risks.

The Function of Bots in Financial Institutions

Financial institutions might use bots to supplement or control a variety of activities, including:

Customer Service: A chatbot may be employed to answer customer questions or direct them to the relevant resources or people.

Underwriting and Credit Scoring: Bots can be used to analyze large amounts of data to evaluate and predict the creditworthiness of individuals to help financial institutions in making credit-based decisions.

Marketing: Bots can identify, target, and engage prospective customers who are likely to seek a product or service from a financial institution.

Risk Management and Loan Servicing: Financial institutions can utilize bots for credit monitoring, payment analysis, collection automation, loan restructuring/recovery, and loss forecasting.
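As a simplified illustration of the underwriting use case above, a scorecard-style evaluation might weigh a handful of factors and map the result to a decision. All point values, thresholds, and the cutoff below are invented for illustration; real underwriting models are far more complex:

```python
# Toy scorecard: illustrative only. The base score, adjustments, and
# cutoff are hypothetical, not an actual credit-scoring methodology.

def credit_score(annual_income: float, total_debt: float, late_payments: int) -> int:
    score = 600  # hypothetical base score
    if total_debt > 0 and annual_income / total_debt >= 3:
        score += 60  # reward a comfortable income-to-debt position
    score -= 25 * late_payments  # penalize recent delinquencies
    return score

def decision(score: int, cutoff: int = 640) -> str:
    """Map a score to an outcome; borderline cases go to a human."""
    return "approve" if score >= cutoff else "refer to underwriter"
```

Even a toy model like this shows why documentation matters for compliance: each factor and its weight is an explicit, explainable business judgment, which is precisely what regulators expect institutions to be able to articulate for the far more complex models discussed below.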

Regulatory and Compliance Considerations

Cognizant of the increasing prevalence of bot technology in financial institutions, federal regulatory agencies have begun to address how its use could violate consumer finance laws. On March 31, 2021, five federal regulatory agencies issued a request for comment and information (RFI) regarding financial institutions' use of artificial intelligence, including machine learning. The RFI identified the following consumer protection laws and regulations that may be impacted by financial institutions' use of bot technology:

· Fair Credit Reporting Act (FCRA)/Reg. V

· Equal Credit Opportunity Act (ECOA)/Reg. B

· Fair Housing Act (FHA)

· Section 5 of the Federal Trade Commission Act (prohibiting UDAP)

· Sections 1031 and 1036 of the Dodd-Frank Act (prohibiting unfair, deceptive, or abusive acts or practices (UDAAP))

Managing Potential Risks: Financial institutions should have processes in place for identifying and managing the risks associated with their use of bots. This includes being able to accurately explain the systems and processes that a bot employs to achieve its purpose. When a bot is used to evaluate consumers for credit scoring or marketing purposes, a financial institution should be able to document the legitimate reasons and factors that the bot weighs in making its determination. If a financial institution decides to use data that may present disparate impact risk, the institution should document the detailed business justifications for using that particular type of data and the business reasons for not using an alternative. Bots employed by financial institutions should be diligently monitored, tested, and documented to ensure that any algorithmic processes utilized are not leading to decisions that adversely affect or exclude certain classes or segments of the population without a legitimate reason.
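One common starting point for the kind of disparate-impact monitoring described above is comparing outcome rates across groups, such as via the "four-fifths" screen borrowed from employment law, under which a selection rate below 80% of the highest group's rate flags the process for closer review. A minimal sketch, using invented data and assuming outcomes are already labeled by group:

```python
# Illustrative fair-lending screen: compute approval rates by group and
# flag results where the lowest rate falls below 80% of the highest.
from collections import defaultdict

def approval_rates(outcomes):
    """outcomes: iterable of (group_label, approved_bool) pairs."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Lowest group approval rate divided by the highest. A ratio under
    0.8 does not prove a violation, but warrants closer review."""
    return min(rates.values()) / max(rates.values())
```

A screen like this is only a first pass; a flagged ratio would trigger the deeper documentation and justification review described above, not an automatic conclusion of unlawful discrimination.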

Additionally, the role and use of bots in loan origination, adverse action notices, responses to requests for information, and record retention should be critically evaluated to ensure compliance with legal requirements. Bots, such as chatbots, that directly interact and communicate with consumers should be monitored to prevent unfair, deceptive, or abusive acts or practices (UDAAPs) by being transparent, non-aggressive, and presenting options clearly, in a manner unlikely to mislead consumers.

Ultimately, the compliance obligations and risks associated with the use of bots will vary between financial institutions. Factors such as the size of the organization, the type of data employed, and the type of bot used mean there is no one-size-fits-all policy. Both large and small financial institutions stand to benefit from the use of bots in their business practices, but implementation should be done in conjunction with proper consultation and due diligence regarding the legal risks.

This article is for information purposes only and is not intended to constitute legal advice.