
Artificial intelligence has been hailed as a tool to spot fake applications, falsified financials and made-up resumes, helping multifamily owner-operators screen tenants.
However, these systems can also flag legitimate tenants as undesirable. Findigs’ Co-Founder and CEO, Steve Carroll, recently shared insights with Connect CRE about AI biases, incorrect responses and how owner-operators can work around them.

Q. What signs should owner/operators look for that their AI system might be flagging legitimate applicants, rather than rooting out fraud?
A. Here are some obvious signs: approval rates that fluctuate out of line with industry norms, particularly for certain income types or regions; applicant disputes or appeals that climb month over month; and disputes that cluster around protected classes or their proxies, such as self-employed income or unconventional documentation.
A single sign is enough for investigation. Two signs signal an issue. Three signs mean your system is making decisions you don’t want to make.
By the way, a human check shouldn’t mean an automatic seal of approval, and the point isn’t to second-guess every individual decision the AI model makes. But humans do need to watch the trends. This can be accomplished with an exception-based process that audits a random sample of decisions each month.
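To make that idea concrete, here is a minimal sketch of what such an exception-based monthly audit could look like. It is illustrative only, not Findigs’ actual process, and the column names (decision, income_type, region, disputed) are hypothetical stand-ins for whatever a portfolio’s decision log actually records.

```python
# Illustrative sketch of an exception-based monthly audit (hypothetical schema,
# not Findigs' actual system): pull a random sample for human review and flag
# segments whose approval rates drift far from the portfolio-wide rate.
import pandas as pd

def monthly_exception_audit(decisions: pd.DataFrame,
                            sample_size: int = 200,
                            rate_tolerance: float = 0.10) -> dict:
    # Random sample of this month's decisions for manual review.
    sample = decisions.sample(n=min(sample_size, len(decisions)))

    # Portfolio-wide approval rate as the baseline.
    overall_rate = (decisions["decision"] == "approved").mean()

    # Flag income types or regions whose approval rate deviates beyond tolerance.
    flags = []
    for col in ("income_type", "region"):
        seg_rates = decisions.groupby(col)["decision"].apply(
            lambda s: (s == "approved").mean()
        )
        for segment, rate in seg_rates.items():
            if abs(rate - overall_rate) > rate_tolerance:
                flags.append({"dimension": col, "segment": segment,
                              "approval_rate": round(rate, 3)})

    # Track whether dispute volume is creeping up month over month.
    dispute_rate = decisions["disputed"].mean()

    return {"sample_for_review": sample,
            "overall_approval_rate": round(overall_rate, 3),
            "dispute_rate": round(dispute_rate, 3),
            "flagged_segments": flags}
```

Any flagged segment or rising dispute rate would then go to a human reviewer, keeping the monthly workload small while still catching the trend-level signals Carroll describes.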
Q. What steps should owner/operators take to avoid potential fair housing issues resulting from AI bias or aggressive screening tools?
A. Any AI built on past screening records will carry those records’ biases forward. For instance, indirect proxies, such as zip codes, sources of income, or types of employment, can be strongly associated with certain demographic categories even in the absence of any direct mention of race, national origin, or familial status.
Because of this, operators should press vendors for explanations of how their AI makes decisions and for impact testing against real portfolio outcomes. Vendors should also be willing to publish their approval-rate data or submit to third-party audits.
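As a rough illustration of what impact testing against portfolio outcomes might involve, the sketch below compares each group’s approval rate to the best-performing group’s rate. The 0.8 “four-fifths” threshold is borrowed from employment-selection guidance and is used here only as an example benchmark; neither the threshold nor the field names reflect any published Findigs or HUD methodology.

```python
# Illustrative adverse-impact check (assumed benchmark and field names, not an
# official method): each group's approval rate divided by the highest group's rate.
from collections import Counter

def adverse_impact_ratios(records, group_key="group", approved_key="approved"):
    """records: iterable of dicts, e.g. {'group': 'voucher_holder', 'approved': True}."""
    totals, approvals = Counter(), Counter()
    for r in records:
        totals[r[group_key]] += 1
        if r[approved_key]:
            approvals[r[group_key]] += 1

    # Approval rate per group, then normalize against the highest-approved group.
    rates = {g: approvals[g] / totals[g] for g in totals}
    benchmark = max(rates.values()) or 1.0  # guard against an all-denied portfolio
    return {g: round(rate / benchmark, 3) for g, rate in rates.items()}

# Example: a ratio below 0.8 for any group is a common signal to dig deeper.
# ratios = adverse_impact_ratios(portfolio_decisions)
```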
Q. What are some specific examples that have occurred when a screening system failed? What were the consequences?
A. The industry is already seeing ramifications. HUD’s May 2024 guidance states that disparate impact liability under the Fair Housing Act could apply to artificial intelligence used in tenant screening.
The class-action lawsuit Louis et al. v. SafeRent et al. addressed exactly this issue, alleging that SafeRent’s algorithmic tenant screening program unfairly discriminated against minorities and housing voucher holders. The case ended in a settlement, with SafeRent agreeing to pay $2.275 million to the plaintiffs.
Q. So, what recourse do applicants have if they feel they’ve been flagged by mistake?
A. Under the Fair Credit Reporting Act, anyone denied housing because of information in a consumer report must receive an adverse action notice and has the right to dispute that information.
Still, the standard needs to be higher than that. Individuals should receive an explanation in simple language, instructions for submitting more documentation, and an opportunity for reconsideration.
Q. Are owner/operators demanding more accountability/guarantees from screening platforms?
A. Absolutely, and it’s the most significant change I’ve witnessed in this sector in a long while. Operators have had enough of vendors who take credit for their successes but point fingers when things go wrong.
For example, Findigs was the first provider to introduce a contractual fraud guarantee covering any fraudulent application we approve. If we say yes and get it wrong, we’ll take the financial hit together. That sort of language is making its way into RFPs.
Q. As automated screening becomes more common, how will legal standards around due diligence change?
A. It’s already happening. As I mentioned earlier, the 2024 HUD guidance applied the Fair Housing Act’s disparate impact standard to artificial intelligence-based screening software, exposing both the providers and the operators who use it to liability.
Private litigation is also proving powerful. Beyond the SafeRent class action I already mentioned, TransUnion faced federal scrutiny over alleged compliance issues in its tenant screening reports; in 2023, it agreed to pay $15 million to settle charges brought by the Federal Trade Commission and the Consumer Financial Protection Bureau.
Additionally, Colorado’s Artificial Intelligence Act (SB24-205) targets “high-risk” AI systems to prevent algorithmic discrimination. The act, which takes effect in June, requires developers to disclose known discrimination risks and their risk-management policies. It also requires deployers to conduct annual impact assessments of high-risk systems and to notify consumers when such a system is used to make significant decisions.
Q. What else should owner/operators understand about using AI to prevent renter fraud?
A. Preventing fraud is one side of the coin. The other is ensuring that the renters you approve actually pay their rent. We’ve been living in an industry optimized for rejecting applications rather than approving the right ones, and not enough attention has been paid to revenue quality. That puts rental operators into a trade-off they shouldn’t have to make.
An earlier version of this article appeared on ApartmentBuildings.com.