
AO-41 and the Real Technology Question Appraisers Must Answer

Judgment, Independence, and the Role of Testing in a Black-Box World

The Appraisal Standards Board’s proposed Advisory Opinion 41 (AO-41), Use of Technology in an Appraisal or Appraisal Review Assignment, has generated thoughtful—and in some cases pointed—discussion across the appraisal and collateral-risk communities. Much of that discussion centers on what AO-41 does not do: it does not define “technology,” it does not distinguish sharply between process tools and product tools, and it does not resolve long-standing tensions in USPAP between established practice and emerging methods. Those critiques are valid. But they also risk missing what AO-41 is really trying to accomplish. In our view, AO-41 is not about endorsing new technology, nor is it about forcing appraisers to become data scientists or software engineers. It is about how appraisers demonstrate professional judgment and competency when technology—especially opaque, third-party technology—becomes unavoidable. That problem is not new. What is new is its scale.

We’ve Seen This Movie Before
Many appraisers will recognize the pattern. When multiple regression analysis (MRA) entered mainstream appraisal education, it was often presented as a way to produce mathematically precise, “market-supported” adjustments. In practice, MRA worked well in some markets and poorly in others. The issue was not regression itself—it was that appraisers were encouraged to use it without sufficient conceptual grounding in when its results were meaningful and when they were not. The result was often false confidence rather than better judgment. AO-41 reflects a similar inflection point—this time driven by AVMs, machine learning, computer vision, and generative AI. The tools are more powerful, more opaque, and far more client-driven than before. But the professional obligation has not changed: only the appraiser produces assignment results.

AVMs, AI, and the Accountability Gap
One criticism raised in recent commentary is that AVMs are not subject to USPAP, are not transparent, and operate based on lender-defined scope and inputs. All of that is true. But it is precisely why AO-41 exists. AO-41 does not attempt to pull AVMs under USPAP. Instead, it forces an uncomfortable but necessary question: What does competent reliance look like when the mechanics of the tool are outside the appraiser’s control? AO-41 answers that question indirectly. It makes clear that appraisers are not required to understand or replicate algorithms—but they are required to understand enough to evaluate relevance, limitations, and credibility for the intended use. That is a judgment problem, not a coding problem.

Independent Testing as a Competency Enabler
This is where the industry conversation needs to mature. For opaque tools, competency cannot reasonably come from inside the model. It must come from external, objective evidence of how the tool behaves. Independent, third-party testing—conducted outside the appraisal assignment—can provide exactly that context:

*   historical accuracy and dispersion,
*   stability across markets, price tiers, and property types,
*   known limitations or failure modes, and
*   awareness of differential performance that may raise fair housing concerns.

Importantly, independent testing does not replace appraisal analysis or judgment. It produces informational evidence, not assignment results. It helps appraisers answer a practical AO-41 question: Is reliance on this tool reasonable here, or should it be limited—or avoided altogether? Or, as our motto here at AVMetrics goes: “The best thing an AVM can tell you is when NOT to use it.”
This framing is fully consistent with AO-41’s core principles and with the Interagency AVM Quality Control Standards, which emphasize ongoing monitoring of AVM accuracy, reliability, and potential bias. Appraisers are not being asked to perform fair lending analysis—but awareness of model behavior across market segments is now inseparable from credibility.

Education, Not Enforcement
Another concern raised in recent commentary is that AO-41 risks merging new tools into old expectations and legacy education. That concern is well taken. In our opinion, USPAP has always struggled to balance encouragement of new methods with deference to established practice. The path forward is not more prescriptive rules. It is better education and clearer boundaries. Appraisers do not need to know how an AVM or AI model works internally. But they should be able to explain, in plain language:

*   why a tool was appropriate (or not) for a specific assignment,
*   how its output was evaluated for reasonableness, and
*   why reliance was full, limited, or declined.

If that explanation cannot be made clearly—“to a sixth grader,” as one educator recently put it—then reliance probably wasn’t appropriate.

What AO-41 Is Really Signaling
AO-41 is not a referendum on technology. It is a signal that the profession needs:

*   clearer educational pathways,
*   shared reference points for evaluating opaque tools, and
*   realistic expectations about what appraisers are—and are not—being asked to understand.

If the exposure process leads to broader recognition that independent testing and education are necessary supports for professional judgment—not substitutes for it—then AO-41 will have served a useful purpose, even as its language continues to evolve. That conversation is exactly what the exposure draft process is meant to surface. And it is one the appraisal and collateral-risk communities should continue—carefully, constructively, and with judgment front and center.

Why AVMetrics’ Fair Housing Methodology Surpasses Vendor Approaches

The Fair Housing analyses published by AVM vendors such as Veros and Clear Capital represent important early efforts to evaluate potential disparate impact in automated valuation models. These studies contribute useful perspective to an evolving area of the industry, but they are inherently constrained by scope, methodology, and—most importantly—objectivity. Their findings are self-assessments rather than independent evaluations: each vendor analyzes only its own model, using its own data and assumptions, and typically concludes that little to no bias exists, which limits their usefulness for broader risk management and supervisory purposes.

Regulated institutions, however, must operate under much more rigorous expectations. The new Interagency AVM Quality Control Standards require lenders to demonstrate that AVMs used in credit decisions are independently validated and fairly applied. This standard cannot be meaningfully satisfied by vendor-authored whitepapers alone.

AVMetrics’ methodology is designed specifically to meet these supervisory needs. Rather than focusing on individual model performance within internally defined samples, AVMetrics conducts standardized, national-level testing across 700,000 to 1 million transactions each quarter. This approach ensures that fairness conclusions reflect real-world market diversity and enables consistent evaluation across models, markets, and time.

AVMetrics independently tests eight different dimensions in which AVMs could potentially disadvantage protected classes, including coverage rates (hit rate), accuracy, precision, and other core performance measures. To support statistically meaningful comparisons, AVMetrics has invested in neighborhood-level demographic data, enabling analysis across comparison neighborhoods, an approach that avoids the masking effects of county-level aggregation while preserving larger samples than census tracts would allow.

Further, AVMetrics applies Standardized Mean Difference (SMD)—the same effect-size metric commonly used in fair-lending analytics—providing a clear measure of whether disparities are material, not simply detectable. In contrast, many model-specific analyses typically use raw accuracy differences or simple correlations, which offer no interpretive scale for examiners assessing practical significance. AVMetrics’ approach produces metrics that are grounded in established methodology, interpretable, and defensible.
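To make the metric concrete, here is a minimal sketch of how an SMD might be computed for percentage valuation errors in two comparison neighborhoods. The data and variable names are illustrative only, not AVMetrics’ production methodology:

```python
import statistics

def standardized_mean_difference(group_a, group_b):
    """Cohen's-d-style SMD: difference in means divided by the pooled
    standard deviation. Values near 0 indicate no material disparity;
    |SMD| >= 0.2 is a commonly cited 'small effect' threshold."""
    mean_a, mean_b = statistics.fmean(group_a), statistics.fmean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical percentage valuation errors in two comparison neighborhoods:
errors_a = [0.02, -0.01, 0.04, 0.03, -0.02, 0.01]
errors_b = [0.06, 0.05, 0.09, 0.04, 0.07, 0.08]
print(round(standardized_mean_difference(errors_a, errors_b), 2))
```

Unlike a raw difference in mean error, the SMD is scaled by the variability of the data, which is what gives examiners an interpretive yardstick for practical significance.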

As the next generation of AVMs incorporates increasingly complex machine learning and generative AI techniques, vendor-driven testing becomes even less transparent. AVMetrics’ methodology is intentionally model-agnostic: we can evaluate the fairness and performance of traditional hedonic models, GBDT-based systems, deep learning models, or hybrid AI architectures with equal rigor. As models become more opaque, the need for a neutral, independent evaluator becomes increasingly essential.

In contrast to analyses intended to provide general assurance around individual models, AVMetrics delivers regulatory-grade evidence. By identifying how model risk and policy risk can interact to generate disproportionate impacts—an expectation embedded in the new regulatory framework—our testing equips lenders with the actionable intelligence needed to inform, calibrate, and justify their risk-policy decisioning.

A multi-dimension comparison shows how AVMetrics’ testing is broader, more independent, more rigorous, and more useful, in no small part because it objectively compares models to each other.

 

As regulatory expectations around AVM fairness continue to mature, institutions must move beyond model-specific assurances toward independent, repeatable, and scalable evaluation frameworks. AVMetrics’ fair housing methodology is purpose-built to meet these expectations, providing lenders with nationally consistent, statistically rigorous, and model-agnostic evidence of AVM performance and potential disparate impact. By aligning testing design with supervisory standards and real-world production environments, AVMetrics enables institutions not only to identify and manage fair-lending risk, but also to demonstrate compliance with confidence in increasingly complex valuation ecosystems.

AVMs React to New Final AVM Rules

On August 16, Jon Wierks of First American penned an article about how First American is responding to the new AVM final rule. The article made several interesting points:

1. First American has specifically enhanced its AVM, their testing, and some of their tools in anticipation of the new rules. For example, FA has invested in explainable AI (xAI) in order to address fairness concerns.

Newer AVMs, like our Procision AVM Suite, were designed to comply with current AVM guidelines and in anticipation of the new guidelines. 

2. First American expects AVM users to take on their own testing responsibility, and this doesn’t just apply to banks.

…new guidelines, Quality Control Standards for Automated Valuation Models, requires mortgage originators and secondary market issuers “to maintain policies, practices, procedures, and control systems to ensure that automated valuation models used in these transactions adhere to quality control standards.”

3. AEI’s recent AVM study has drawn attention to the biggest issues with AVM testing, and our new testing techniques advance the field beyond any other innovation in a decade.

For several years, AVMetrics has been developing a blind testing system that it will roll out later this year. Rather than sending the same addresses to various providers each month and getting back their valuations, AVM providers will now value every property in the U.S. — more than 100 million valuations each month — and send this data to AVMetrics. The testing company will ingest this data and then blind test it against future sales and listing prices as they transact. As you would expect, this is a massive undertaking for AVM vendors and AVMetrics, but it will separate the AVMs that test well from those that actually perform well in real-world conditions.

Wierks’ conclusions align with our own beliefs: improvements in AVM accuracy, precision, and confidence scoring are making AVMs more useful to the industry, and appropriate testing is a prerequisite to their widespread adoption.

 

Introducing PTM™ – Revolutionizing AVM Testing for Accurate Property Valuations

When it comes to residential property valuation, Automated Valuation Models (AVMs) have a lurking problem. AVM testing is broken and has been for some time, which means that we don’t really know how much we can or should rely on AVMs for accurate valuations.

Testing AVMs seems straightforward: take the AVM’s estimate and compare it to an arm’s-length market transaction. The approach is theoretically sound and widely agreed upon, but it is unfortunately no longer possible.

Once you see the problem, you cannot unsee it. The issue lies in the fact that most, if not all, AVMs have access to multiple listing data, including property listing prices. Studies have shown that many AVMs anchor their predictions to these listing prices. While this makes them more accurate when they have listing data, it casts serious doubt on their ability to accurately assess property values in the absence of that information.

Three months of data showing estimates by three AVMs for a single property in Austin, TX, before and after it was listed in the MLS (from Realtor.com’s RealEstimate℠).

All this opens up the question: what do we want to use AVMs for? If all we want is to get a good estimate of what price a sale will close at, once we know the listing price, then they are great. However, if the idea is to get an objective estimate of the property’s likely market value to refinance a mortgage or to calculate equity or to measure default risk, then they are… well, it’s hard to say. Current testing methodology can’t determine how accurate they are.

But there is promise on the horizon. After five years of meticulous development and collaboration with vendors/models, AVMetrics is proud to unveil our game-changing Predictive Testing Methodology (PTM™), designed specifically to circumvent the problem that is invalidating all current testing. AVMetrics’ new approach will replace the current methods cluttering the landscape and finally provide a realistic view of AVMs’ predictive capabilities.1

At the heart of PTM™ lies our extensive Model Repository Database (MRD™), housing predictions from every participating AVM for every residential property in the United States – an astonishing 100 to 120 million properties per AVM. With monthly refreshes, this database houses more than a billion records per model and thereby offers unparalleled insights into AVM performance over time.

But tracking historical estimates at massive scale wasn’t enough. To address the influence of listing prices on AVM predictions, we’ve integrated a national MLS database into our methodology. By pinpointing the moment when AVMs gained visibility into listing prices, we can assess predictions for sold properties just before this information influenced the models, which is the key to isolating confirmation bias. While the concept may seem straightforward, the execution is anything but. PTM™ navigates a complex web of factors to ensure a level playing field for all models involved, setting a new standard for AVM testing.

So, how do we restore confidence in AVMs? With PTM™, we’re enabling accurate AVM testing, which in turn paves the way for more accurate property valuations. Those, in turn, empower stakeholders to make informed decisions with confidence. Join us in revolutionizing AVM testing and moving into the future of improved property valuation accuracy. Together, we can unlock new possibilities and drive meaningful change in the industry.

1The majority of commercially available AVMs support this testing methodology, and more than two solid years of testing have been conducted on over 25 models.

Feds to Lenders: Take AVMs Seriously

Regulators are signaling that they are going to be looking at how AVMs are used and whether lenders have appropriately tested them and continuously monitor them for valuation discrimination. This represents a shift in regulatory attention and means that all lenders need to prioritize AVM validation to avoid unfavorable attention from government regulators.

On February 12, the FFIEC issued a statement on its examination practices. It specifically stated that it didn’t represent a change in principles, nor a change in guidance, nor even a change in focus. It was just a friendly announcement about the exam process, which will focus on whether institutions can identify and mitigate bias in residential property valuations.

Law firm Husch Blackwell published their interpretation a week later. Their analysis included consideration of the June 2023 FFIEC statement on the proposed AVM quality control rule, which would include bias as a “fifth factor” when evaluating AVMs. They interpret these different announcements as part of a theme, an extended signal to the industry that all valuations, and AVMs in particular, are going to receive additional scrutiny. Whether that is because bias is as important as quality or because being unbiased is an inherent aspect of quality, the subject of bias is drawing attention, but the result will be a thorough examination of all practices around valuation, including AVMs, from oversight to validation, training, auditing, etc.

AVM quality has theoretically been an issue that could be enforced by regulators in some circumstances for over a decade. What we’re seeing is not just an expansion from accuracy into questions of bias. We’re also seeing an expansion from banks into all lenders, including non-bank lenders. And, they are signaling that examinations will focus on bias, which is an expansion from the theoretical requirement to an actual, manifest, serious requirement.

Our Perspective on Brookings’ AVM Whitepaper

As the publisher of the AVMNews, we felt compelled to respond to Brookings’ very thorough whitepaper on AVMs (automated valuation models), published on October 12, 2023, and share our thoughts on the recommendations and insights presented therein.

First and foremost, we would like to acknowledge the thoroughness and dedication with which Brookings conducted their research. Their whitepaper contains valuable observations, clear explanations, and wise recommendations that, unsurprisingly, align with our own perspective on AVMs.

Here’s our stance on key points from Brookings’ whitepaper:

  1. Expanding Public Transparency: We wholeheartedly support increased transparency in the AVM industry. In fact, Lee’s recent service on the TAF IAC AVM Task Force led to a report recommending greater transparency measures. Transparency not only fosters trust but also enhances the overall reliability of AVMs.
  2. Disclosing More Information to Affected Individuals: We are strong advocates for disclosing AVM accuracy and precision measures to the public. Lee’s second Task Force report also recommended the implementation of a universal AVM confidence score. This kind of information empowers individuals with a clearer understanding of AVM results.
  3. Guaranteeing Evaluations Are Independent: Ensuring the independence of evaluations is paramount. Compliance with this existing requirement should be non-negotiable, and we fully support this recommendation.
  4. Encouraging the Search for Less Discriminatory AVMs: Promoting the development and use of less discriminatory AVMs aligns with our goals. We view this as a straightforward step toward fairer AVM practices.

Regarding Brookings’ additional points 5, 6, and 7, we find them to be aspirational but not necessarily practical in the current landscape. In the case of #6, regulating Zillow, it appears that existing and proposed regulations adequately cover entities like Zillow, provided they use AVMs in lending.

While we appreciate the depth of Brookings’ research, we would like to address a few misconceptions within their paper:

  1. Lender Grade vs. Platform AVMs: We firmly believe that there is a distinction between lender-grade and platform AVMs, as evidenced by our testing and assessments. Variations exist not only between AVM providers but also within the different levels of AVMs offered by a single provider.
  2. “AVM Evaluators… Are Not Demonstrably Informing the Public”: We take exception to this statement. We actively contribute to public knowledge through articles, analyses, newsletters (AVMNews and our State of AVMs), a quarterly GIF, a comprehensive Glossary, and participation in industry groups and task forces. We also serve the public by making AVM education available, and we would have been more than willing to collaborate or consult with Brookings during their research.

But we are obligated not to simply give away or publish our analysis. Our industry partners provide us their value estimates, and we provide our analysis back to them. It’s a major way in which they improve, because they’re able to see 1) an independent test of accuracy, and 2) a comparison to other AVMs. They can see where they’re being beaten, which points to opportunities for improvement. But in order to participate, they require some confidentiality to protect their IP and reputation.

We should comment on the concept of independence that Brookings emphasized. In our opinion, as the only independent AVM evaluator, independent evaluation is exceedingly important. Brookings mentioned in passing that Mercury is not independent, but they also mentioned Fitch as an independent evaluator. We agree with Brookings that a vendor who also sells, builds, resells, uses, or advocates for certain AVMs may be biased (or may appear to be biased) in auditing them; validation must be able to “effectively challenge” the models being tested.

We do not believe Fitch satisfies the requirement for ongoing independent testing, validation, and documentation of testing, which calls for resources with the competence and influence to effectively challenge AVM models. Current guidelines require validation to be performed in real-world conditions, to be ongoing, and to be reported on at least annually. When there are changes to the models, the business environment, or the marketplace, the models need to be re-validated.

Fitch’s assessment of AVM providers focuses on each vendor’s model testing results, review of management and staff experience, data sourcing, technology effectiveness, and quality control procedures. Fitch’s reliance on analyses obtained from the AVM providers’ own model testing would not qualify it as an “independent AVM evaluator,” as reliance on testing done by the AVM providers themselves does not meet any definition of “independent” under existing regulatory guidance. AVMetrics is in no way beholden to AVM developers or resellers; we draw no income from selling, developing, or using AVM products.

For almost two decades, we have tested AVMs against hundreds of thousands (sometimes millions) of transactions per quarter, using a variety of techniques to level the playing field between AVMs. We provide detailed and transparent statistical summaries and insights to our newsletter readers, and we publish charts that give insights into the depth and thoroughness of our analysis, something we have not observed from other testing entities.

Our research spanning eighteen years shows that even models that perform well overall are less reliable in certain circumstances, so one of the less obvious risks we would highlight is reliance on a “good” model that is poor in a specific geography, price level, or property type. Models should be tested in each of these subcategories in order to assess their reliability and risk profile. Identifying “reliable models” isn’t straightforward. Performance varies over time as market conditions change and models are tweaked. Performance also varies between locations, so a model that is extremely reliable overall may not be effective in a specific region. Furthermore, models that are effective overall may not be effective at all price levels, for example, low-priced entry-level homes or high-priced homes. Finally, very effective models will also produce estimates that they admit have lower confidence scores (and higher FSDs), which should in all prudence be avoided, but without adequate testing and understanding may be inadvertently relied upon. Proper testing and controls can mitigate these problems.
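To illustrate why subcategory testing matters, here is a toy sketch of breaking error out by segment, where a model that looks acceptable overall turns out to be weak in one pocket. The records and field names are hypothetical, not our production data:

```python
from collections import defaultdict

# Hypothetical test records: (region, price_tier, avm_estimate, sale_price)
records = [
    ("CA", "entry",    310_000,   300_000),
    ("CA", "luxury", 2_300_000, 2_000_000),
    ("TX", "entry",    205_000,   200_000),
    ("TX", "luxury", 1_050_000, 1_000_000),
]

def error_by_segment(records):
    """Median absolute percentage error per (region, price_tier) segment
    (upper median for even-length segments)."""
    segments = defaultdict(list)
    for region, tier, estimate, price in records:
        segments[(region, tier)].append(abs(estimate - price) / price)
    return {seg: sorted(errs)[len(errs) // 2] for seg, errs in segments.items()}

for segment, err in sorted(error_by_segment(records).items()):
    print(segment, f"{err:.1%}")
```

Here the CA luxury segment shows a 15% typical error while the other segments sit under 4%, exactly the kind of pocket that a single national average would hide.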

Regarding cascades, the Brookings’ paper leans on cascades as an important part of the solution for less discriminatory AVMs. We agree with Brookings: a cascade is the most sophisticated way to use AVMs.  It maximizes accuracy and minimizes forecast error and risk. By subscribing to multiple AVMs, you can rank-order them to choose the highest performing AVM for each situation, which we call using a Model Preference Table™. The best possible AVM selection approach is a cascade, which combines that MPT™ with business logic to define when an AVM’s response is acceptable and when it should be set aside for the next AVM or another form of valuation.  The business logic can incorporate the Forecast Standard Deviation provided by the model and the institution’s own risk-tolerance to determine when a value estimate is acceptable.
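The cascade logic described above can be sketched in a few lines. The model names and FSD cutoff below are hypothetical; a real cascade would also vary the MPT™ ordering by geography, price tier, and property type:

```python
# Simplified cascade: try models in ranked order (the Model Preference
# Table for this segment) and accept the first estimate whose Forecast
# Standard Deviation falls within the institution's risk tolerance.

def run_cascade(responses, mpt_order, max_fsd):
    """responses: {model_name: (estimate, fsd)} for models that returned
    a value. Returns (model_name, estimate), or None to signal falling
    back to another form of valuation."""
    for model in mpt_order:
        hit = responses.get(model)
        if hit is None:
            continue  # this model had no hit for the property
        estimate, fsd = hit
        if fsd <= max_fsd:
            return model, estimate  # first acceptable estimate wins
    return None

# model_a responds but with too much forecast error; model_b is accepted.
responses = {"model_a": (410_000, 0.18), "model_b": (402_000, 0.09)}
print(run_cascade(responses, ["model_a", "model_b", "model_c"], max_fsd=0.13))
```

The key design choice is that ordering (the MPT™) and acceptance (the FSD threshold) are separate controls, so an institution can tune its risk tolerance without re-ranking models.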

Mark Sennott, an industry insider, recently published a whitepaper describing current issues with cascades, namely that some AVM resellers will give favorable positions to AVMs based on favors, pricing, or other factors that do NOT include performance as evaluated by independent firms like AVMetrics. This goes to the additional transparency for which Brookings advocates. We’re all in favor.

We actually see a strong parallel between Mark Sennott’s whitepaper and the Brookings’ paper. Brookings makes the case to regulators, whereas Sennott was speaking to the AVM industry, but both of them argue for more transparency and responsible leadership by the industry. Sennott appears to be very prescient, in retrospect.

In order to ensure that adequate testing is done regularly, we recommend implementing a control to create transparency around how the GSEs and other originators perform their testing. This could be done in a variety of ways. One method might require the GSE or lending institution to indicate its last AVM testing date on each appraisal waiver. Regardless of how it’s done, the goal would be to create a mechanism that increases commitment to appropriate testing. The GSEs could play a leadership role by demonstrating how they would like lending institutions to document their independent AVM testing as required by OCC 2010-42 and 2011-12.

In conclusion, we appreciate Brookings’ dedication to asking questions and providing perspective on the AVM industry. We share their goals for transparency, fairness, and accuracy. We believe that open dialogue and collaboration by all the valuation industry participants are the keys to advancing the responsible use of AVMs.

We look forward to continuing our contributions to the AVM community and working toward a brighter future for this essential technology.

How AVMetrics Tests AVMs Using our New Testing Methodology

Testing an AVM’s accuracy can actually be quite tricky. You might think that you simply compare an AVM valuation to a corresponding actual sales price – technically a fair sale on the open market – but that’s just the beginning. Here’s why it’s hard:

  • You need to get those matching values and benchmark sales in large quantities – like hundreds of thousands – if you want to cover the whole nation and be able to test different price ranges and property types (AVMetrics compiled close to 4 million valid benchmarks in 2021).
  • You need to scrub out foreclosure sales and other bad benchmarks.
  • And perhaps most difficult, you need to test the AVMs’ valuations BEFORE the corresponding benchmark sale is made public. If you don’t, then the AVM builders, whose business is up-to-date data, will incorporate that price information into their models and essentially invalidate the test. (You can’t really have a test where the subject knows the answer ahead of time.)

Here’s a secret about that third part: some of the AVM builders are also the same companies that are the premier providers of real estate data, including MLS data. What if the models are using MLS data listing price feeds to “anchor” their models based on the listing price of a home? If they are the source of the data, how can you test them before they get the data? We now know how.

We have spent years developing and implementing a solution because we wanted to level the playing field for every AVM builder and model. We ask each AVM to value every home in America each month, and each provides roughly 110 million AVM valuations monthly. There are over 25 different commercially available AVMs that we test regularly. That adds up to a lot of data.

A few years ago, it wouldn’t have been feasible to accumulate data at that scale. But now that computing and storage costs make it feasible, the AVM builders themselves are enthusiastic about it. They like the idea of a fair and square competition. We now have valuations for every property BEFORE it’s sold, and in fact, before it’s listed.

As we have for well over a decade now, we gather actual sales to use as the benchmarks against which to measure the accuracy of the AVMs.  We scrub these actual sales prices to ensure that they are for arm’s-length transactions between willing buyers and sellers — the best and most reliable indicator of market value. Then we use proprietary algorithms to match benchmark values to the most recent usable AVM estimated value. Using our massive database, we ensure that each model has the same opportunity to predict the sales price of each benchmark.
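Conceptually, that matching step pairs each benchmark sale with the latest stored estimate that predates the listing, so the model could not have seen the list price. A simplified sketch, with a hypothetical record layout rather than our production algorithm:

```python
from datetime import date

# Hypothetical monthly snapshots: property_id -> [(as_of_date, estimate), ...]
snapshots = {
    "prop1": [(date(2024, 1, 1), 480_000),
              (date(2024, 2, 1), 485_000),
              (date(2024, 3, 1), 510_000)],  # post-listing: possibly anchored
}

def pre_listing_estimate(snapshots, property_id, listing_date):
    """Most recent stored estimate strictly before the listing date,
    i.e., before the model could have seen the list price."""
    usable = [(d, v) for d, v in snapshots.get(property_id, []) if d < listing_date]
    return max(usable)[1] if usable else None  # max by date, then take value

# Listed Feb 15 and sold later: the benchmark is tested against the
# Feb 1 estimate, not the post-listing March one.
print(pre_listing_estimate(snapshots, "prop1", date(2024, 2, 15)))
```

The real matching must also handle re-listings, delistings, and models with differing data lags, which is where most of the complexity lives.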

AVMetrics next performs a variety of statistical analyses on the results, breaking down each individual market, each price range, and each property type, and develops results which characterize each model’s success in terms of precision, usability, error and accuracy.  AVMetrics analyzes trends at the global, market and individual model levels. We also identify where there are strengths and weaknesses and where performance improved or declined.
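As an illustration of the kinds of summary statistics involved, here is a minimal sketch. The metric set is illustrative, not AVMetrics’ exact definitions:

```python
import statistics

def summarize(estimates, sale_prices, tolerance=0.10):
    """Percentage-error summary for one model: median error (bias),
    median absolute error (accuracy), error dispersion (precision),
    and the share of estimates within +/- tolerance of the sale price."""
    errors = [(e - p) / p for e, p in zip(estimates, sale_prices)]
    return {
        "median_error": statistics.median(errors),
        "median_abs_error": statistics.median(abs(e) for e in errors),
        "stdev_error": statistics.stdev(errors),
        "within_tolerance": sum(abs(e) <= tolerance for e in errors) / len(errors),
    }

# Toy sample of three matched (estimate, benchmark sale) pairs:
stats = summarize([210_000, 295_000, 460_000], [200_000, 300_000, 400_000])
print(stats)
```

Breaking these same statistics out by market, price range, and property type is what turns a single headline number into a usable risk profile.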

In the spirit of continuous improvement, AVMetrics provides each model builder an anonymized comprehensive comparative analysis showing where their models stack up against all of the models in the test; this invaluable information facilitates their ongoing efforts to improve their models.

Finally, in addition to quantitative testing, AVMetrics circulates a comprehensive vendor questionnaire semi-annually.  Vendors that wish to participate in the testing process answer roughly 100 parameter, data, methodology, staffing and internal testing questions for each model being tested.  These enable AVMetrics and our clients to understand model differences within both testing and production contexts. The questionnaire also enables us and our clients to satisfy certain regulatory requirements describing the evaluation and selection of models (see OCC 2010-42 and 2011-12).

 

 

 

Property Inspection Waivers Took Off After the Pandemic Set In

Appraisals are the gold standard when it comes to valuing residential real estate, but they aren’t always necessary. They’re expensive and time-consuming, and in the era of COVID-19, they’re inconvenient. What’s the alternative?

Well, Fannie and Freddie implemented a “Property Inspection Waiver” (PIW) alternative more than a decade ago. However, it’s been slow to catch on.

But now, maybe the tipping point has arrived during the pandemic. Recently published data by Fannie and Freddie show approximately 33% of properties were valued without a traditional appraisal! (Most, if not all, would have used an AVM as part of the appraisal waiver process.) Ed Pinto at AEI’s Housing Center calls it a hockey stick.

https://www.aei.org/research-products/report/prevalence-of-appraisal-waivers-at-the-gses-including-cltv-statistics/

So, what changed? Here are some thoughts and hypotheses:

  1. Guidelines changed a little. We can see in the data that Freddie did almost zero PIWs on cash-out loans, but in May that changed, and at least for LTVs below 70%, they did almost 15,000 cash-out loans with no appraisal.
  2. AVMs changed. Back when PIWs were introduced, AVMs operated in a +/- 10% paradigm. They were more concerned with hit rates than anything else, and they worked best on tract homes. But today they operate in a +/- 4% world, hit rates are great, and cascades allow lenders to pick the AVM that’s most accurate for the application.
  3. Borrowers changed. These days, borrowers have grown up with online tools that give them answers. They are more likely to read about their symptoms on WebMD before going to the doctor, and they are more likely to look their home up on Zillow before calling their realtor. In the past, if a home was purchased with a low LTV, who was it that required an appraisal? Typically, it was borrowers who wanted the appraisal – more as a safety blanket than anything else. They wanted reassurance that they were not getting ripped off. Today, for some people, Zillow can provide that reassurance without the $500 expense.
  4. Lenders changed. You would think that they are nimble and adaptable to new opportunities. But where the rubber meets the road, it’s still people talking to customers, and underwriters signing off on loans. If loan officers aren’t aware of the guidelines, they’ll just order an appraisal. Often ordering an appraisal, because it can take so long, is just about one of the first things done in the process, regardless of whether it’s necessary. After all, it’s usually necessary, and it takes SO long (relatively speaking, of course). I have known lenders who required their loan officers to collect money for an appraisal to demonstrate customer commitment. But, lenders are starting to incorporate PIWs into their processes and take advantage of those opportunities to present a loan option with $500 less in costs.

Accurate AVMs are a necessary but not sufficient criterion for PIWs, and now that AVMs are much more accurate, PIWs are much more practical, and we’re seeing much higher adoption.

So now what should we expect going forward? The trend will likely continue. There’s a lot of room left in some of those categories for PIWs to grab a larger share.

If agencies are doing it, everyone else will. If there are lenders not using PIWs to the extent possible, they are going to be at a disadvantage.