Tag: AVMs

Honors for the #1 AVM Change Hands in Q3

The graphic shows which AVM was tops in each county over the last 8 quarters; the constantly changing colors reflect the 16 or 17 AVMs that claim the top spot in at least one county each quarter.

We’ve got the update for Q3 2022. Our top AVM GIF shows the #1 AVM in each county going back 8 quarters. This graphic demonstrates why we never recommend using a single AVM. Again, there are 19 AVMs in the most recent quarter that are “tops” in at least one county!

The expert approach is to use a Model Preference Table® to identify the best AVM in each region. (Actually, our MPT® typically identifies the top 3 AVMs in each county.) Or, you could use a cascade to tap into the best AVM for whatever your application may be.
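To make the preference-table idea concrete, here is a minimal Python sketch of a county-keyed lookup. The county keys, model names, and rankings below are invented for illustration; a real MPT® is derived from quarterly testing against benchmark sales.

```python
# Hypothetical Model Preference Table: county -> AVMs ranked best-first.
# All names and rankings below are invented for illustration only.
MPT = {
    "King, WA":        ["Model F", "Model B", "Model K"],
    "Los Angeles, CA": ["Model B", "Model Q", "Model A"],
    "Santa Clara, CA": ["Model Q", "Model F", "Model P"],
}

DEFAULT_RANKING = ["Model A", "Model B", "Model C"]  # national fallback

def ranked_models(county):
    """Return the top-3 AVMs for a county, falling back to a default ranking."""
    return MPT.get(county, DEFAULT_RANKING)

print(ranked_models("King, WA"))   # county-specific ranking
print(ranked_models("Ada, ID"))    # county not in the table -> fallback
```

The point of the table structure is that the "best" model is a local, quarterly answer, not a global one.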

This time, the Seattle area and the Los Angeles region stayed light blue, just like the previous quarter. But, most of the populous counties in Northern California changed hands. Sacramento was the exception, but Santa Clara, Alameda, Contra Costa, San Mateo and some smaller counties like Calaveras (which means “skulls”) changed sweaters. Together they account for 6 million northern Californians who just got a new champion AVM.

A number of rural states changed hands almost completely… again. New Mexico, Wyoming, North Dakota, South Dakota, Montana and Nebraska as well as Arkansas, Mississippi, Alabama and rural Georgia crowned different champions for most counties. I could go on.

All that goes to show the importance of using multiple AVMs and getting intelligence on how accurate and precise each AVM is.

 

Honors for the #1 AVM Change Hands in Q2

#1 AVM in each county in Q2 2022. The honors for the best AVM change hands pretty frequently.

We’ve got the update for Q2 2022. Our top AVM GIF shows the #1 AVM in each county going back 8 quarters. This graphic demonstrates why we never recommend using a single AVM. There are 19 AVMs in the most recent quarter that are “tops” in at least one county (one more than in Q1)!

The expert approach is to use a Model Preference Table® to identify the best AVM in each region. (Actually, our MPT® typically identifies the top 3 AVMs in each county.)

One great example is the Seattle area. Over the last two years, you would need seven AVMs to cover the 5 most populous counties of the Seattle environs with the best AVM. What’s more, the top spot in King County has been held by 3 different AVMs.

A number of rural states changed hands almost completely. New Mexico, Wyoming, North Dakota, South Dakota, Montana and Kansas crowned different champions for most counties.

All that goes to show the importance of using multiple AVMs and getting intelligence on how accurate and precise each AVM is.

 

AVM Regulation – Twists and Turns to Get Here

The Era of Full Steam Ahead!

Six months before the pandemic, we published an article on the outlook for regulation related to AVMs. At the time, we identified three trends.

  1. The administration was encouraging more use of AVMs (e.g., via hybrids), and tempering that with calls for close monitoring of AVMs.
  2. The de minimis threshold change foreshadowed an increase in reliance on AVMs in some lower value mortgages.
  3. The Appraisal Subcommittee summit was focused on standardization across agencies and alternative valuation products, namely, AVMs. Conversation focused on quality and risk as well as speed.

We saw those trends pointing to increased AVM use balanced by a focus on risk, quality and efficiency.

Sure enough, the following events unfolded:

  1. The de minimis threshold was indeed raised, right before the pandemic changed everything.
  2. The appraisal business was turned upside down for a period during the pandemic.
  3. Property Inspection Waivers (PIWs) took off in a big way as Fannie and Freddie skipped appraisals on a huge percentage of their originations (up to 40% at times).

Halt! About Face!

And then the new administration changed the focus entirely. No longer were the conversations about speed, efficiency, quality, risk and appraisers being focused on their highest and best use. Instead, conversations focused on bias.

Fannie produced a report on bias in appraisals. CFPB began moving on new AVM guidelines and proposed using the “fifth factor” to measure Fair Lending implications for AVMs. Congress held committee hearings on AVM bias.

New Direction

Then The Appraisal Foundation’s Industry Advisory Council produced an AVM Task Force Report. Two of AVMetrics’ staff participated on the task force and helped present its findings recently in Washington D.C.

The Task Force made specific recommendations, but first it helped educate regulators about the AVM industry.

One specific recommendation was to consider certification for AVMs. Another was to use the same USPAP framework for the oversight of AVMs as is used for the oversight of appraisals. It’s all laid out in the AVM Task Force Report.

Taking It All In

Our assessment three years ago was eerily accurate for the subsequent two years. Even the unexpected pandemic generally moved things in the direction that we were pointing to: increased use of AVMs through hybrids.

What we failed to anticipate back then was a complete change in direction with the new administration, and maybe that’s to be expected. It’s hard to see around the corner to a new administration, with new personnel, priorities and policy objectives.

The Task Force Report provides some very practical direction for regulations. But the recent emphasis on fair lending, which emerged after the Task Force began meeting and forming its recommendations, could influence the direction of things. The end result is a combination of more clarity and, at the same time, new uncertainty.

Honors for the #1 AVM Change Hands

Top AVM by county for the last 8 quarters shows a very dynamic market with constant lead changes.

We’ve updated our Top AVM GIF showing the #1 AVM in each county going back 8 quarters. This graphic demonstrates why we never recommend using a single AVM. There are 18 AVMs in the most recent quarter that are “tops” in at least one county!

The expert approach is to use a Model Preference Table to identify the best AVM in each region. (Actually, our MPT® typically identifies the top 3 AVMs in each county.)

Take the Seattle area for example. Over the last two years, you would almost always need two or three AVMs to cover the most populous 5 counties of the Seattle environs with the best AVM. However, it’s not always the same two or three. There are four of them that cycle through the top spots.

Texas is dominated by either Model A, Model P or Model Q. But that domination is really just a reflection of the vast areas of sparsely inhabited counties. The densely populated counties in the triangle from Dallas south along I-35 to San Antonio and then east along I-10 to Houston cycle through different colors every quarter. The bottom line is that there’s no single model that is best in Texas for more than a quarter, and typically, it would require four or five models to cover the populous counties effectively.

 

Demystifying home pricing models with Lee Kennedy

Earlier this year, Lee Kennedy appeared with Matthew Blake on the HousingWire Daily podcast, Houses in Motion.

They covered a number of topics in valuations, from iBuying to AVMs, including:

  • Democratizing the treasure trove of appraisal data that Fannie Mae maintains
  • The inputs into AVMs
  • What fraction of the housing market can effectively be valued by AVMs
  • How to use multiple AVMs effectively
  • What complexities Zillow was dealing with in their iBuying endeavor

 


AVMetrics Responds to FHFA on New Appraisal Practices

FHFA, the oversight agency for Fannie Mae and Freddie Mac, published a Request for Input on December 28, 2020. The RFI covered Appraisal-Related Policies, Practices and Processes. AVMetrics put forth a response including several pages and several exhibits making the case for using AVMs responsibly and effectively in a Model Preference Table®. Here is the Executive Summary:

The lynchpin to many of the appraisal alternatives is an Automated Valuation Model, a subject which AVMetrics has studied assiduously and relentlessly for more than 15 years. We point out that even an excellent AVM can be improved by the use of a Model Preference Table. MPTs enable better accuracy, fewer “no hits” and fewer overvaluations.

We also suggest an escalated focus on AVM testing, and we use our own research and citations of OCC Interagency Guidelines to emphasize the importance of testing to effectively use AVMs. We suggest that an “FSD Analysis” like the one we describe reduces risk by avoiding higher risk circumstances for using an AVM.

We suggest that the implementation of a universal MPT by the Enterprises will improve the collateral tools available and reduce the risk of manipulation by lenders. We also believe that a universal MPT can help redeploy appraisers to their highest and best use: the qualitative aspects of appraisal work. Our suggestion is that the GSEs endeavor to make the increased use of AVMs a benefit to appraisers, increasing their value-added and bringing them along in the transition.

AVMetrics’ full response is available here:

Four Points to Consider Before Outsourcing AVM Validation

AVMs are not only fairly accurate, they are also affordable and easy to use.  Unfortunately, using them in a “compliant” fashion is not as easy.  Regulatory Bulletins OCC 2010-42 and OCC 2011-12 describe a lot of requirements that can be challenging for a regional or community institution:

  1. ongoing independent testing and validation and documentation of testing;
  2. understanding each AVM model’s conceptual and methodological soundness;
  3. documenting policies and procedures that define how to use AVMs and when not to use AVMs;
  4. establishing targets for accuracy and tolerances for acceptable discrepancies. 

The extent to which these requirements are applied by your regulator is most likely proportional to the extent to which AVMs are used within your organization; if AVMs are used extensively, regulatory oversight will likely demand much tighter adherence to the requirements as well as much more comprehensive policies and procedures.

Although compliance itself is not a function that can be outsourced (it is the sole responsibility of the institution), elements of the regulatory requirements can be effectively handled outside the organization through outsourcing.  As an example, the first bullet point, “ongoing independent testing and validation and documentation of testing,” requires resources with the competence and influence to effectively challenge AVM models. In addition, the “independent” aspect is challenging to accomplish unless a separate department is established within the institution that does not report up through the product and/or procurement verticals (e.g., similar to Audit or Model Risk Management). Whether your institution is a heavy AVM user or not, the good news is that finding the right third party to outsource to will facilitate all of the bullet points above:

  1. documentation is included as part of an independent testing and validation process and it can be incorporated into your policies and procedures;
  2. the results of the testing will help you shape your understanding of where and when AVMs can and cannot be used;
  3. the results of the testing will inform your decisions regarding the accuracy and performance thresholds that fit within your institution’s risk appetite. In addition,
  4. an outsourced specialist may also be able to provide various levels of consultation assistance in areas where you may not have the internal expertise.

Before deciding whether outsourcing makes sense for you, here are some potential considerations. If you can answer “no” to all of these questions, then outsourcing might be a good option, especially if you don’t have an independent Analytics unit in-house that has the resource bandwidth to accommodate the AVM testing and validation processes:

  1. Is this process strategically critical?  I.e., does your validation of AVMs benefit you competitively in a tangible way?
  2. If your validation of AVMs is inadequate, can this substantially affect your reputation or your position within the marketplace?
  3. Is outsourcing impractical for any reason?  I.e., are there other business functions that preclude separating the validation process?  
  4. Does your institution have the same data availability and economies of scale as a specialist?

The Way Forward

Here are some suggestions on how to go about preparing yourself for selecting your outsource partner:

  1. Specify what you need outsourced.  If you already have Policies and Procedures documented and processes in place, there may be no need to look for that capability, but there will necessarily still be the need to incorporate any testing and validation results into your existing policies and procedures.  If you have previously done extensive evaluations of the AVMs that you use, in terms of their models’ conceptual soundness and outcomes analysis, there’s no need to contract for that, either.  See our article on Regulatory Oversight to get some ideas about those requirements.
  2. Identify possible partners, such as AVMetrics, and evaluate their fit.  Here’s what to look for:
    • Expertise.  It’s a technical job, requiring a fair amount of analysis and a tremendous amount of knowledge about regulatory requirements in general, and specifically knowledge relative to AVMs; check the résumés of the experts with whom you plan to partner.
    • Independence.  A vendor who also sells, builds, resells, uses or advocates for certain AVMs may be biased (or may appear to be biased) in auditing them; validation must be able to “effectively challenge” the models being tested.
    • Track record.  Stable partners are better, and a long term relationship lowers the cost of outsourcing; so look for a partner with a successful track record in performing AVM validations.
  3. Open up conversations with potential partners early because the process can take months, particularly if policies and procedures need to be developed; although validations can be successfully completed in a matter of days, that is not the norm.
  4. Make sure your staff has enough familiarity with the regulatory requirements so as to be able to oversee the vendor’s work; remember that the responsibility for compliance is ultimately on you. Make sure the vendor’s process and results are clearly and comprehensively documented and then ensure that Internal Audit and Compliance are part of that oversight.  “Outsource” doesn’t mean “forget about it;” thorough and complete understanding and documentation is part of the requirements.
  5. Have a plan for ongoing compliance, whether it is to transition to internal resources or to retain vendors indefinitely.  Set expectations for the frequency of the validation process, which regulations require to be at least annual, or more frequent commensurate with the extent of your AVM usage.

In Conclusion

AVM testing and validation is only one component in your overall valuation and evaluation program. Unlike appraisals and some other forms of collateral valuation, AVMs, by their nature as quantitative predictive models, lend themselves to just the type of statistically-based outcomes analysis the regulators set forth. Recognizing this, elements of the requirements can be outsourced, but outsourcing must be a complement to enterprise-wide policies and practices around the permissible, safe and prudent use of valuation tools and technologies.

The process of validating and documenting AVMs may seem daunting at first, but for the past 10 years AVMetrics has been providing peace of mind for our customers, whether as the sole source of an outsourced testing and validation process (one that tests every commercial AVM four times a year) or as a partner in transitioning the process in-house.  Our experience, professional resources and depth of data have enabled us to standardize much of the processing while still providing the customization every institution needs.  And probably one of the most critical boxes you can check off when outsourcing with AVMetrics is the very large one that requires independence. It also bears mentioning that, having been around as long as we have, our customers have generally all been through at least one round of regulatory scrutiny, and the AVMetrics process has always passed regulatory muster.  Regulatory reviews already present enough of a challenge, so having a partner with established credentials is critical for a smooth process.

In the World of AVMs, Confidence Isn’t Overrated

Hit Rate is a key metric that AVM users care about. After all, if the AVM doesn’t provide a valuation, what’s the point? But savvy users understand that not all hits are created equal. In fact, they might be better off without some of those “hits.”

Every AVM builder provides a “confidence score” along with each valuation. Users often don’t know how much confidence to put in the confidence score, so we did some analysis to clarify just how much confidence is warranted.

In the first quarter of 2020, we grouped hundreds of thousands of AVM valuations from five AVMs by their confidence score ranges. For convenience’s sake, we grouped them into “high,” “medium,” “low” and “fuhgeddaboutit” (aka, “not rated”).[1] And, we analyzed the AVMs’ performance against benchmarks in the same time periods. What we found won’t surprise anyone at first glance:

  • Better confidence scores were highly correlated with better AVM performance.
  • The lower two tiers were not even worth using.
  • The majority of valuations are in the top one or two tiers.
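As a sketch, the grouping described above can be expressed in a few lines of Python. The cutoffs mirror the ones in footnote [1]; the sample valuations and scores are invented for illustration.

```python
# Bucket a confidence score using the cutoffs from footnote [1]:
# 90+ "high", 80-90 "medium", 70-80 "low", below 70 "not rated".
def confidence_bucket(score):
    if score >= 90:
        return "high"
    if score >= 80:
        return "medium"
    if score >= 70:
        return "low"
    return "not rated"

# Invented sample valuations: (AVM estimate, confidence score)
for value, score in [(312_000, 95), (287_500, 83), (450_000, 71), (199_000, 55)]:
    print(f"${value:,} -> {confidence_bucket(score)}")
```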

However, consider that unsophisticated users might simply use a valuation returned by an AVM regardless of the confidence score. One rationale is that any value estimate is better than nothing, and this is the valuation that is available. Other users may not know how seriously to take the “confidence score;” they may figure that the AVM supplier is simply hedging a bit more on this valuation.[2]

Figure 1 shows the correlation for Model #4 in our test between the predicted price and the actual sales price for each group of model-supplied confidence scores. As you can see, as the confidence score goes up, so do the correlation[3] of the model and the accuracy of the prediction, as evidenced by the drop in the Average Variance.

Figure 1 Variance and correlation between model prediction and sales price, grouped by confidence scores

Table 1 lays out 4 key performance metrics for AVMs. They demonstrate markedly different performance for different confidence score buckets. For example, the “high” confidence score bucket for Model 1 performs significantly better in every metric than the other buckets, and, what’s more, that confidence bucket makes up 80% of the AVM valuations returned by Model 1.

Table 1 Q1 2020 performance of 5 actual commercial grade AVMs measured against benchmarks
  • Avg Variance [4] of 0.7% shows valuations that center very near the benchmarks, whereas lower confidence scores show a strong tendency to overvalue by 4-7%.
  • Avg Absolute Variance [5] of 4.4% shows fairly tight (precise) valuations, whereas the other buckets are all double-digits.
  • PPE10 [6] of 90% means that 90% of “high” confidence score valuations are within +/- 10%. Other confidence buckets range from 67% to even below 50%.
  • PPE>20 [7] measures excessive overvaluations (greater than 20%), which can create very high-risk situations for lenders. In the “high” confidence bucket, they are almost nonexistent at 1.8%, but in other buckets they are 13%, 28% or even 31.6%.
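For clarity, the four metrics above can be computed with a short Python sketch. The (estimate, benchmark) pairs are invented; “variance” here means percentage error relative to the benchmark sale price, as in the table.

```python
# Compute the four AVM performance metrics described above on an invented
# sample of (AVM estimate, benchmark sale price) pairs.
def avm_metrics(pairs):
    errors = [(est - bench) / bench for est, bench in pairs]  # percentage errors
    n = len(errors)
    return {
        "avg_variance":     sum(errors) / n,                          # Mean Error (ME)
        "avg_abs_variance": sum(abs(e) for e in errors) / n,          # Mean Absolute Error (MAE)
        "ppe10":            sum(abs(e) <= 0.10 for e in errors) / n,  # share within +/-10%
        "ppe_gt20":         sum(e > 0.20 for e in errors) / n,        # share overvalued by >20%
    }

sample = [(210_000, 200_000), (495_000, 500_000), (330_000, 300_000), (385_000, 350_000)]
print({k: round(v, 3) for k, v in avm_metrics(sample).items()})
```

Run per confidence bucket, this is exactly the kind of comparison Table 1 summarizes.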

This last metric mentioned is instructive. Model 1 is a very-high-performing AVM. However, in a certain small segment (about 3%), acknowledged by very low confidence scores, the model has a tendency to over-value properties by 20% or more almost one-third of the time.

The good news is that the model warns users of the diminished accuracy of certain estimates, but it’s up to the user to realize when to disregard those valuations. A close look at the table shows that with different models, there are different cut-offs that might be appropriate. Not every user’s risk appetite is the same, but we’ve highlighted certain buckets that might be deemed acceptable.

Model 2 and Model 5, for example, have very different profiles. Whereas Model 1 produced a majority of valuations with a “high” confidence level, Model 2 and Model 5 put very few valuations into that category. “Confidence scores” don’t have a fixed method of calculation that is standardized between Models. It’s possible that Model 2 and Model 5 use their labels more conservatively. That’s one more reason that users should test the models that they use and not simply expect them to perform similarly and use labels consistently.

That leads into a third conclusion that leaps out of this analysis. There’s a huge advantage to having access to multiple models and the ability to pick and choose between them. It’s not immediately apparent from this analysis, but these models are not all valuing the same properties with “high” confidence (this will be analyzed in two follow-up papers in this series). Model 4 is our top-ranked model overall. However, as shown in Table 2, there are tens of thousands of benchmarks that Model 4 valued with only “medium” or “low” or even “not rated” confidence but for which Model 1 had “high” confidence valuations.

Table 2 The same Q1 2020 comparison against benchmarks, but we removed the benchmarks for which Model 4 had “high” confidence in its valuations leaving a sample size of 129,237 for the other models to value

Different models have strengths in different geographic areas, with different property types or even in different price ranges. The ideal situation is to have several layers of backups, so that if your #1 model struggles with a property and produces a “low” confidence valuation, you have the ability to turn to a second or third model to see if they have a better estimate. This last point is the purpose of Model Preference Tables®. They specify which model ranks first, second and third across every geography, property type and price tranche.  And, users may find that some models are only valuable as a second or third choice in some regions, but by adding them to the panel, the user can avoid that dismal dilemma: “Do I use this valuation that I expect is awful – what other choice do I have?”
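A layered-backup cascade like the one described can be sketched as follows. The model names, ranking, confidence floor, and responses are all invented for illustration; a production cascade would draw its ranking from the MPT® for the subject property’s county.

```python
# Hypothetical cascade: try each model in its preference-table ranking,
# skipping no-hits and valuations whose confidence falls below a usable floor.
RANKING = ["Model 4", "Model 1", "Model 2"]  # invented per-county ranking
MIN_CONFIDENCE = 80                          # invented "medium" floor

def run_cascade(responses, ranking=RANKING, floor=MIN_CONFIDENCE):
    """responses maps model name -> (value, confidence score); no-hits are omitted."""
    for model in ranking:
        hit = responses.get(model)
        if hit is not None and hit[1] >= floor:
            return model, hit
    return None, None  # no usable valuation; escalate to another product

# The top-ranked model hits but with low confidence, so the cascade falls through
# to the second-ranked model's high-confidence valuation.
responses = {"Model 4": (512_000, 72), "Model 1": (498_000, 91)}
print(run_cascade(responses))
```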


[1] We grouped valuations as follows: <70% were considered “not rated,” 70-80% were considered “low,” 80-90% “medium,” and 90+ “high.”

[2] In fact, this isn’t wrong in some cases. For example, in the case of Model 2, the “medium” and “high” confidence valuations don’t differ significantly.

[3] The correlation coefficient indicates the strength of the relationship between two variables and can be found using the following formula:

rxy = Σ(xi − x̄)(yi − ȳ) / √[ Σ(xi − x̄)² · Σ(yi − ȳ)² ]

Where:

  • rxy – the correlation coefficient of the linear relationship between the variables x and y
  • xi – the values of the x-variable in a sample
  • x̄ – the mean of the values of the x-variable
  • yi – the values of the y-variable in a sample
  • ȳ – the mean of the values of the y-variable

[4] Mean Error (ME)

[5] Mean Absolute Error (MAE)

[6] Percentage Predicted Error within +/- 10%

[7] Percentage Predicted Error greater than 20%, aka Right Tail Error