FHFA, the oversight agency for Fannie Mae and Freddie Mac, published a Request for Input (RFI) on December 28, 2020, covering appraisal-related policies, practices and processes. AVMetrics submitted a response, several pages long with several exhibits, making the case for using AVMs responsibly and effectively in a Model Preference Table®. Here is the Executive Summary:
The linchpin of many of the appraisal alternatives is the Automated Valuation Model, a subject which AVMetrics has studied assiduously and relentlessly for more than 15 years. We point out that even an excellent AVM can be improved by the use of a Model Preference Table (MPT). MPTs enable better accuracy, fewer “no hits” and fewer overvaluations.
We also suggest an increased focus on AVM testing, and we use our own research and citations of the OCC Interagency Guidelines to emphasize the importance of testing to using AVMs effectively. We suggest that an “FSD Analysis” like the one we describe reduces risk by avoiding higher-risk circumstances for using an AVM.
We suggest that the implementation of a universal MPT by the Enterprises will improve the collateral tools available and reduce the risk of manipulation by lenders. We also believe that a universal MPT can help redeploy appraisers to their highest and best use: the qualitative aspects of appraisal work. Our suggestion is that the GSEs endeavor to make the increased use of AVMs a benefit to appraisers, increasing their value-added and bringing them along in the transition.
AVMs are not only fairly accurate, they are also affordable and easy to use. Unfortunately, using them in a “compliant” fashion is not as easy. Regulatory bulletins OCC 2010-42 and OCC 2011-12 set out a number of requirements that can be challenging for a regional or community institution:
ongoing independent testing and validation and documentation of testing;
understanding each AVM model’s conceptual and methodological soundness;
documenting policies and procedures that define how to use AVMs and when not to use AVMs;
establishing targets for accuracy and tolerances for acceptable discrepancies.
How strictly your regulator applies these requirements will most likely be proportional to how extensively AVMs are used within your organization; if AVMs are used extensively, regulatory oversight will likely demand much tighter adherence to the requirements as well as much more comprehensive policies and procedures.
Although compliance itself is not a function that can be outsourced (it is the sole responsibility of the institution), elements of the regulatory requirements can be effectively handled outside the organization through outsourcing. As an example, the first bullet point, “ongoing independent testing and validation and documentation of testing,” requires resources with the competence and influence to effectively challenge AVM models. In addition, the “independent” aspect is challenging to accomplish unless a separate department is established within the institution that does not report up through the product and/or procurement verticals (e.g., Audit or Model Risk Management). Whether your institution is a heavy AVM user or not, the good news is that finding the right third party to outsource to will facilitate all of the bullet points above:
documentation is included as part of an independent testing and validation process and it can be incorporated into your policies and procedures;
the results of the testing will help you shape your understanding of where and when AVMs can and cannot be used;
the results of the testing will inform your decisions regarding the accuracy and performance thresholds that fit within your institution’s risk appetite;
in addition, an outsourced specialist may also be able to provide various levels of consulting assistance in areas where you may not have the internal expertise.
Before deciding whether outsourcing makes sense for you, here are some potential considerations. If you can answer “no” to all of these questions, then outsourcing might be a good option, especially if you don’t have an independent Analytics unit in-house that has the resource bandwidth to accommodate the AVM testing and validation processes:
Is this process strategically critical? I.e., does your validation of AVMs benefit you competitively in a tangible way?
If your validation of AVMs is inadequate, can this substantially affect your reputation or your position within the marketplace?
Is outsourcing impractical for any reason? I.e., are there other business functions that preclude separating the validation process?
Does your institution have the same data availability and economies of scale as a specialist?
The Way Forward
Here are some suggestions on how to go about preparing yourself for selecting your outsource partner:
Specify what you need outsourced. If you already have Policies and Procedures documented and processes in place, there may be no need to look for that capability, but there will necessarily still be the need to incorporate any testing and validation results into your existing policies and procedures. If you have previously done extensive evaluations of the AVMs that you use, in terms of their models’ conceptual soundness and outcomes analysis, there’s no need to contract for that, either. See our article on Regulatory Oversight to get some ideas about those requirements.
Identify possible partners, such as AVMetrics, and evaluate their fit. Here’s what to look for:
Expertise. It’s a technical job, requiring a fair amount of analysis and a tremendous amount of knowledge about regulatory requirements in general, and specifically knowledge relative to AVMs; check the résumés of the experts with whom you plan to partner.
Independence. A vendor who also sells, builds, resells, uses or advocates for certain AVMs may be biased (or may appear to be biased) in auditing them; validation must be able to “effectively challenge” the models being tested.
Track record. Stable partners are better, and a long-term relationship lowers the cost of outsourcing, so look for a partner with a successful track record in performing AVM validations.
Open up conversations with potential partners early because the process can take months, particularly if policies and procedures need to be developed; although validations can be successfully completed in a matter of days, that is not the norm.
Make sure your staff has enough familiarity with the regulatory requirements so as to be able to oversee the vendor’s work; remember that the responsibility for compliance is ultimately on you. Make sure the vendor’s process and results are clearly and comprehensively documented and then ensure that Internal Audit and Compliance are part of that oversight. “Outsource” doesn’t mean “forget about it;” thorough and complete understanding and documentation is part of the requirements.
Have a plan for ongoing compliance, whether it is to transition to internal resources or to retain vendors indefinitely. Set expectations for the frequency of the validation process, which regulations require to occur at least annually, or more often commensurate with the extent of your AVM usage.
AVM testing and validation is only one component in your overall valuation and evaluation program. Unlike appraisals and some other forms of collateral valuation, AVMs, by their nature as quantitative predictive models, lend themselves to just the type of statistically based outcomes analysis the regulators set forth. Recognizing this, elements of the requirements can be outsourced, but outsourcing must be a complement to enterprise-wide policies and practices around the permissible, safe and prudent use of valuation tools and technologies.
The process of validating and documenting AVMs may seem daunting at first, but for the past 10 years AVMetrics has been providing peace of mind for our customers, whether as the sole source of an outsourced testing and validation process (one that tests every commercial AVM four times a year), or as a partner in transitioning the process in-house. Our experience, professional resources and depth of data have enabled us to standardize much of the processing while still providing the customization every institution needs. And probably one of the most critical boxes you can check off when outsourcing with AVMetrics is the very large one that requires independence. It also bears mentioning that, having been around as long as we have, our customers have generally all been through at least one round of regulatory scrutiny, and the AVMetrics process has always passed regulatory muster. Regulatory reviews already present enough of a challenge, so having a partner with established credentials is critical for a smooth process.
Hit Rate is a key metric that AVM users care about. After all, if the AVM doesn’t provide a valuation, what’s the point? But savvy users understand that not all hits are created equal. In fact, they might be better off without some of those “hits.”
Every AVM builder provides a “confidence score” along with each valuation. Users often don’t know how much confidence to put in the confidence score, so we did some analysis to clarify just how much confidence is warranted.
In the first quarter of 2020, we grouped hundreds of thousands of AVM valuations from five AVMs by their confidence score ranges. For convenience’s sake, we grouped them into “high,” “medium,” “low” and “fuhgeddaboutit” (aka, “not rated”). And, we analyzed the AVMs’ performance against benchmarks in the same time periods. What we found won’t surprise anyone at first glance:
Better confidence scores were highly correlated with better AVM performance.
The lower two tiers were not even worth using.
The majority of valuations are in the top one or two tiers.
However, consider that unsophisticated users might simply use a valuation returned by an AVM regardless of the confidence score. One rationale is that any value estimate is better than nothing, and this is the valuation that is available. Other users may not know how seriously to take the “confidence score;” they may figure that the AVM supplier is simply hedging a bit more on this valuation.
Figure 1 shows the correlation for Model #4 in our test between the predicted price and the actual sales price for each group of model-supplied confidence scores. As you can see, as the confidence score goes up, so do the correlation of the model and the accuracy of the prediction, as evidenced by the drop in the Average Variance.
Table 1 lays out 4 key performance metrics for AVMs. They demonstrate markedly different performance for different confidence score buckets. For example, the “high” confidence score bucket for Model 1 performs significantly better in every metric than the other buckets, and what’s more that confidence bucket makes up 80% of the AVM valuations returned by Model 1.
Avg Variance of 0.7% shows valuations that center very near the benchmarks, whereas lower confidence buckets show a strong tendency to overvalue by 4-7%.
PPE10 of 90% means that 90% of “high” confidence score valuations are within +/- 10% of the benchmark. Other confidence buckets range from 67% to even below 50%.
PPE>20 measures excessive overvaluations (greater than 20%), which can create very high-risk situations for lenders. In the “high” confidence bucket, they are almost nonexistent at 1.8%, but in other buckets they run 13%, 28% or even 31.6%.
This last metric mentioned is instructive. Model 1 is a very-high-performing AVM. However, in a certain small segment (about 3%), acknowledged by very low confidence scores, the model has a tendency to over-value properties by 20% or more almost one-third of the time.
The good news is that the model warns users of the diminished accuracy of certain estimates, but it’s up to the user to realize when to disregard those valuations. A close look at the table shows that with different models, there are different cut-offs that might be appropriate. Not every user’s risk appetite is the same, but we’ve highlighted certain buckets that might be deemed acceptable.
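As a rough illustration, the bucketed metrics discussed above can be computed in a few lines. The tuple layout, bucket labels and exact metric definitions here are our own simplified assumptions for the sketch, not any model vendor's format:

```python
from statistics import mean

def bucket_metrics(valuations):
    """Compute per-confidence-bucket accuracy metrics for AVM estimates.

    `valuations` is a list of (confidence_bucket, avm_estimate, benchmark_price)
    tuples. Metric definitions follow the discussion above:
      - avg_variance: mean of (estimate - benchmark) / benchmark
      - ppe10:        share of estimates within +/-10% of the benchmark
      - ppe_gt20:     share of estimates more than 20% ABOVE the benchmark
    """
    by_bucket = {}
    for bucket, est, bench in valuations:
        by_bucket.setdefault(bucket, []).append((est - bench) / bench)
    out = {}
    for bucket, errors in by_bucket.items():
        out[bucket] = {
            "n": len(errors),
            "avg_variance": mean(errors),
            "ppe10": sum(abs(e) <= 0.10 for e in errors) / len(errors),
            "ppe_gt20": sum(e > 0.20 for e in errors) / len(errors),
        }
    return out
```

A real validation would run this over out-of-sample benchmark sales, exactly the kind of testing the regulators describe.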
Model 2 and Model 5, for example, have very different profiles. Whereas Model 1 produced a majority of valuations with a “high” confidence level, Model 2 and Model 5 put very few valuations into that category. “Confidence scores” don’t have a fixed method of calculation that is standardized across models. It’s possible that Model 2 and Model 5 use their labels more conservatively. That’s one more reason that users should test the models that they use and not simply expect them to perform similarly and use labels consistently.
That leads into a third conclusion that leaps out of this analysis. There’s a huge advantage to having access to multiple models and the ability to pick and choose between them. It’s not immediately apparent from this analysis, but these models are not all valuing the same properties with “high” confidence (this will be analyzed in two follow-up papers in this series). Model 4 is our top-ranked model overall. However, as shown in Table 2, there are tens of thousands of benchmarks that Model 4 valued with only “medium” or “low” or even “not rated” confidence but for which Model 1 had “high” confidence valuations.
Different models have strengths in different geographic areas, with different property types or even in different price ranges. The ideal situation is to have several layers of backups, so that if your #1 model struggles with a property and produces a “low” confidence valuation, you have the ability to turn to a second or third model to see if they have a better estimate. This last point is the purpose of Model Preference Tables®. They specify which model ranks first, second and third across every geography, property type and price tranche. And, users may find that some models are only valuable as a second or third choice in some regions, but by adding them to the panel, the user can avoid that dismal dilemma: “Do I use this valuation that I expect is awful – what other choice do I have?”
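The fallback cascade just described can be sketched as follows. The table contents, segment keys, model names and the `fetch` callable are all hypothetical placeholders; a production system would call each vendor's actual AVM service and apply its own tested confidence cut-offs:

```python
# Hypothetical Model Preference Table keyed by market segment:
# (county, price_tier) -> models ranked #1, #2, #3
MPT = {
    ("Clark", "entry"): ["ModelX24", "ModelAM39", "ModelZ7"],
    ("Clark", "high"):  ["ModelAM39", "ModelB2"],
}

def cascade_valuation(segment, address, fetch, min_confidence=0.80):
    """Try each model in preference order; skip no-hits and low-confidence hits.

    `fetch(model, address)` stands in for a real AVM call and returns
    (estimate, confidence) or None for a "no hit".
    """
    for model in MPT.get(segment, []):
        result = fetch(model, address)
        if result is None:            # "no hit": fall through to the next model
            continue
        estimate, confidence = result
        if confidence >= min_confidence:
            return model, estimate    # first acceptable hit wins
    return None                       # no acceptable valuation from the panel
```

The point of the cascade is exactly the one made above: a low-confidence or missing valuation from the #1 model need not end the process.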
 We grouped valuations as follows: <70% were considered “not rated,” 70-80% were considered “low,” 80-90% “medium,” and 90+ “high.”
 In fact, this isn’t wrong in some cases. For example, in the case of Model 2, the “medium” and “high” confidence valuations don’t differ significantly.
The correlation coefficient, which indicates the strength of the linear relationship between two variables, can be found using the following formula:
r_xy = Σ(x_i − x̄)(y_i − ȳ) / √[ Σ(x_i − x̄)² · Σ(y_i − ȳ)² ]
where r_xy is the correlation coefficient of the linear relationship between the variables x and y, x̄ and ȳ are the means of x and y, and the sums run over all paired observations.
The AVMNews sat down with our publisher Lee Kennedy to discuss trends in the industry.
AVMNews: Lee, as the Managing Director at AVMetrics, you’re sitting at the center of the Automated Valuation Model (AVM) industry. What changes have you seen recently?
Lee: There’s a lot going on. We see firsthand how the evolution of the technology has affected the sector dramatically. The availability of data and the decline in costs of storage and computing power have opened the doors to new competition. We see new entrants using new techniques and built by fresh faces. We still have a number of large players offering well-established AVMs. But, we also see the larger players retiring some of their older models. The established AVM players have responded in some cases by raising their game, and in other cases, by buying their upstart rivals. So, we’ve seen increased competition and increased consolidation at the same time.
And, it’s true that the tools keep getting better. It’s not evenly distributed, but on average they continue to do a better and better job.
AVMNews: In what ways do AVMs continue to get better?
Lee: AVMetrics has been conducting contemporaneous AVM testing for over a decade now, and we have many quantitative metrics showing how much better AVMs are getting. Specifically, we run statistical analysis around the comparison of AVM estimates to sales prices that are unknown to the models. We have seen increases in model accuracy rates measured by percentage of predicted error (PPE), mean absolute error (MAE) and a host of other metrics. Models are getting better at predicting sale prices and when they miss, they don’t miss by as much as they used to.
AVMNews: What about on the regulatory side?
Lee: There is always a lot going on. The regulatory environment has eased in the last two years reflecting a whole new attitude in Washington, D.C. – one that is more open to input and more interested in streamlining. Take, for instance, the 2018 Treasury report that focuses on advancing technologies (See “A Financial System That Creates Economic Opportunities”).
Last November, I was at a key stakeholder forum for the Appraisal Subcommittee (ASC). One area of focus was harmonizing appraisal requirements across agencies. Another major focus was how to effectively employ new tools in support of the appraisal industry, including the growth of Alternative Valuation Products that utilize AVMs.
AVMNews: I know that you also wrote a letter to the Federal Financial Institutions Examination Council (FFIEC) about raising the de minimis threshold, below which some lending guidelines would NOT require an appraisal. This year in July they elected to change the de minimis threshold from $250,000 to $400,000 for residential housing. What are your thoughts?
Lee: Well, I think that the question everyone is struggling with is “What does the future hold for appraisers and AVMs?” Obviously, the field of appraisers is shrinking, and AVMs are economical, faster and improving. How is this going to play out?
First, my strong feeling is that appraisers are a valuable and limited resource, and we need to employ them at their highest and best use. Trying to be a “manual AVM” is not their highest and best use. Their expertise should be focused on the qualitative aspects of the valuation process such as condition, market and locational influences, not the quantitative (facts) such as bed and bath counts. Models do not capture and analyze the qualitative aspects of a property very well.
Several companies are developing ways of merging the robust data processing capabilities of an AVM with the qualitative assessment skills of appraisers. Today, these products typically use an AVM at their core and then satisfy additional FFIEC evaluation criteria (physical property condition, market and location influences) with an additional service. For example, the lender can wrap a Property Condition Report (PCR) around the AVM and reconcile that data in support of a Home Equity Line of Credit (HELOC) lending decision. This type of hybrid product offering is on the track that we’re headed down. Many AMCs and software developers have already created these types of products for proprietary use or for use on multiple platforms.
AVMNews: AVMs were supposed to take over the world. Can you tell us what happened?
Lee: Well, the Financial Crisis is one thing that happened. Lawsuits ensued, and everyone got a lot more conservative. And, the success of AVMs developed into hype that was obviously unrealistic. But, AVMs are starting to gain traction again. We are answering a lot more calls from lenders who want help implementing AVMs in their origination processes. They typically need our help with policies and procedures to stay on the right side of the Office of the Comptroller of the Currency (OCC) regulations, and so in the last year, we’ve done training at several banks.
Everyone is quick to point out that AVMs are not infallible, but AVMs are pretty incredible tools when you consider their speed, accuracy, cost and scalability. And, they are getting more impressive. Behind the curtain the models are using neural networks and machine learning algorithms. Some use creative techniques to adjust prices conditionally in response to situational or temporary conditions. We test them and talk to their developers, and we can see how that creativity translates into improved performance.
AVMNews: You consult to litigants about the use of AVMs in lawsuits. How do you think legal decisions and risk will affect the use of AVMs?
Lee: This is an area of our business, litigation support, where I am restricted from saying very much. It has been and continues to be an enlightening experience, as some of the best minds are involved in all aspects of collateral valuation, and the “Experts” are truly that: experts in their fields as econometricians, statisticians, appraisers, modelers, etc. It is also very interesting, with over 50 cases behind us now, to get a look behind the legal system’s curtain and see how all of that works. One thing I will emphasize is that my comments in this interview concern contemporaneous AVMs, models tested in real time during the periods in question, and not retrospective AVMs looking back at those periods.
AVMNews: AVMetrics now publishes the AVM News – how did that come about?
Lee: As you and the many subscribers know, Perry Minus of Wells Fargo started that publication as a labor of love over a decade ago. When he retired recently, he asked if I would take over as the publisher. We were honored to be trusted with his creation, and we see it as a way to be good citizens and contribute to the industry as a whole.
The AVMNews is a quarterly newsletter that is a compilation of interesting and noteworthy articles, news items and press releases that are relevant to the AVM industry. Published by AVMetrics, the AVMNews endeavors to educate the industry and share knowledge about Automated Valuation Models for the betterment of everyone involved.
After determining that a transaction or property is suitable for valuation by an Automated Valuation Model (AVM), the first decision one must make is “Which AVM to use?” There are many options – over 20 commercially available AVMs – significantly more than just a few years ago. While cost and hit rate may be considerations, model accuracy is the ultimate goal. A few additional estimates that are off by more than 20 percent can seriously increase costs. Inaccuracy can increase second-looks, cause loans not to close at all or even stimulate defaults down the road.
Which is the best AVM?
We test the majority of residential models currently available, and in the nationwide test in Figure 1 below, Model AM-39 (not its real name) was the top of the heap. It has the lowest mean absolute error (MAE), 0.1 points better than the 2nd-place model and a full percentage point better than the 5th-ranked model, which is good, but that’s not everything. Model AM-39 has the highest percentage of estimates within +/- 10% (PPE10%). Model AM-39 also has the 2nd-lowest percentage of extreme overvaluations (>=20%, or RT20 Rate), an especially high-risk type of error indicating a significant right-tailed overvaluation.
If you were shopping for an AVM, you might think that Model AM-39 is the obvious choice. This model performs at the top of the list in just about every measure, right? Well, not so fast. Consider that those measurements are based on testing AVMs across the entire nation, and if you are only doing business in certain geographies, you might only care about which model is most accurate in those areas. Figure 2 shows a ranking of models in Nevada, and if your heart was set on Model AM-39, then you would be relieved to see that it is still in the top 5; in fact, it performs even better when limited to the State of Nevada. However, three models outperform Model AM-39, with Model X-24 leading the pack in accuracy (albeit with a lower Hit Rate).
So, now you might be sold on Model X-24, but you might still look a little deeper. If, for example, you were a credit union in Clark County, you might focus on performance there. While Clark County is pretty diverse, it’s quite different from most other counties in Nevada. In this case, Figure 3 shows that the best model is still Model X-24, and it performs very well at avoiding extreme overvaluations.
However, if your Clark County credit union is focused on entry-level home loans with property values below $100K, you might want to check just that segment of the market. Figure 4 shows that Model X-24 continues to be the best performer in Clark County for this price tier. Note that the other top models, including Model AM-39, show significant weaknesses as their overvaluation tendency climbs into the teens. This is not a slight difference, and it could be important: Model AM-39 is seven times more likely than Model X-24 to overvalue a property by 20%, and those are high-risk errors.
Look carefully at the model results in Figure 4 and you’ll see that Model X-24, while being the most accurate and precise, has the lowest hit rate. That means that about 40% of the time, it does not return a value estimate. The implication is: you really want a second and a third AVM option.
Now let’s consider a different lending pattern for the Clark County credit union: a high-value property lending program. Figure 5 analyzes the over-$650K properties and how the models perform in that price tier. It shows that Model X-24 is no longer in the top five models. The best performer in Clark County for this price tier is Model AM-39, with 92% of valuations within +/-10% and zero overvaluation errors in excess of 20%. The other models in the top five also do a good job of valuing properties in this tier.
Figure 6 summarizes this exercise, which demonstrates the proper thinking when selecting models. First, focus on the market segment that you do business in – don’t use the model that performs best outside your service area. Second, rather than using a single model, you should use several models prioritized into what we call a “Model Preference Table®” in which models are ranked #1, #2, #3 for every segment of your market. Then, as you need to request an evaluation, the system should call the AVM in the #1 spot, and if it doesn’t get an answer, try the next model(s) if available.
In this way, you get the most competent model for the job. Even though one model will test better overall, it won’t be the best model everywhere and for every property type and price range. In our example, the #1 model in the nation was not the preferred model in every market segment we focused on. If we had focused on another geography or market segment, we almost certainly would have seen a reordering of the rankings and possibly even different models showing up in the top 5. The next quarter’s results might be different as well, because all the models’ developers are constantly recalibrating their algorithms; inputs and conditions are changing, and no one can afford to stand still.
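The segment-by-segment selection this exercise walks through amounts to ranking tested models within each market segment. Here is a minimal sketch, in which the segment keys and the single-metric (MAE) ranking rule are illustrative simplifications; a real Model Preference Table would weigh several metrics, including right-tail rates and hit rates:

```python
def build_mpt(test_results, top_n=3):
    """Rank models within each market segment by mean absolute error (ascending).

    `test_results` maps (segment, model) -> MAE from out-of-sample testing.
    Returns {segment: [best model, 2nd best, ...]} truncated to `top_n`.
    """
    by_segment = {}
    for (segment, model), mae in test_results.items():
        by_segment.setdefault(segment, []).append((mae, model))
    return {
        segment: [model for _, model in sorted(scores)[:top_n]]
        for segment, scores in by_segment.items()
    }
```

Because the models are re-tested every quarter, the table itself would be rebuilt each quarter from the latest results.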
For more than 12 years we’ve been testing AVMs and watching them improve over time. More model builders have developed better techniques, and with the falling cost of processing and storage, and with the improving availability of data, AVMs just continue to get better and better.
We aren’t the only ones noticing. We recently read with pleasure Craig Gilbert’s observations of the same phenomenon (Craig is an expert appraiser and co-founder of RAC – Relocation Appraisers and Consultants).
Since co-developing the AVM for Veros in 1999+, I’ve been predicting that AVMs would eventually morph over from Mortgage Origination & Portfolio Valuations, the primary intended uses, into Relocation buyouts. The question has been “when”, not “if”. Relocation represents a microcosm sub-market of the overall residential appraisal business – maybe 5% of the total?
Back in the early days, AVMs were not as accurate as they are today. This has changed. I was thinking about this very thing this morning before opening the current issue of Mobility Magazine, and there it was. The time has arrived.
Read Mobility Magazine December 2018 article “TECHNOLOGY TODAY – What’s Hot for Mobility” written by Steven M. John and Mary-Grace Ellington of HomeServices Relocation.
Here are a few excerpts from the article:
– “Recent experiments to test reliability of AVMs show the results to be comparable to formal, in-person appraisals.”
– “These valuation tools can save significant time and money while offering convenience.”
– “A typical FAVM can be obtained for a fraction of the cost of a traditional appraisal.” [“F” = Forecasting]
– “Target values are not fed into the models, and they are not subject to obvious human bias, so there is perceived impartiality.”
– “Fidelity Residential Solutions has been at the forefront of testing these new tools.”
Some of you may know Lee Kennedy, an independent AVM expert at AVMetrics, which Lee started in 2005. Lee is a really great guy, has been an appraiser since the mid-’80s, has testified as an expert witness in cases involving the use of AVMs and the Financial Crisis, and has spoken at a recent A.I. Symposium. He’s like the AVM gate-keeper. In his blog post titled “The Wild, Wild West of Automated Valuations,” there is a graph showing that the mean absolute error of tested AVMs decreased from 14.7% in 2009 to 5.8% in 2017 and 2018. That is for all AVMs across the entire U.S. Some of course are more accurate than a +/-5.8% error rate, when drilling down to specific neighborhoods and AVMs on a case-by-case basis.
Recently the OCC, FDIC and the Federal Reserve proposed raising the de minimis threshold for residential properties below which appraisals are not required to complete a home loan. Currently, most homes transacting at $250K and above require an appraisal, but Federal regulators propose to raise that level to $400K. A November 30th Wall Street Journal article raises some interesting issues about the topic. They reported that the number of appraisers is down 21% since the housing crisis, but more homes require an appraiser, since more and more homes exceed the threshold each year. The article also states that these factors open the door for cheaper, faster and “largely untested” property valuations based on computer algorithms, also known as Automated Valuation Models (AVMs).
At AVMetrics, we have been continuously testing AVMs for over 15 years, so we’ve seen how they’ve performed over time. As an example, the accompanying chart shows model performance accuracy as measured by mean absolute error, a statistical metric of valuation error. We utilize many statistical measures of evaluating model accuracy and precision, and they all show significant improvement in AVMs over time. And, as these automated tools get better and the workforce of appraisers continues to shrink, the FFIEC members’ proposed change seems warranted, but that doesn’t mean they don’t have their critics.
Ratish Bansal of Appraisal Inc was quoted in The Journal describing the state of AVMs as “a wild, wild West,” inviting, “abuse of all kind.” Furthermore, he contrasts that with the voluminous regulatory standards covering the use of appraisals.
We note that much of those voluminous standards represents nearly the same quality control that was in place before the Credit Crisis. In other words, appraisals are not a guarantee against collateral risk. They are simply one tool in the toolbox – an effective, but comparatively time-consuming and expensive, tool. Also of note, far from being the “wild, wild west,” AVMs are also governed by regulators, most notably Appendix B of the Appraisal and Evaluation Guidelines (OCC 2010-42) and the Model Risk Management guidance (OCC 2011-12). These regulatory guidelines require that AVM developers be qualified, that users of AVMs employ robust controls, that incentives be appropriate, and that models be tested regularly and thoroughly with out-of-sample benchmarks. They require documentation of risk assessments and stipulate that a Board of Directors must oversee the use of all models. In other words, if AVMs are the “wild, wild west,” they are rooted in a town under the oversight of the legendary Wyatt Earp.
My strong feeling is that appraisals should not be a sole and exclusive tool when evaluations can be effectively employed in appropriate, lower-risk scenarios. Appraisers are a valuable and limited resource, and they should be employed at (to use appraisal terminology) their highest and best use. Trying to be a “manual AVM” is not the highest and best use of a highly qualified appraiser. Their expertise should be focused on the qualitative aspects of property valuation such as the property condition and market and locational influences. They should also be focused on performing complex valuation assignments in non-homogeneous markets. AVMs do not capture and analyze the qualitative aspects of a property very well, and they still stumble in markets with highly diverse house stock or houses with less quantifiable attributes such as view properties.
However, several companies are developing ways of merging the robust data processing capabilities of an AVM with the qualitative assessment skills of appraisers. Today, these products typically use an AVM at their core and then satisfy additionally required evaluation criteria (physical property condition, market and location influences) with an additional service. For example, a lender can wrap a Property Condition Report (PCR) around the AVM and reconcile that data in support of a lending decision. This type of “Hybrid valuation” is on the track we’re headed down. Many companies have already created these types of products for commercial and proprietary use.
We at AVMetrics believe in using the right tool for the job, and we believe there is a place for automated valuations in prudent lending practices. We think the smarter approach would be to marginally raise the de minimis threshold, but simultaneously to provide additional guidance for considering other aspects of a lending decision, specifically, collateral considerations and eligibility criteria for appraisal exemptions such as neighborhood homogeneity, property conformity, market conditions and more.
On Monday, June 26, 2017 the Appraisal Institute’s Northern California Chapter hosted an educational seminar in Oakland, CA, with Lee Kennedy as an invited expert. The panel was moderated by Paul E. Chandler, MAI, and Lee’s co-panelists were Michael Simmons of AXIS Appraisal Management and Todd Krell of CrossCheck Compliance. The topic of this educational session was Third Party Vendors, Tools and Compliance: The Role of AMCs and AVMs in the Appraisal Process. The panel was part of the Commercial and Residential Symposium entitled The Role of Valuation Experts in the Current Regulatory Environment and provided both state (7 hrs) and Appraisal Institute continuing education.
Appraiser and appraisal management is at the heart of quality loan production. How does a lender know it is using competent appraisers providing quality reports? Policies for the empanelment of appraisers, procurement of vendors and review and quality control of assignments must be documented, managed and audited. An expanding selection of data and technology options is available to lenders to manage all aspects of collateral due diligence. Tools include fully integrated loan origination systems, appraisal management platforms and robust data and review tools. This session reviewed how to find, screen and manage different third-party providers, including AMCs, AVM sellers and QA compliance firms.