Author: AVMetrics

AO-41 and the Real Technology Question Appraisers Must Answer

Judgment, Independence, and the Role of Testing in a Black-Box World

The Appraisal Standards Board’s proposed Advisory Opinion 41 (AO-41), Use of Technology in an Appraisal or Appraisal Review Assignment, has generated thoughtful—and in some cases pointed—discussion across the appraisal and collateral-risk communities. Much of that discussion centers on what AO-41 does not do: it does not define “technology,” it does not distinguish sharply between process tools and product tools, and it does not resolve long-standing tensions in USPAP between established practice and emerging methods. Those critiques are valid. But they also risk missing what AO-41 is really trying to accomplish. In our view, AO-41 is not about endorsing new technology, nor is it about forcing appraisers to become data scientists or software engineers. It is about how appraisers demonstrate professional judgment and competency when technology—especially opaque, third-party technology—becomes unavoidable. That problem is not new. What is new is its scale.

We’ve Seen This Movie Before
Many appraisers will recognize the pattern. When multiple regression analysis (MRA) entered mainstream appraisal education, it was often presented as a way to produce mathematically precise, “market-supported” adjustments. In practice, MRA worked well in some markets and poorly in others. The issue was not regression itself—it was that appraisers were encouraged to use it without sufficient conceptual grounding in when its results were meaningful and when they were not. The result was often false confidence rather than better judgment. AO-41 reflects a similar inflection point—this time driven by AVMs, machine learning, computer vision, and generative AI. The tools are more powerful, more opaque, and far more client-driven than before. But the professional obligation has not changed: only the appraiser produces assignment results.

AVMs, AI, and the Accountability Gap
One criticism raised in recent commentary is that AVMs are not subject to USPAP, are not transparent, and operate based on lender-defined scope and inputs. All of that is true. But it is precisely why AO-41 exists. AO-41 does not attempt to pull AVMs under USPAP. Instead, it forces an uncomfortable but necessary question: What does competent reliance look like when the mechanics of the tool are outside the appraiser’s control? AO-41 answers that question indirectly. It makes clear that appraisers are not required to understand or replicate algorithms—but they are required to understand enough to evaluate relevance, limitations, and credibility for the intended use. That is a judgment problem, not a coding problem.

Independent Testing as a Competency Enabler
This is where the industry conversation needs to mature. For opaque tools, competency cannot reasonably come from inside the model. It must come from external, objective evidence of how the tool behaves. Independent, third-party testing—conducted outside the appraisal assignment—can provide exactly that context:

*   historical accuracy and dispersion,
*   stability across markets, price tiers, and property types,
*   known limitations or failure modes, and
*   awareness of differential performance that may raise fair housing concerns.
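These performance dimensions can be sketched as a handful of summary statistics. The following is a minimal illustration of the kind of evidence independent testing produces; the record fields, the 10% accuracy band, and the interquartile-range dispersion proxy are our assumptions for illustration, not AVMetrics' actual methodology.

```python
# Illustrative sketch: summary statistics an independent AVM test
# might produce. Field names and thresholds are assumptions, not
# AVMetrics' actual methodology.
from statistics import median

def avm_test_summary(records, ppe_band=0.10):
    """records: list of dicts with 'estimate' (None when the AVM
    returned no value) and 'sale_price' (arm's-length benchmark)."""
    hits = [r for r in records if r["estimate"] is not None]
    hit_rate = len(hits) / len(records)
    # Percentage error of each returned estimate vs. the benchmark sale.
    errors = [(r["estimate"] - r["sale_price"]) / r["sale_price"] for r in hits]
    ordered = sorted(errors)
    return {
        "hit_rate": hit_rate,                                # coverage
        "median_error": median(errors),                      # central tendency (bias)
        "ppe10": sum(abs(e) <= ppe_band for e in errors) / len(errors),  # accuracy
        "iqr": ordered[len(errors) * 3 // 4] - ordered[len(errors) // 4],  # dispersion proxy
    }
```

A stable model shows a median error near zero, a high PPE10, and a narrow error spread; independent testing tracks how those figures hold up across markets and over time.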

Importantly, independent testing does not replace appraisal analysis or judgment. It produces informational evidence, not assignment results. It helps appraisers answer a practical AO-41 question: Is reliance on this tool reasonable here, or should it be limited—or avoided altogether? Or, as our motto here at AVMetrics puts it: “The best thing an AVM can tell you is when NOT to use it.”
This framing is fully consistent with AO-41’s core principles and with the Interagency AVM Quality Control Standards, which emphasize ongoing monitoring of AVM accuracy, reliability, and potential bias. Appraisers are not being asked to perform fair lending analysis—but awareness of model behavior across market segments is now inseparable from credibility.

Education, Not Enforcement
Another concern raised in recent commentary is that AO-41 risks merging new tools into old expectations and legacy education. That concern is well taken. In our opinion, USPAP has always struggled to balance encouragement of new methods with deference to established practice. The path forward is not more prescriptive rules. It is better education and clearer boundaries. Appraisers do not need to know how an AVM or AI model works internally. But they should be able to explain, in plain language:

*   why a tool was appropriate (or not) for a specific assignment,
*   how its output was evaluated for reasonableness, and
*   why reliance was full, limited, or declined.

If that explanation cannot be made clearly—“to a sixth grader,” as one educator recently put it—then reliance probably wasn’t appropriate.

What AO-41 Is Really Signaling
AO-41 is not a referendum on technology. It is a signal that the profession needs:

*   clearer educational pathways,
*   shared reference points for evaluating opaque tools, and
*   realistic expectations about what appraisers are—and are not—being asked to understand.

If the exposure process leads to broader recognition that independent testing and education are necessary supports for professional judgment—not substitutes for it—then AO-41 will have served a useful purpose, even as its language continues to evolve. That conversation is exactly what the exposure draft process is meant to surface. And it is one the appraisal and collateral-risk communities should continue—carefully, constructively, and with judgment front and center.

Why AVMetrics’ Fair Housing Methodology Surpasses Vendor Approaches

The Fair Housing analyses published by AVM vendors such as Veros and Clear Capital represent important early efforts to evaluate potential disparate impact in automated valuation models. These studies contribute useful perspective to an evolving area of the industry, but they are inherently constrained by scope, methodology, and—most importantly—objectivity. Their findings are self-assessments rather than independent evaluations: each vendor analyzes only its own model, using its own data and assumptions, and typically concludes that little to no bias exists, which limits their usefulness for broader risk management and supervisory purposes.

Regulated institutions, however, must operate under much more rigorous expectations. The new Interagency AVM Quality Control Standards require lenders to demonstrate that AVMs used in credit decisions are independently validated and fairly applied. This standard cannot be meaningfully satisfied by vendor-authored whitepapers alone.

AVMetrics’ methodology is designed specifically to meet these supervisory needs. Rather than focusing on individual model performance within internally defined samples, AVMetrics conducts standardized, national-level testing across 700,000 to 1 million transactions each quarter. This approach ensures that fairness conclusions reflect real-world market diversity and enables consistent evaluation across models, markets, and time.

AVMetrics independently tests eight different dimensions in which AVMs could potentially disadvantage protected classes, including coverage rates (hit rate), accuracy, precision, and other core performance measures. To support statistically meaningful comparisons, AVMetrics has invested in neighborhood-level demographic data, enabling analysis at the comparison-neighborhood level, which avoids the masking effects of county-level aggregation while preserving larger samples than census-tract granularity would allow.

Further, AVMetrics applies Standardized Mean Difference (SMD)—the same effect-size metric commonly used in fair-lending analytics—providing a clear measure of whether disparities are material, not simply detectable. In contrast, many model-specific analyses typically use raw accuracy differences or simple correlations, which offer no interpretive scale for examiners assessing practical significance. AVMetrics’ approach produces metrics that are grounded in established methodology, interpretable, and defensible.
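For readers unfamiliar with the metric, SMD is simply the difference between two group means scaled by the pooled standard deviation, which is what gives it an interpretive scale (by convention, roughly 0.2, 0.5, and 0.8 mark small, medium, and large effects). A minimal sketch, with illustrative inputs:

```python
# Minimal sketch of the Standardized Mean Difference (SMD) as an
# effect-size measure: the gap between two groups' mean outcomes
# (e.g., AVM percentage error in two sets of comparison
# neighborhoods), scaled by the pooled standard deviation.
from statistics import mean, variance

def smd(group_a, group_b):
    # Pooled variance weights each group's sample variance by its
    # degrees of freedom (the standard two-sample formulation).
    pooled_var = (
        (len(group_a) - 1) * variance(group_a)
        + (len(group_b) - 1) * variance(group_b)
    ) / (len(group_a) + len(group_b) - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5
```

Because the result is unitless, an examiner can ask whether a disparity is material (a large effect) rather than merely detectable in a big sample.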

As the next generation of AVMs incorporates increasingly complex machine learning and generative AI techniques, vendor-driven testing becomes even less transparent. AVMetrics’ methodology is intentionally model-agnostic: we can evaluate the fairness and performance of traditional hedonic models, GBDT-based systems, deep learning models, or hybrid AI architectures with equal rigor. As models become more opaque, the need for a neutral, independent evaluator becomes increasingly essential.

In contrast to analyses intended to provide general assurance around individual models, AVMetrics delivers regulatory-grade evidence. By identifying how model risk and policy risk can interact to generate disproportionate impacts—an expectation embedded in the new regulatory framework—our testing equips lenders with the actionable intelligence needed to inform, calibrate, and justify their risk-policy decisioning.

Comparison table: AVMetrics’ testing across multiple dimensions, showing how it is broader, more independent, more rigorous, and more useful, in no small part because it objectively compares models to each other.

 

As regulatory expectations around AVM fairness continue to mature, institutions must move beyond model-specific assurances toward independent, repeatable, and scalable evaluation frameworks. AVMetrics’ fair housing methodology is purpose-built to meet these expectations, providing lenders with nationally consistent, statistically rigorous, and model-agnostic evidence of AVM performance and potential disparate impact. By aligning testing design with supervisory standards and real-world production environments, AVMetrics enables institutions not only to identify and manage fair-lending risk, but also to demonstrate compliance with confidence in increasingly complex valuation ecosystems.

What DOJ’s Disparate-Impact Rollback Doesn’t Change About AVM Fairness

The Department of Justice’s recent move to eliminate disparate-impact liability under its Title VI regulations has raised understandable questions across housing and credit markets. But for lenders, GSE partners, and valuation providers preparing for the AVM Quality Control Standards, one thing is clear:

The obligations around AVM fairness haven’t gone away.

The interagency AVM rule—effective October 1, 2025—explicitly requires institutions to establish policies, practices, procedures, and control systems to ensure AVMs comply with applicable nondiscrimination laws. That requirement remains fully intact. So do the supervisory expectations of prudential regulators, FHFA, and CFPB around managing fair lending and bias risk in automated systems, whether or not DOJ narrows its enforcement tools under Title VI.

Even with political shifts, the industry continues to operate under:

  • The Fair Housing Act, where disparate-impact liability is still recognized by the Supreme Court.
  • ECOA/Reg B fair lending expectations, which continue to incorporate statistical evidence of adverse outcomes.
  • Growing scrutiny of AI and automated valuation, highlighted by recent GAO recommendations urging clearer guidance on emerging technology risks.

In short: Regulatory pendulums swing—but AVM fairness risk remains.

Institutions still need independent, statistically rigorous testing to understand whether their AVMs or cascades produce unjustified disparities, and to document business justification and alternatives when they arise. That’s where AVMetrics’ fifth-factor validation fits the bill. Our analysis is national, extensive, independent, thorough, examiner-ready and tested for significance.

Introducing PTM™ – Revolutionizing AVM Testing for Accurate Property Valuations

When it comes to residential property valuation, Automated Valuation Models (AVMs) have a lurking problem. AVM testing is broken and has been for some time, which means that we don’t really know how much we can or should rely on AVMs for accurate valuations.

Testing AVMs seems straightforward: take the AVM’s estimate and compare it to an arm’s-length market transaction. The approach is theoretically sound and widely agreed upon, but it is unfortunately no longer possible.

Once you see the problem, you cannot unsee it. The issue lies in the fact that most, if not all, AVMs have access to multiple listing data, including property listing prices. Studies have shown that many AVMs anchor their predictions to these listing prices. While this makes them more accurate when they have listing data, it casts serious doubt on their ability to accurately assess property values in the absence of that information.

Three months of data showing estimates by three AVMs for a single property in Austin, TX.
Three AVMs valuing a home before and after it was listed in the MLS from Realtor.com’s RealEstimateSM.

All this opens up the question: what do we want to use AVMs for? If all we want is to get a good estimate of what price a sale will close at, once we know the listing price, then they are great. However, if the idea is to get an objective estimate of the property’s likely market value to refinance a mortgage or to calculate equity or to measure default risk, then they are… well, it’s hard to say. Current testing methodology can’t determine how accurate they are.

But there is promise on the horizon. After five years of meticulous development and collaboration with vendors/models, AVMetrics is proud to unveil our game-changing Predictive Testing Methodology (PTM™), designed specifically to circumvent the problem that is invalidating all current testing. AVMetrics’ new approach will replace the current methods cluttering the landscape and finally provide a realistic view of AVMs’ predictive capabilities.1

At the heart of PTM™ lies our extensive Model Repository Database (MRD™), housing predictions from every participating AVM for every residential property in the United States – an astonishing 100 to 120 million properties per AVM. With monthly refreshes, this database houses more than a billion records per model and thereby offers unparalleled insights into AVM performance over time.

But tracking historical estimates at massive scale wasn’t enough. To address the influence of listing prices on AVM predictions, we’ve integrated a national MLS database into our methodology. By pinpointing the moment when AVMs gained visibility into listing prices, we can assess predictions for sold properties just before this information influenced the models, which is the key to isolating confirmation bias. While the concept may seem straightforward, the execution is anything but. PTM™ navigates a complex web of factors to ensure a level playing field for all models involved, setting a new standard for AVM testing.
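The core filtering step behind this idea can be sketched as follows. This is a conceptual illustration only; the record layout and field names are assumptions for illustration, not the actual MRD™ schema, and PTM™ itself handles many complications this sketch ignores.

```python
# Conceptual sketch of pre-listing evaluation: for each sold property,
# score only the AVM estimate recorded *before* the listing became
# visible to models, isolating the anchoring effect of listing prices.
# Field layout is an illustrative assumption, not the MRD(TM) schema.
from datetime import date

def prelisting_error(history, listing_date, sale_price):
    """history: list of (as_of_date, estimate) snapshots for one property.
    Returns the pct. error of the latest estimate made before listing,
    or None if no pre-listing snapshot exists."""
    pre = [(d, v) for d, v in history if d < listing_date]
    if not pre:
        return None  # model never valued the home without listing info
    _, estimate = max(pre)  # most recent snapshot prior to listing
    return (estimate - sale_price) / sale_price
```

Comparing this pre-listing error to the conventional (post-listing) error for the same sales is what exposes how much of an AVM's apparent accuracy is borrowed from the listing price.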

So, how do we restore confidence in AVMs? With PTM™, we’re enabling accurate AVM testing, which in turn paves the way for more accurate property valuations. Those, in turn, empower stakeholders to make informed decisions with confidence. Join us in revolutionizing AVM testing and moving into the future of improved property valuation accuracy. Together, we can unlock new possibilities and drive meaningful change in the industry.

1 The majority of commercially available AVMs support this testing methodology, and more than two solid years of testing has been conducted across over 25 models.

Feds to Lenders: Take AVMs Seriously

Regulators are signaling that they are going to be looking at how AVMs are used and whether lenders have appropriately tested them and continuously monitor them for valuation discrimination. This represents a shift in regulatory attention and underscores the need for all lenders to prioritize AVM validation if they want to avoid unfavorable attention from government regulators.

On Feb 12, the FFIEC issued a statement on its examination principles. It specifically stated that the statement didn’t represent a change in principles, nor a change in guidance, nor even a change in focus. It was just a friendly announcement about the exam process, which will focus on whether institutions can identify and mitigate bias in residential property valuations.

Law firm Husch Blackwell published their interpretation a week later. Their analysis included consideration of the June 2023 FFIEC statement on the proposed AVM quality control rule, which would include bias as a “fifth factor” when evaluating AVMs. They interpret these different announcements as part of a theme, an extended signal to the industry that all valuations, and AVMs in particular, are going to receive additional scrutiny. Whether that is because bias is as important as quality or because being unbiased is an inherent aspect of quality, the subject of bias is drawing attention, but the result will be a thorough examination of all practices around valuation, including AVMs, from oversight to validation, training, auditing, etc.

AVM quality has theoretically been an issue that could be enforced by regulators in some circumstances for over a decade. What we’re seeing is not just an expansion from accuracy into questions of bias. We’re also seeing an expansion from banks into all lenders, including non-bank lenders. And, they are signaling that examinations will focus on bias, which is an expansion from the theoretical requirement to an actual, manifest, serious requirement.

#1 AVM in Each County Updated for Q4 2023

Every quarter we analyze all the top AVMs and compile the results. Click on this GIF to see the top AVM in each county for each quarter. As you watch the quarters change, you can see that the colors representing the top honors change frequently.

A gif showing the most recent 8 quarters of AVM performance with the #1 AVM in each county represented by a unique color
The number 1 AVM in each county for the last two years. Each AVM is represented by a unique color.

The main point is how frequently AVM performance changes. That should be no surprise, since market conditions change and AVMs have different strengths and tendencies. Phoenix has more tract housing, and some AVMs are optimized for that. Cities in the northeast have more row housing, and some models are better there. But AVMs also change – a lot. Whole new models are introduced, but every model is constantly being improved as builders add new data feeds and use new techniques to get better results (with respect to new techniques, over at the AVMNews, we curate articles about AVMs, and we highlight several hundred new research articles about AVMs every year).

Q4 Change Highlights

As ever, if you watch a part of the map, you’ll see several changes. But in Q4, as markets stabilized at higher interest rate levels, we saw a changing of the guard. Here are some places to watch:

  1. On the west coast, leadership changed in Los Angeles County and Seattle’s King County.
  2. Most of the counties of Atlanta, GA changed, as did the main counties of Charlotte, NC.
  3. Some less-populated areas had almost wholesale changes, such as Idaho, the Dakotas, Montana, Colorado, Iowa and rural Michigan (but not New Mexico or Utah).

Takeaways

  1. Things change – a lot. Don’t rely on the results from last year or earlier this year. Heck, you can’t even trust last quarter! We compile these results quarterly, but our testing is non-stop, and we can produce new optimizations monthly based on a rolling 3 months or any other time period. Often, 3 months of data are required to get a large enough sample in smaller regions, but we can slice it every way imaginable.
  2. Use more than one AVM. It’s not obvious from a map showing just one AVM in each county, but if you think about what’s going on to produce these results, you’ll realize that AVMs have different strengths and there are a lot of them climbing all over each other to get to the top of the ranking. So, when you’re valuing a particular property, you just don’t know if it will be a good candidate for even the best AVM. When that AVM produces a result with low confidence, there’s a very good chance that another AVM will produce a reasonable estimate. Why not be able to take three, four or five bites at the apple?

#1 AVM in Each County Updated for Q3 2023

Every quarter we analyze all the top AVMs and compile the results. Click on this GIF to see the top AVM in each county for each quarter. As you watch the quarters change, you can see that the colors representing the top honors change frequently.

Map of the United States cycling through 8 images, with a different color for each AVM that is #1 in each county. The colors change rapidly and substantially, indicating a very dynamic market where leadership as “the best AVM” changes frequently.
Q3 2023 update

The main point is how frequently AVM performance changes. That should be no surprise, since market conditions change, and AVMs have different strengths and tendencies. Phoenix has more tract housing, and some AVMs are optimized for that. Cities in the northeast have more row housing, and some models are better there. But AVMs also change – a lot. Whole new models are introduced, but every model is constantly being improved as builders add new data feeds and use new techniques to get better results (with respect to new techniques, over at the AVMNews, we curate articles about AVMs, and we highlight several dozen new research articles about AVMs every year).

Q3 Change Highlights

As ever, if you watch a part of the map, you’ll see several changes. But, in Q3, as markets stabilized at higher interest rate levels, we saw a changing of the guard. Here are some places to watch:

  1. On the west coast, leadership changed in Orange County and many smaller counties.
  2. Several less-populated states had almost wholesale changes, such as the Dakotas, Montana, New Mexico and Mississippi.
  3. Dozens of suburban counties changed around other metro areas, from Houston and Dallas to Chicago and D.C.

Takeaways

Things change – a lot. Don’t rely on the results from last year or earlier this year. Heck, you can’t even trust last quarter! We compile these results quarterly, but our testing is non-stop, and we can produce new optimizations monthly based on a rolling 3 months or any other time period. Often, 3 months of data are required to get a large enough sample in smaller regions, but we can slice it every way imaginable.

Use more than one AVM. It’s not obvious from a map showing just one AVM in each county, but if you think about what’s going on to produce these results, you’ll realize that AVMs have different strengths and there are a lot of them climbing all over each other to get to the top of the ranking. So, when you’re valuing a particular property, you just don’t know if it will be a good candidate for even the best AVM. When that AVM produces a result with low confidence, there’s a very good chance that another AVM will produce a reasonable estimate. Why not be able to take three bites at the apple?

Our Perspective on Brookings’ AVM Whitepaper

As the publisher of the AVMNews, we felt compelled to respond to Brookings’ very thorough whitepaper on AVMs (Automated Valuation Models), published on October 12, 2023, and share our thoughts on the recommendations and insights presented therein.

First and foremost, we would like to acknowledge the thoroughness and dedication with which Brookings conducted their research. Their whitepaper contains valuable observations, clear explanations and wise recommendations that unsurprisingly align with our own perspective on AVMs.

Here’s our stance on key points from Brookings’ whitepaper:

  1. Expanding Public Transparency: We wholeheartedly support increased transparency in the AVM industry. In fact, Lee’s recent service on the TAF IAC AVM Task Force led to a report recommending greater transparency measures. Transparency not only fosters trust but also enhances the overall reliability of AVMs.
  2. Disclosing More Information to Affected Individuals: We are strong advocates for disclosing AVM accuracy and precision measures to the public. Lee’s second Task Force report also recommended the implementation of a universal AVM confidence score. This kind of information empowers individuals with a clearer understanding of AVM results.
  3. Guaranteeing Evaluations Are Independent: Ensuring the independence of evaluations is paramount. Compliance with this existing requirement should be non-negotiable, and we fully support this recommendation.
  4. Encouraging the Search for Less Discriminatory AVMs: Promoting the development and use of less discriminatory AVMs aligns with our goals. We view this as a straightforward step toward fairer AVM practices.

Regarding Brookings’ additional points 5, 6, and 7, we find them to be aspirational but not necessarily practical in the current landscape. In the case of #6, regulating Zillow, it appears that existing and proposed regulations adequately cover entities like Zillow, provided they use AVMs in lending.

While we appreciate the depth of Brookings’ research, we would like to address a few misconceptions within their paper:

  1. Lender Grade vs. Platform AVMs: We firmly believe that there is a distinction between lender-grade and platform AVMs, as evidenced by our testing and assessments. Variations exist not only between AVM providers but also within the different levels of AVMs offered by a single provider.
  2. “AVM Evaluators… Are Not Demonstrably Informing the Public:” We take exception to this statement. We actively contribute to public knowledge through articles, analyses, newsletters (AVMNews and our State of AVMs), our quarterly GIF, a comprehensive Glossary, and participation in industry groups and task forces. We also serve the public by making AVM education available, and we would have been more than willing to collaborate or consult with Brookings during their research.

But, we’re obligated not to just give away our analysis or publish it. Our partners in the industry provide us their value estimates and we provide our analysis back to them. It’s a major way in which they improve, because they’re able to see 1) an independent test of accuracy, and 2) a comparison to other AVMs. They can see where they’re being beaten, which means opportunity for improvement. But, in order to participate, they require some confidentiality to protect their IP and reputation.

We should comment on the concept of independence that Brookings emphasized. As the only independent AVM evaluator, we consider independent evaluation exceedingly important. Brookings mentioned in passing that Mercury is not independent, but they also mentioned Fitch as an independent evaluator. We agree with Brookings that a vendor who also sells, builds, resells, uses or advocates for certain AVMs may be biased (or may appear to be biased) in auditing them; validation must be able to “effectively challenge” the models being tested.

We do not believe Fitch’s approach satisfies the requirement for ongoing, independent testing, validation and documentation, which calls for resources with the competence and independence to effectively challenge AVM models. Current guidelines require validation to be performed in real-world conditions, to be ongoing, and to be reported on at least annually. When there are changes to the models, the business environment or the marketplace, the models need to be re-validated.

Fitch’s assessment of AVM providers is focused on each vendor’s model testing results, review of management and staff experience, data sourcing, technology effectiveness and quality control procedures. Fitch’s methodology of relying on analyses obtained from the AVM providers’ own model testing would not qualify it as an “independent AVM evaluator,” since reliance on testing done by the AVM providers themselves does not meet any definition of “independent” under existing regulatory guidance. AVMetrics is not beholden to the AVM developers or resellers in any way; we draw no income from selling, developing, or using AVM products.

For almost two decades, we have tested AVMs against hundreds of thousands (sometimes millions) of transactions per quarter, using a variety of techniques to level the playing field between AVMs. We provide detailed and transparent statistical summaries and insights to our newsletter readers, and we publish charts that give insights into the depth and thoroughness of our analysis; we have not observed this from other testing entities.

Our research spanning eighteen years shows that even models that perform well overall are less reliable in certain circumstances, so one of the less obvious risks we would highlight is reliance on a “good” model that is poor in a specific geography, price level or property type. Models should be tested in each of these subcategories in order to assess their reliability and risk profile.

Identifying “reliable models” isn’t straightforward. Performance varies over time as market conditions change and models are tweaked. Performance also varies between locations, so a model that is extremely reliable overall may not be effective in a specific region. Furthermore, models that are effective overall may not be effective at all price levels, for example: low-priced entry-level homes or high-priced homes. Finally, even very effective models will produce estimates that they themselves flag with lower confidence scores (and higher FSDs); such estimates should in all prudence be avoided, but without adequate testing and understanding they may be inadvertently relied upon. Proper testing and controls can mitigate these problems.
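The subcategory testing described above amounts to scoring each model within every segment rather than only in aggregate. A minimal sketch; the segment keys and the 10% accuracy band are illustrative assumptions, not our production methodology:

```python
# Sketch of segment-level reliability testing: an AVM that looks
# strong overall can still fail in a specific geography, price tier,
# or property type. Segment keys and the 10% band are illustrative.
from collections import defaultdict

def segment_ppe10(records, keys=("county", "price_tier", "property_type")):
    """records: dicts with a pct. 'error' plus segment attributes.
    Returns PPE10 (share of errors within +/-10%) per segment value."""
    out = {}
    for key in keys:
        buckets = defaultdict(list)
        for r in records:
            buckets[r[key]].append(abs(r["error"]) <= 0.10)
        out[key] = {seg: sum(v) / len(v) for seg, v in buckets.items()}
    return out
```

A model whose overall PPE10 is strong but whose high-price-tier PPE10 collapses is exactly the hidden risk this kind of slicing surfaces.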

Regarding cascades, the Brookings’ paper leans on cascades as an important part of the solution for less discriminatory AVMs. We agree with Brookings: a cascade is the most sophisticated way to use AVMs.  It maximizes accuracy and minimizes forecast error and risk. By subscribing to multiple AVMs, you can rank-order them to choose the highest performing AVM for each situation, which we call using a Model Preference Table™. The best possible AVM selection approach is a cascade, which combines that MPT™ with business logic to define when an AVM’s response is acceptable and when it should be set aside for the next AVM or another form of valuation.  The business logic can incorporate the Forecast Standard Deviation provided by the model and the institution’s own risk-tolerance to determine when a value estimate is acceptable.
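That cascade logic can be sketched in a few lines. The model names, the FSD cap, and the lookup structure below are illustrative assumptions, not any institution's actual cascade or our MPT™ format:

```python
# Minimal cascade sketch following the description above: try models
# in the order given by a preference table for the property's segment,
# and accept the first response whose forecast standard deviation
# (FSD) is within the institution's risk tolerance. The 0.13 cap and
# the data shapes are illustrative assumptions.
def run_cascade(property_segment, mpt, get_estimate, max_fsd=0.13):
    """mpt: dict mapping segment -> ordered list of model names.
    get_estimate: callable(model, segment) -> (value, fsd) or None."""
    for model in mpt.get(property_segment, []):
        response = get_estimate(model, property_segment)
        if response is None:
            continue  # no value returned; fall through to the next model
        value, fsd = response
        if fsd <= max_fsd:
            return {"model": model, "value": value, "fsd": fsd}
    return None  # no acceptable AVM result; escalate to another valuation type
```

The `max_fsd` parameter is where the institution's own risk tolerance enters: tightening it trades coverage for confidence, which is precisely the policy decision independent testing should inform.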

Mark Sennott (industry insider) recently published a whitepaper describing current issues with cascades, namely that some AVM resellers will give favorable positions to AVMs based on favors, pricing or other factors that do NOT include performance as evaluated by independent firms like AVMetrics. This goes to the additional transparency for which Brookings advocates. We’re all in favor.

We actually see a strong parallel between Mark Sennott’s whitepaper and the Brookings’ paper. Brookings makes the case to regulators, whereas Sennott was speaking to the AVM industry, but both of them argue for more transparency and responsible leadership by the industry. Sennott appears to be very prescient, in retrospect.

In order to ensure that adequate testing is done regularly, we recommend that a control be implemented to create transparency around how the GSEs or other originators are performing their testing. This could be done in a variety of ways. One method might require the GSE or lending institution to indicate its last AVM testing date on each appraisal waiver. Regardless of how it’s done, the goal would be to create a mechanism that would increase commitment to appropriate testing. The GSEs could play a leadership role by demonstrating how they would like lending institutions to demonstrate their independent AVM testing as required by OCC 2010-42 and 2011-12.

In conclusion, we appreciate Brookings’ dedication to asking questions and providing perspective on the AVM industry. We share their goals for transparency, fairness, and accuracy. We believe that open dialogue and collaboration by all the valuation industry participants are the keys to advancing the responsible use of AVMs.

We look forward to continuing our contributions to the AVM community and working toward a brighter future for this essential technology.