
Evaluating Professionals: Imperfect Solution for an Imperfect World

Judgment Day Cometh

Agent ratings are back in the conversation, thanks to this scintillating op-ed by Kris Berg (link is for Inman premium subscribers only), who is one of the best writers in real estate today.  I have written about this topic before (here and here), and it continues to fascinate and puzzle me.

Kris’s point essentially boils down to this: real estate brokerage is a service fraught with emotion, delivered to unpredictable clients who don’t know what a realtor actually does for them and who cannot rationally evaluate how good or bad an agent really is.  Quantitative metrics don’t provide accurate ratings, in Kris’s view, because they focus on production rather than service.  Customer surveys are flawed because customers are ignorant on the one hand and nuts on the other, and are too often influenced by how the transaction itself went down rather than how the realtor performed.

All of her points are, I think, valid and true.

Sadly, they are all irrelevant to some extent.

Fact is, agent ratings are already here, in places like Yelp and Angie’s List.  Consumers will talk, will evaluate, and will rate realtors (as they do every other service provider) on their Facebook pages, on blogs, on websites, and with each other in person.  It’s going to happen whatever the merits of such ratings.

The real issue, then, isn’t whether such rating systems are good or bad or inaccurate or legitimate, but who will do the rating and how they will do it.

Evaluating Professionals

In an ideal world, evaluating the quality of a professional’s performance is something that would be done only by disinterested fellow professionals.  I have thought since my law school days that medical malpractice suits should be heard only by a jury comprised entirely of doctors.  Only they can say whether a procedure was or was not done correctly, whether negligence was involved, and whether the actions of a particular physician were a judgment call or worthy of liability.

Jury of your peers?

When our two-year-old had to go in for surgery, my wife mentioned the surgeon’s name on Facebook.  A friend of a friend, who is a doctor, posted that our doctor was the surgeon he himself would use if his child needed similar surgery.  The comfort of such an evaluation is immense.  After all, if you really want to know who the great doctors are, look for the ones that other doctors go to for their own medical needs.

Similarly, it may be the case that only other realtors can accurately rate a realtor’s performance in the course of a transaction.

The best rating system, then, is one in which every realtor would rate his counterpart after each transaction and submit a detailed report as to the reasons for the rating.  A website would make it easy for consumers to view the composite score of each realtor, rated by his or her peers, with detailed ratings, trends over time, and the reasons why he got the rating he did on a transaction-by-transaction basis.  And in this ideal world, every single realtor would be so professional, so dedicated to upholding the integrity of the profession, that he would not hesitate for a moment to rate other realtors with stern rigor.

Sadly, we do not live in such a world.  A jury filled with doctors sitting on a medical malpractice case may not exactly be what one might call disinterested.  Attorneys who belong to the same Bar Association and rely on each other to refer business back and forth might be motivated by factors other than strict truth in evaluating a fellow attorney.  And realtors are not saints.

Furthermore, real estate is less amenable to a peer-review system than other professions, due to the highly local nature of the business.  Could a doctor in New Jersey review the work of a doctor in California?  Probably so — the human body is not different in NJ than it is in CA (at least on the inside… the outside is a whole different matter).  But the real estate market in one town is different from the market in the next town over; what might be best practice in one area might be a horrible idea in another.  Which means that the only peers who could accurately rate a realtor are the other realtors in the same market: precisely the same people a realtor has to work with over and over again, and the same people who are constantly referring business back and forth to each other.  The incentive to be critical of each other, then, is extremely low; the incentive to puff air at each other is extremely high.

So peer ratings are probably not going to work for real estate.

Consumer Reports

What about consumers?

Kris has already pointed out many of the pitfalls in consumer ratings of realtor performance.  Too many consumers are emotionally invested and can’t think straight when it comes to their homes.  In some cases, a horrible performance by a realtor may get glossed over because the consumer got her dream home (even if she overpaid, got screwed in the negotiation, and so on).  A great performance gets downgraded because the transaction went too smoothly, and the consumer has no idea just how masterful a job the agent did.

Fact is, most consumers do not engage in a substantial real estate transaction often enough to be able to evaluate an agent’s performance accurately.  The average consumer buys or sells once every seven years; and in between, she’s not thinking about real estate.  There is no basis for forming an accurate opinion.

The result, however, is not what Kris fears: negative ratings by consumers who don’t know what they’re talking about.  The real result is what we see on the one brokerage website that publishes all of the consumer reviews: far too many positives.

This is the Deals and Reviews page for Leonardo Montenegro, a Redfin Partner Agent from California.  I picked him entirely at random, but having looked through quite a few of the agents, the pattern is the same.  Of the 30 reviews on the site, 17 are Five Star, 1 is Four Star, and the rest are “no response” or “Agent did not provide contact information”.  And the comments are simply glowing.  For example:

Leonardo was very professional. We followed his advise and got our house. There were multiple offers on it but his fast, intuitive thinking was the reason we won. I highly recommend this young man.

Maybe this consumer got the house because Leonardo, in cahoots with the seller, recommended overpaying by 20%.  (NOTE: probably not true, but it illustrates the point!  Leonardo is most likely exactly the consummate professional his consumers paint him to be.)  How could anyone know, except the selling agent on the other side of the deal?

Everybody Wins!

Now, these stellar ratings and glowing comments would be great news for Leonardo… except for the fact that every other Redfin Partner Agent in the area also has similar stellar ratings and glowing reviews.  As a consumer looking for a realtor in the Coachella Valley area, do these kinds of ratings provide any guidance at all?

The only thing a consumer who isn’t completely naive might read something into is the non-responses.  Ron Jesser, an agent in the same market area, had 29 reviews but only 12 responses.  So maybe Leonardo’s 60% response rate compared to Ron’s 41% should tell me that Leonardo’s the better realtor?  Yeah, highly dubious, but what else can you go by?
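For what it’s worth, the arithmetic behind that comparison is trivial.  A minimal sketch, using only the figures cited above (whether the ratio actually measures anything is, as noted, highly dubious):

```python
# Response rate as a rough service signal: reviews actually received,
# divided by transactions that could have produced one.
# Figures are the ones cited in the post.

def response_rate(responses: int, total: int) -> float:
    """Fraction of transactions that produced any review at all."""
    return responses / total

leonardo = response_rate(18, 30)  # 17 five-star + 1 four-star out of 30
ron = response_rate(12, 29)

print(f"Leonardo: {leonardo:.0%}")  # Leonardo: 60%
print(f"Ron:      {ron:.0%}")       # Ron:      41%
```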

For many other professions and service providers, consumer reports are invaluable because consumers use those professionals more often.  A consumer’s rating of a dentist, for example, might be valuable because she may have been going to that dentist every six months for the past twenty years.  A car mechanic can be rated and evaluated more accurately by a consumer who’s brought the car in four times a year for various tune-ups, oil changes, wheel alignments, and the occasional fender bender.

The sad reality is, however, that in the absence of compelling and credible alternatives, consumer reports are what we will end up with.

Why Not Metrics?

So if neither professional peer review nor consumer reports can yield any insight, why not quantitative metrics?

There has been talk since Inman San Francisco, when Diverse Solutions presented their Agent Scorecard concept, of brokerages and MLSs implementing some sort of data-driven agent rating system.

Kris takes exception to these data-driven metrics because they don’t reveal all of the details behind a transaction.  And again, she has a point.  Days on market may not be a reflection of an agent’s skill as much as it is the local market conditions.  Number of transactions doesn’t mean that the customer was served with superior skill — or any skill at all for that matter.

And yet… there is something appealing about numbers.

The closest analogy I could think of is comparing NFL quarterbacks.  Who is better, Peyton Manning or Tom Brady?

Peyton has thrown for 48,799 yards to date, with 354 TD’s and a lifetime QB rating of 95.2; Tom has thrown for 29,495 yards with 217 TD’s and a lifetime rating of 93.6 with 63 fewer games played.  Peyton averages 7.7 yards per throw, while Tom averages 7.3 yards per throw; Peyton averages 1.9 TD’s per game, while Tom averages 1.8 TD’s per game, but Peyton also throws .94 INT’s per game while Tom only throws .75 INT’s per game.
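As an aside, the per-game rates above can be cross-checked against the career totals.  A quick sketch (games played are backed out from the post’s own figures, not looked up independently):

```python
# Back out implied games played from career TD totals and TD-per-game
# rates, then compare the two careers. All inputs are figures cited
# in the text; rounding makes the results approximate.

peyton_tds, peyton_td_per_game = 354, 1.9
tom_tds, tom_td_per_game = 217, 1.8

peyton_games = peyton_tds / peyton_td_per_game  # about 186 games
tom_games = tom_tds / tom_td_per_game           # about 121 games

gap = peyton_games - tom_games
print(round(gap))  # roughly 66, in the ballpark of the "63 fewer games" cited
```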

Behind those numbers, however, all manner of details are missing.  Peyton Manning has played almost the entirety of his career with future Hall of Fame wide receivers like Marvin Harrison and Reggie Wayne, while Tom Brady did not have a bona fide stud receiver until 2007, when Randy Moss and Wes Welker joined the team.  There is no way stats can capture how many times Brady has driven his team down the field in the waning seconds of a game for a winning touchdown or field goal.  Numbers cannot capture how Peyton commands the huddle, what Brady is like in the locker room, or how good or bad the offensive line protecting them is.

And yet… any discussion of quarterbacks will involve numbers, because they are the only objective measure we have.  Numbers and statistics are far from perfect, and they do not tell the whole story.  But they are objective and, importantly, they can be compared.

Brought to real estate, perhaps it is true that DOM is not a reflection of a realtor’s skill or professionalism.  But it does mean something, doesn’t it?  If Agent A has DOM that is 40% above average, while Agent B has a DOM that is 10% below average, doesn’t that suggest something, whatever that may be?  If one realtor sits at 95% sale-to-list over a ten year career, while another realtor is at 80% sale-to-list in the same market, doesn’t that at least suggest something?

Customer service metrics like QSC and NPS may be imperfect — in fact, I’m pretty sure both are pretty far from perfect — but don’t they at least suggest some basis for comparison?  Redfin’s ratings pages may be filled with only five-star reviews, but at the very least, they exist — and the non-responses at least suggest something about the level of service.

In particular, metrics are useful when they show significant deviation from the norm.  For example, if the average DOM in a particular market over the past 120 days is 111 days, but Agent X during the same period is selling homes in 85 days, then whatever the reason, it is clear that using Agent X is likely to get your home sold faster.  Note that I did not say “get your home sold faster for the price you wanted” — that would require also delving into sale-to-list data.  But between those two statistics, the consumer can find an agent who is above the median realtor in the market for getting homes sold faster for close to listing price.
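The deviation-from-norm comparison itself is trivial to compute.  A minimal sketch using the hypothetical numbers above (the function name is illustrative, not part of any existing MLS system):

```python
# Express an agent's average days-on-market as a percentage deviation
# from the market norm. Negative values mean the agent's listings
# sell faster than the market average.

def dom_deviation_pct(agent_avg_dom: float, market_avg_dom: float) -> float:
    return (agent_avg_dom - market_avg_dom) / market_avg_dom * 100

# Example from the text: market average of 111 days, Agent X at 85 days.
print(f"{dom_deviation_pct(85, 111):+.1f}%")  # -23.4%: Agent X sells faster
```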

By the same token, the consumer might find that an agent reprices homes far more than the average agent.  Perhaps the average agent in a market has two price changes before a sale, whereas Agent Z has seven.  That might suggest that Agent Z either doesn’t know the market that well, or gets listings by promising the moon, or uses price changes as a marketing tactic.  The full story requires detail, but the consumer can find that detail in follow-up questions to the agent, or by asking around.

Today, however, while the real estate professional has access to these metrics, they are hidden from the consumer’s view.  Why the fear of numbers?

The Imperfect Solution for an Imperfect World

As a result, I think the real estate industry should embrace some combination of objective metrics with both peer and consumer reviews.  The specifics cannot be figured out in a blog post, even one as long as mine.  But the rough outline might look something like this:

  • Key Performance Indicators from MLS data, such as price-to-list, DOM, and number of transactions.
  • Peer Reviews from the MLS.  While bias and gamesmanship can be expected, at a minimum, peer ratings would help eliminate the worst of the bunch.  There is no incentive for professionals to tolerate individuals who give the entire profession a black eye by their incompetence or unethical behavior.  I would include publishing any disciplinary actions by the local Association.
  • Consumer Reports a la Redfin.
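To make the outline concrete, here is one way such a combined rating might be sketched in code.  Everything in it (field names, weights, the normalization) is an assumption made purely for illustration; no MLS or brokerage publishes anything like this today.

```python
# A hypothetical composite agent rating combining the three components
# outlined above: MLS-derived KPIs, peer reviews, and consumer reviews.
# Weights and normalizations are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class AgentRecord:
    sale_to_list: float    # e.g. 0.95 = 95% sale-to-list over the period
    dom_vs_market: float   # agent avg DOM / market avg DOM; < 1.0 is faster
    peer_score: float      # 0-5 average rating from fellow agents
    consumer_score: float  # 0-5 average rating from clients
    response_rate: float   # fraction of closings that produced any review

def composite(a: AgentRecord, weights=(0.35, 0.25, 0.20, 0.20)) -> float:
    """Weighted composite in roughly [0, 1]; higher is better."""
    # KPI leg: average of sale-to-list and a DOM term that rewards
    # selling faster than the market (a ratio of 1.0 maps to 0.5).
    kpi = (a.sale_to_list + max(0.0, 2.0 - a.dom_vs_market) / 2.0) / 2.0
    peer = a.peer_score / 5.0
    # Discount consumer stars by response rate, since non-responses
    # often hide the unhappy clients (the Redfin pattern above).
    consumer = (a.consumer_score / 5.0) * a.response_rate
    w_kpi, w_peer, w_cons, w_resp = weights
    return (w_kpi * kpi + w_peer * peer
            + w_cons * consumer + w_resp * a.response_rate)
```

The single number is not the point; publishing the components alongside it is, so a consumer can see *why* two agents with identical star ratings differ.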

Combine them all and we may begin to have something approaching the semblance of accuracy in realtor ratings.  As a consumer, perhaps I might compare two realtors, both of whom have glowing reviews and five star ratings across the board, then see that while one has better statistics, the other has respect from more of her peers, and go with that.  Or I might choose to go with the realtor with more transactions with a better price-to-list track record, even if a consumer has dinged him once or twice.

Provide me the consumer with as much data and as much information as you can, and let me make the decision.

Seems to me that this combined approach is probably what our imperfect world, filled with sinners rather than with saints, needs.

-rsh

Rob Hahn
Managing Partner of 7DS Associates, and the grand poobah of this here blog. Once called "a revolutionary in a really nice suit", people often wonder what I do for a living because I have the temerity to not talk about my clients and my work for clients. Suffice to say that I do strategy work for some of the largest organizations and companies in real estate, as well as some of the smallest startups and agent teams, but usually only on projects that interest me with big implications for reforming this wonderful, crazy, lovable yet frustrating real estate industry of ours.

18 COMMENTS

  1. Rob, today combining all of these emerging systems seems pretty daunting, but it will happen faster than we think. I love the idea of publishing this data and real estate agents becoming publicly accountable. Of course the resistance to these measures will be great, but the walls will come down quickly.

    Cheers on a great article. Hope to meet at Inman NYC.

  2. In the absence of any official rating system, perhaps the profession would be best served if agents sought to perform as if they were being rated by *all* of the following: their clients, their managing broker, the other agent, the other agent's clients, the home inspector, the appraiser, the mortgage rep, the attorneys, any town officials involved…you get the picture. Earning consistently high ratings from all parties involved throughout the throes of a real estate transaction, sale after sale, can prove challenging indeed but certainly constitutes the goal we should be striving for.

    As alluded to here and elsewhere, the rating system already exists: word of mouth is age-old and, with the advent of social media, is becoming exponentially more powerful.

  3. Hmmmm. Good stuff, thoughtful points all around. An accurate rating system, as you've shared, is difficult at best. A few other random thoughts on the matter.

    Another, possibly more telling statistic for listing agents, in addition to Days On Market (DOM), is the agent's or brokerage's Sold Price to Original List Price %. In our market, The Woodlands TX, the MLS for 2009 is running around 95.7%; if an agent/brokerage has a higher average, that means their sellers are netting more money, a bottom-line benefit.

    While the subject of Agent Ratings bubbles, ebbs, and flows, numerous studies and personal experience show that a high percentage (60% to 75%) of buyers and sellers talk to only 1 or 2 agents when choosing who will represent them: someone they know or someone recommended by a trusted friend. In a way, this is a grapevine, word-of-mouth, or informal rating system (sorta).

    I believe that an external rating system for a firm like Redfin is important because the business model generates most of its business from external sources. Agents in this model aren't building or relying on a personal network for business generation; the company does that for them. In models like this (external lead generation), an external rating system would be an important factor, since the agents aren't investing their time in developing repeat and referral business.

    Our local board/MLS, HAR, has a rating system in place. Its value is sorta wimpy; it's an opt-in rating system. If an agent receives a bad review, they can exclude it, or if the transaction was bloody, they can skip sending the survey altogether. Wise agents make sure that their happy clients complete the survey; the unhappy ones never see it. As long as these wink, half-nod, and half-step measures are in place or proliferate, I believe consumers will not rely on them as true and accurate.

    Currently, the best real system is for an individual agent to deliver service and results that scream excellence, survey their clients, then document and broadcast their success stories and fix their shortfalls. Civilians, when concerned about an agent's bona fides (always), should request and verify references.

    It's hard to imagine a 3rd-party rating system taking off, because self-interest isn't strong enough (none of the 7DS is powerful enough to motivate), and clients/civilians are too busy and uninterested to take the time to respond or thoughtfully rate/comment/compliment/etc. Like I said, they primarily select an agent they already know or someone recommended by a friend. Most would agree that a recommendation from a trusted friend is more valuable than what a stranger said or wrote.

    Thanks for sharing your take on what's up in RE. Cheers.

  4. One other thought. By the time someone figures out how to rate agents, it won't matter anymore.

    To discover if someone is all hat and no cattle or the real deal, as more information (Facebook, Twitter, etc.) is indexed, won't it be easy to simply Google or Bing (or something new) a person's name? What you need to know will be served up. If there's bad news or nothing there, you'd probably want to choose someone else anyway.

    Cheers.

  5. Rob- I like your take on all this but I think we'll be stuck with a bunch of different rating systems that all have massive flaws.

    You write “There is no incentive for professionals to tolerate individuals who give the entire profession a black eye by their incompetence or unethical behavior.”

    And yet we do. If a brokerage wanted to spend its time complaining to the state about licensing violations and to the local Realtor board about ethics violations, it could easily keep a full-time staff busy. Our profession has a black eye because we tolerate too much from our peers, and it is easier not to rock the boat.

    As far as metrics go, the numbers will always get played with. In my MLS I could withdraw a listing every time I get an offer, reduce the list price and reset the DOM. Wouldn't my ratios look pretty?!!

    Numbers don't always tell the story.

    I have a listing which is an estate: 3 brothers with different motivations. DOM is huge. We have had a number of offers which have not been accepted, and if I were only about DOM, I would have dropped the listing. I persevere because of my duty to my client, not stats.

    Another listing sold this past year for 25% below the original list price. It was at the top of the market so comps were hard to find and DOM was long. The list to sale ratio on this house and DOM were awful – I really screwed up by these metrics. Except that it sold for 30% more than the last million dollar sale in our market. The last such sale happened in a boom market, which doesn't accurately describe the current situation.

    Maybe just choosing 1-5 stars does make more sense?

  6. Great past two posts, Rob! From family and children, to the military, to a quantitative approach… nicely put, as usual.

    Happy Thanksgiving to you and your family!

  7. Actually, an agent cannot eliminate a bad review from their list of total reviews at HAR, Ken. They can turn off all of their reviews if they like, but they can't eliminate only the bad ones.

  8. While we would like to discount the voice of the “inexperienced” consumer who only buys or sells a home once every 7 years, I don't believe it's that simple. Consumers do listen to each other even if they only purchase a home occasionally. Do we qualify every other review we read by the qualifications of the reviewer? Do we check how many books a reviewer has read on the subject of the solar system before we read their review? Do we ask how many children a person has raised before we read their review about a toy? The whole process of consumer ratings is imperfect at best, but that doesn't stop anyone from reading them or, more importantly, making purchase decisions based on them.

    While I think the idea of sharing peer feedback would be interesting in concept, I think you underestimate the amount of gamesmanship that would be demonstrated if consumers had access to the information. I do think, however, that peer ratings among realtors are an excellent idea to be shared only within the real estate community. Peer transparency may actually help to increase the level of integrity and professionalism in the process. If an agent couldn't hide from her bad behavior, it may eliminate some of the bad practices that give the real estate industry a bad name. It may also help brokers eliminate the low integrity/professionalism performers because they will better understand how an agent is hurting their brand. Brokers need better data than simply the number of transactions completed to understand the true impact of each of their agents.

    To start, it may make sense simply to share agent peer feedback with the agent being reviewed and possibly his/her broker. The agent could then decide if he would like to share the feedback with the entire real estate community. Make the process of participation voluntary just like the public-facing systems like QSC do. The absence of some in the process may point out their “inadequacies” without having to point any fingers. A system like this should also invite “counterpoint” feedback from the agent being reviewed. Fellow agents are educated enough to know when the ratings and the counterpoint feedback are legitimate. Consumers may not be. I believe this type of transparency would definitely help agents perform better.

    I don't know why this is such a contentious topic. We just need to get on with it and adopt products that will allow agents to take credit for their good performance. If consumers are pleased with their performance so be it. It may just mean that agents actually do perform well for their clients and they create loyal fans. I don't know why that is a bad thing. Just take credit for your good customer service and let the strong reviews rack up. I wrote a white paper about this a couple of months ago called Reputation Marketing – Your secret weapon. http://wavgroup.com/Home/Reports/Reports/YourRe…. The paper talks about how to turn the great customer service you provide into a marketing tool. Why not take credit for the great service you provide?

  9. Rob – Agreed that ratings are here to stay. I seem to recall whispering in your ear at NAR that social media will take care of that — people share their experiences online and will continue to do so. That really was my point. We don't need to obsess over creating a rating system; it exists in the social space and will evolve and populate quite naturally with no help from us.

    And, as a point of clarification, I don't fear analytic rating systems. I only question their validity.

    You hit on an interesting point with Redfin, and it is true. I send out a report card after every closing, and only the most passionate will respond. The non-responses are usually from the folks who were less than thrilled — nonplussed — for whatever reason. I suspect that the ones who aren't comfortable smacking me personally will be the ones most likely to tell their friends on Facebook.

    One thought regarding peer ratings. You have forgotten that the majority of agents want only to throw their “peers” under a bus. It is that competitive. I wish I thought that most would grade their colleagues honestly and fairly, but that is not the world in which we operate.

  10. Sticking to the consumer side of an “objective” rating system, one question has always burned me: how do the vast majority of agents get a fair (statistical) shake? Assuming consumers are voluntarily asked to weigh in, what can the expected response rate be? 10%? Even if you did better than that, since the average agent closes 7 transaction sides per year, very few will have more than 1 or 2 ratings to their name. You'd need a lot more than that for any kind of statistical relevance. Although the agent with far more transactions is more likely than not to be a “better” agent, it still strikes me as an unfair nut for all but the uppermost tier to crack.

    Any thoughts on how to overcome this?

  11. For those unwilling/unable to read the entire post, simply zero in on Rob’s one sentence plea: “[p]rovide me the consumer with as much data and as much information as you can, and let me make the decision.” I suspect he would agree this statement probably merits some qualification, but its essence, I believe, is what drives the post.

    Rob, I note that each component of your admittedly “imperfect” solution requires significant if not exclusive MLS and/or brokerage oversight/participation. For what it’s worth, I have mixed feelings about this. For example, I think MLS/brokerage involvement is what allows consumers to have confidence in the agent reviews published by HAR, Redfin, and ZipRealty (I’m not sure, but perhaps the same can be said of QSC, NPS, and Diverse Solutions’ recent creation). Consumers know that real former clients are behind the published feedback. But there seems to be something unseemly about putting groups of Realtors® in charge of their own rating systems (I suppose this gets at the interested/disinterested predicament you discuss above).

    I admire your comprehensive approach. Thanks for sharing it.

    Michael Erdman
    President & Founder
    http://www.AgentsCompared.com

  12. Rob: Nice post, as usual. I'd just add that we already have a rating system, called the free market. Agents who survive the usual cut-offs, like 12 months or less, are “rated” by their customers who continue to pay them, refer them and repeat-use them in the future. As for customers rating agents on some sort of metric system, we also have that, too: It's called the referral system, and it works for hairdressers, doctors, painters, plumbers and other service professionals. (Note that in MA, where I live, hairdressers must take a 1600 hour license course, whereas real estate agents only need to take a 40-hour one, so the analogy is really more than apt…)

    I also think that with the rise of social networks — AND the fact that Gen Y has already been observed to choose “referral via social network” over “searching the web for ratings” — we have the best system: organically designed by the consumer, and backed up by the only rating that counts, their dollar (spent or withheld).

    Keep up the great postings!

  13. I'm surprised that there wasn't one mention of LinkedIn in your whole post. Isn't that at least part of the point of LinkedIn?
