value, prices & data / theory, flaws & spin in Manhattan apartment prices
bothered by smart guys
The David Leonhardt Economix column in last week’s NY Times business section, What Statistics On Home Sales Aren’t Saying, has been bothering me. I blogged about it last week, with a link to Matrix’s helpful discussion of the sources and flaws in the various bits of market data that get reported, Home Prices: To Tell The Truth, The Whole Truth And Nothing But the Truth (Sort Of).
predicting the present is not easy
That Economix column brought to mind a bit of insight about economists: how can we expect them to predict the future when they have so much trouble predicting the present? (Source unknown, but I read a variation on this recently [in the NY Times??].)
Leonhardt and Miller both address the difficulty in applying the data available about home prices to the immediate question “where is the real estate market right now?”
Seems to me that Leonhardt took two truisms (all data is limited + all data requires some interpretation to be useful) and turned them into something more nefarious (all data is bad + all interpretation is worthless spin).
The hook that he used to set up his piece and his analysis — auction data from Naples, Florida – is a particularly strange choice for a discussion of ‘market’ data.
Let’s go back to Econ 101 and the definition of Fair Market Value:
- start with an informed seller who actually intends to sell
- add an informed and willing buyer
- provide enough time for reasonable exposure to the relevant market for buyers
- and – the kicker for this discussion – assume that neither party is “compelled” to act (i.e., that neither buyer nor seller faces “undue pressure”)
I understand that in the real world lots of homes are sold by people who have to sell at that moment, and that this results in a low comp being out there. I suspect that at least some of the prices reached that Saturday morning at “the Naples Beach Hotel and Golf Club, [when] a few dozen houses went on the block in front of about 500 bidders” would not meet this definition of Fair Market Value.
In a down market (as everyone but OFHEO seems to agree is happening in Naples, based on Leonhardt’s sources), I would think that the auction process is best suited for people who have to sell at that moment. They have either tried to go the traditional route through a real estate agent for a while without success, or they have decided they cannot wait for that process to occur.
do auctions qualify as ‘fair’ in that way?
So I don’t know that it is fair and balanced to start an analysis comparing real estate data to what is really going on in the market with such a flawed fact set.
There is a wealth of data out there about gross market conditions and direction (no pun intended). All of it is limited (and – therefore – perhaps flawed) but not necessarily bad. Take the various data sets comparing some category of recent sales to a similar category of past sales.
Leonhardt (who is otherwise a data-guy) offers a peculiarly damning criticism: “the statistics have a number of flaws, perhaps the biggest being that they are based only on homes that have actually sold”. He goes on to say that unsold inventory can be a useful thing to know in assessing the market. While I agree that “unsolds” are (a) interesting and (b) relevant, they are very hard to measure effectively. Indeed, they are impossible to measure by counting and measuring “solds”. Does that mean we should not count “solds”? Of course not. So why does Leonhardt lob this criticism at that data?
if that is the point, where is the data?
To continue to pick on Leonhardt for a minute, he ends with a great newspaper conclusion: “We may now be living on both borrowed money and borrowed time”. He gets there by talking about the seriously troubling “fact” that “growing numbers of these families are falling behind on their mortgage payments, and they won’t be able to bail themselves out by refinancing or selling their homes”. This “fact” seems to be true (it is consistent with lots of reports I have seen), but he does not spend any time proving it.
But that is a very different point than saying the data overlook a problem, then ‘proving’ it by talking about auction sales.
If growing numbers of people are falling behind on mortgage payments and their homes are no longer worth what they paid for them, and (therefore) growing numbers of people will begin to default because they cannot refinance their mortgages, that will be a big problem.
But if that is the point, talk about negative equity (not just equity declines) and at least mention that standard mortgage rates have hardly increased year-over-year.
If you are going to talk about a housing mortgage crisis, find the people who (1) cannot carry their mortgage, (2) cannot refinance into a better rate, and (3) have little or no equity. Because someone in a home that has declined to 90% of its purchase price will not have a problem carrying their mortgage unless the rate re-sets to an uncomfortable level or their personal income suffers materially. That group may be a “growing number” but Leonhardt has not established that. (I am not saying it is not true, I am just throwing data darts at a data guy.)
serving fudge every three or four years?
One final numbers gripe at Leonhardt before moving on to Miller (I will be less cranky there). Leonhardt’s dramatic closing line quoted above (“We may now be living on both borrowed money and borrowed time”) is preceded by an ominous reference to ticking clocks: “[o]ver the last few decades, the world’s financial system has endured a crisis roughly once every three or four years”.
The problem with this drama is that it tastes great but is less filling: the support for “every three or four years” over “the last few decades” is awfully vague about timing. He cites “the stock market crash of 1987, the Asian and Mexican meltdowns in the 1990s, the dot-com implosion of 2000 and, most recently, the aftermath of Sept. 11, 2001”. When did the Asian and Mexican market problems happen? Was it three years after 1987 and then three years after that? When he says “the aftermath of Sept 11, 2001” is he talking about immediately (which would be the one year after the dot-com implosion of 2000) or did that aftermath happen in 2003? If it happened in 2003, then we in 2006 might have some bad clocks ticking, but then maybe he is fudging his numbers….
At an elemental level, Leonhardt is bothered because the stats he sees do not mesh with the credible opinions he comes across (his four or five talking heads) and some very specific data points (such as the Naples, Florida auction results).
Miller steps back
Miller looks at the same mess of data and offers more insight (your mileage may vary):
“Its very difficult for most consumers, government officials, academia and real estate professionals to get a real world gauge on how a real estate market is actually doing. Tried and true methods all seem to have some sort of flaw and when a market is in transition, the changes become even more pronounced. And then throw in the source of the information, with the presence of spin, makes the effort even more daunting. Those covering the market, whether it be Big Media and the blogosphere tend to gravitate towards whatever is released that day.”
That is a pretty straightforward analysis of why it is hard to get good (comprehensive) data and why The Talk focuses on the most recent numbers – whatever they happen to be.
The two data camps that Miller sees are index-based and price reports. Both have problems.
You’ve got producers of indexes telling you that prices are less meaningful, yet users of the indexes often view them as a “black box” and don’t grasp how the information was calculated (do we hear “seasonally adjusted”?). Indexes tend to be created for macro markets because the data set needs to be large. Cynicism has been a detriment to reliance on indexes.
It takes an academic like Shiller, with a big data set, lots of processing power, and the confidence to publish the results of an algorithm and expect them to be accepted.
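To see why index builders like Shiller bother with repeat-sales pairing instead of just reporting the prices of whatever sold, here is a minimal sketch with made-up numbers (the apartments and prices are hypothetical, purely for illustration): the same apartments are worth exactly what they were a year ago, but the sales mix shifts toward cheaper units, so a median-of-solds report shows a plunge while a repeat-sales comparison shows no change.

```python
from statistics import median

# (apartment_id, sale_price) pairs; each apartment's value is unchanged
# year-over-year -- only the mix of what happened to sell differs.
last_year = [("A", 1_000_000), ("B", 1_000_000), ("C", 500_000)]
this_year = [("A", 1_000_000), ("C", 500_000), ("D", 500_000)]

# Median of solds: falls 50% even though no apartment lost value.
print(median(p for _, p in last_year))  # 1000000
print(median(p for _, p in this_year))  # 500000

# Repeat-sales view: compare only apartments that sold in both periods.
last = dict(last_year)
ratios = [price / last[aid] for aid, price in this_year if aid in last]
print(sum(ratios) / len(ratios))  # 1.0 -- no change
```

Three sales is a cartoon, of course; the real methodology needs the “big data set” precisely because so few properties trade twice in any window, which is why these indexes only exist for macro markets.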
But reliance on published price data is also fraught:
Those that rely on housing prices tout that they are the real thing yet most resources for housing prices tend to be non-economist types, trade groups and real estate firms, because they tend to be easier to generate and report than an index. There are a growing number of market studies put out in the public domain by local real estate brokers and agents (and of course, appraisers) to try to bridge the gap between the national stats and local markets. However these reports are often limited by the size of the data, limited understanding of what the data really means and are clouded by their intentions.
…and nails a 3
Miller hits an important nail here. Anyone trying to offer useful insight about The Real Estate Market needs some data as a base. The “best” data (meaning, deepest and most historical) is usually national data.
Everyone repeat after me: “all real estate is local”. So useful commentators must apply national data to each unique market. This is not easy, even out there in America, where (I imagine) local data are much better (more complete, more historical) than Manhattan data.
Any discussion about local markets will inevitably break down to opinions about what local trends match (perceived) national trends and what local trends diverge from national trends and – of course – why.
This has gone on too long for a blog. So I will stop after one more paragraph.
this is way too long, so…
In the current environment, I hold these truths to be self-evident (if they do not seem so to you, write your own commentary!):
- each data set is limited to what it is (a Rumsfeldian doctrine)
- more data are better than less data
- applying national data to any local market is never simple (especially one as ‘special’ as Manhattan)
- in order to be useful, data require principled and honest commentary (sometimes known as spin), but too much spin without data is propaganda while too much data without commentary is indigestible regurgitation
(I may regret this in the morning.)
© Sandy Mattingly 2006