Is there anything like a Wine Critic Consensus?

A study of Washington and California wines, published by the American Association of Wine Economists, examines the degree of consensus in quality ratings of US wines among prominent wine publications.
"Ratings are an important source of information for both wine consumers and wine researchers. For the purpose of wine research, are ratings on the ubiquitous 100 point scale reliable, objective measures of quality? The value of expert judgment has been called into question by a number of studies, especially in the context of wine competitions and tasting events. Our study is part of a much smaller literature focusing on ratings by expert critics. We look at four publications: Wine Spectator (WS) and Wine Enthusiast (WE), which review a broad selection of the wine market, and Wine Advocate (WA) and International Wine Cellar (IWC), which are more selective and focus more on the high-end of the market.
We find a similar level of consensus, measured by the correlation coefficient, between some pairs of critics regarding wines from California and Washington as Ashton (2013) does for critics of Bordeaux wine.
However, among other pairs the correlation is much lower, suggesting almost no consensus. Consensus is not found to be related to the blinding policies (or lack thereof) of the critical publications. Our findings show that quality ratings have a substantial degree of objectivity to them.
We undertook a straightforward analysis of the degree of consensus among prominent critical publications in the U.S. The degree of consensus, as measured by the correlation coefficient of wine quality ratings, varied widely between pairs of critics and also by varietal.
Among these publications, Wine Enthusiast’s opinion diverges from the others the most.
Excluding Enthusiast, we conclude that the level of consensus in wine ratings by professional critics in the U.S. market is high, and similar to the levels Ashton (2013) found for consensus among critics of Bordeaux wine. Further, the level of consensus between each pair of Spectator, Advocate and IWC substantially exceeds the level of consensus between wine competition judges, as Ashton (2012) reports the mean correlation coefficient across many studies to be just 0.34 (from, inter alia, Brien, May and Mayo, 1987; Cicchetti, 2004a; Hodgson, 2009a; Ashton, 2011).
An optimistic and conventional explanation for the greater consensus between critics than judges is that it is due to more extensive experience evaluating and comparing wines. It is possible though that critics are influenced by knowledge of price, winemaker and possibly also the ratings of other critics, which could lead to greater similarity of ratings. However, we do not observe greater consensus among non-blind critics than with the more heavily blinded critic Spectator."
Source: http://www.wine-economics.org/aawe/wp-content/uploads/2014/07/AAWE_WP160.pdf
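The consensus measure the study uses, the correlation coefficient between two critics' scores for the same wines, is straightforward to compute. Below is a minimal sketch in Python; the critic names and ratings are purely illustrative and are not taken from the study.

```python
# Pearson correlation of two critics' 100-point ratings for the same wines.
# The ratings below are hypothetical, invented for illustration only.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative ratings for six wines by two hypothetical critics.
critic_a = [88, 91, 95, 84, 90, 93]
critic_b = [90, 89, 96, 85, 88, 94]

print(round(pearson(critic_a, critic_b), 2))  # prints 0.91
```

A value near 1 would indicate strong consensus between the pair of critics; the paper contrasts such pairwise values with the mean of roughly 0.34 that Ashton (2012) reports for wine competition judges.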