By Dan Berger

Judging wine isn’t a science, though those who publish numerical wine rankings imply a score is as precise as the grade on a math test.

I’ve written before of the difficulty of determining how a 2007 Chevalier-Montrachet that is blessed with a score of 93 compares with a 91 awarded to a 2010 Corton-Charlemagne. I suppose the 93 is seven points away from perfection, but what is perfection? Without a definition of what each score represents, we are left with only a vague idea of what the score means.

Lacking an understanding of this “system,” I decline to use numbers in writing about wine, in part because my tasting venues vary so greatly, thus making the context as much a part of the evaluation experience as is the wine. (The example I always use: compare a great Burgundy served in crystal at a Paris three-star with the same wine poured into a plastic cup at a high school football game.)

Yet I must admit that numbers make a certain amount of sense in scoring a homogeneous grouping of wines in a blind tasting.

I rate wines almost daily; roughly 20 or 30 times a year I rank them in a double-blind tasting, where some form of numerical score-keeping is essential.

It’s all well and good to pull corks on five candidates for a Wine of the Week an hour before deadline and let a gut reaction determine which wine gets the nod. It’s quite another to face 39 Rieslings at a wine competition and, as part of a panel, be expected to award gold, silver and bronze medals.

And it’s even trickier to sit down to score 35 Pinot Noirs staged by a marketing group. The differences between one event and another are significant.

For one thing, wine judgings in which there are five or six judges on a panel are unwieldy, since opinions on what constitutes a quality wine can vary greatly, and for differing reasons. When those opinions diverge among six judges, the coordinator of the event often simply adds up the judges’ scores and divides by the number of judges.

This occasionally rewards mediocrity, leaving distinctive (and potentially great) wines with lesser medals. The more judges you have in a judging, the more mediocre wines have a chance to score well.

Moreover, weak judges tend to vote extremely conservatively, often awarding no gold medals at all, which drags down the total medal count, yielding more bronzes than are appropriate and fewer golds. This is true not only with Chardonnay and Cabernet but also with lesser varietals. Weak judges defend themselves with, “Well, it’s only Chenin Blanc…”

For my internal rankings during double-blind evaluations, I use a modified form of the UC Davis 20-point scoring system. I use points only as a vague measure of relative quality, wine to wine, and not as a hard-and-fast, etched-in-stone number.

My numerical score is adjustable after I see what the wine is and how it is priced. (Since I’ll never publish these numbers, I don’t have to explain them to anyone. They are for my use only.)

Here is how I use the scale:

Anything from 1 through 10 is badly flawed wine, not commercial; 11 and 12 are flawed wines where I perceive that the wine maker had control over the problem and didn’t solve it; 13 is reserved for wines I judge to be from a wine maker who is unclear on the concept, or who had poor grapes and/or bad technique; 14 is an acceptable wine, drinkable but not memorable, though not good enough to recommend.

Any wine scoring 15 or more on my evaluation sheet is good enough to recommend: 15 is for wines I would drink, though only with an understanding that the wine was aimed at a specific sort of job (Chianti with pasta, etc.), and was priced right.

A wine scoring 15.5, the score Australians use for a bronze medal, is for a worthy wine, one I’d be happy to recommend, but only if the price were fair. This wine must have some varietal character.

Starting at 16 we get to exemplary examples of the breed, wines with definable varietal character. This is a very good wine.

A score of 17, which would get a silver medal in the Aussie system, I use for excellent wines, those I’d buy if the price were reasonable; 17.5 to 18 is for wines that are eye-opening, wines about which I’d have only minor qualms.

A score of 18.5, the spot where the Aussies award gold medals, denotes a wine of near perfection, one I’d buy at almost any price. And 19 and 20 are reserved for wines that offer utterly indefinable greatness, far surpassing the bulk of the breed.

I like using numbers when scoring homogeneous groupings of wine because at the end I can look at the number of wines I scored highest and see how that count compares with, say, the count from a comparable group of like wines.

But as for publishing them, I decline. The scores are only for my personal reference and allow me to relate one wine to another in the homogeneous grouping. Otherwise, scores on wines tasted in different settings are meaningless to others.