One of the disadvantages (some might call it a benefit) of arguing a topic in successive issues of a magazine is that to get the full flavor of the dialog one must keep a fistful of issues at hand in order to understand the context of the current conversation. For that, I apologize, because that is the situation in which we find ourselves as we revisit the ACSM Positional Accuracy Standards. To his credit, Gary Kent has assumed the mantle of defender/advocate of the Standards in general, and these specifications in particular. As said defender, he took a dim view (November issue) of my observations on the subject (July/August issue). I have assumed no mantle other than that of interested surveyor and scribbler, albeit an experienced one. So, let’s roll.
The Inane and Silly
Why would anyone want me to certify that the "Relative Positional Accuracy of the survey does not exceed that which is specified [in the standards]"? I’m to certify that my work is less accurate than what is specified? Now, if one used the old term "Positional Uncertainty," that language would at least make some sense, but as it stands now, if your work measures up–at least in ACSM’s opinion–you can’t use the prescribed certificate. One cannot replace a term like "uncertainty," which has a negative connotation, with a term like "accuracy," which has a positive connotation, in a body of text without also changing some of the surrounding language, so as to keep the meaning from reversing itself. (Compare the 2005 version with the 1999 version.) I’m guessing this is a goof, but how in the world did it make it past all those eyeballs? Your move, Gary.
Now, on to the more interesting part of the discussion. I suggested in July that no one outside of a few surveyors cared a whit about positional accuracy. I should have clarified that I meant the tiresome focus on positional accuracy specifications and our compliance or noncompliance with them. No one cares. Gary somehow ties this in with the problem of pin-cushion corners, but that seems to me an oversimplification. Multiple monuments at a common corner may have a common measurement disagreement to blame, but they just as easily might be due to other evidence considerations. I’m a little weary of hearing that pin-cushion corners are always the fault of us surveyors who "don’t understand measurement theory, and thus refuse to adopt anyone else’s monument, even if it’s only 0.02 feet from where it should be." That simply isn’t so.
But don’t take my word for it–error theory isn’t one of the Rules of Construction. (There could be a whole essay unpacking that, so I won’t explore it further here.)
The Water Valve
From the standards themselves: "Relative Positional Accuracy means the value expressed in feet or meters that represents the uncertainty due to random errors in measurements in the location of any point on a survey relative to any other point on the same survey at the 95 percent confidence level." Hmmm. "Any point on a survey relative to any other point on the same survey." Gary indicates that this specification applies only to control points, the boundary, and buildings, because of the title of the section in which the standard is specified: "Allowable Relative Positional Accuracy for Measurements Controlling Land Boundaries on ALTA/ACSM Land Title Surveys." Oops. One cannot claim a relative accuracy between two points without knowing, to the accuracy claimed, the positions of both of the points under consideration. It is axiomatic. So, even if the section applies only to what Gary maintains it does, unless that water valve in the middle of the site is not a "point on the same survey"–a notion with which I would emphatically disagree–its position must be known to within the accuracy required in order for the control or boundaries to comply.
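To illustrate the point with a sketch (the notation is mine, not language from the standards): basic error propagation gives the covariance of the vector between two points A and B as

\[ \Sigma_{AB} \;=\; \Sigma_A + \Sigma_B - C_{AB} - C_{BA}, \]

where \(\Sigma_A\) and \(\Sigma_B\) are the positional covariances of the two points and \(C_{AB}\), \(C_{BA}\) are their cross-covariances. The Relative Positional Accuracy is the 95 percent confidence figure computed from \(\Sigma_{AB}\); there is simply no way to evaluate it without knowing the uncertainty of both points.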
The culprit, leaving aside the concept itself for a moment, seems to be the language "to any other point on the same survey." Older readers will perhaps remember why that language is there, a history to which Gary alluded. When the modern push for these sorts of accuracy standards surfaced back in the mid-’90s, some of us pointed out that naked tolerance statements would be interpreted as the "real world" accuracy of the property corner itself, even though such statements ignored the non-mathematical aspects of the retracement (which is to say, most of it). Hence the convoluted three paragraphs which precede the standards. Those are the direct response to the negative "press" that Positional Tolerance got, and they are entirely correct. Since, as Gary pointed out, the non-mathematical parts of retracement far outweigh most measurement effects on uncertainty, why don’t the standards apply to the control and nothing else? That was the case for many years; how did that fail our public? Where are any real-world consequences? In the entire nation, can we name three?
Whose Standards?
Standards developed by national organizations are presumed by third parties to be the result of widespread discussion and consensus among the profession. Clearly, these Accuracy Standards are not the result of that. Whose consensus do they represent?
Gary indicates that the source of the present standard was the NSPS Model Standards developed by the NSPS Standards Committee some years ago. My records indicate that is true, but there is more to the story. Correspondence of the NSPS Technical Standards Committee from 1997 discussed the proposed standards–prepared by none other than Gary himself. As I review what was circulated at the time, it is clear to me that it was the direct precursor to what we discuss now (with the exception, that is, of the different classes of survey). My intent here is not to run anyone up the pole, but let’s not point fingers at nameless, faceless, perhaps dead former participants when, in fact, people at the center of the issue now were involved at its inception.
It hasn’t escaped me that the method of foisting this on the title surveying community, after the initial uproar, was the same one used by unscrupulous politicians: introduce the alternative as an option at first, then after some years remove the original method. You know, when the not-so-bright electorate will have forgotten.
We didn’t forget.
Whose standards are these anyway?
Joel Leininger is a principal of S.J. Martenet & Co. in Baltimore and Associate Editor of the magazine.