RTN101: An Introduction to Network Corrected Real-Time GPS/GNSS (Part 2)


This is the second part of an introduction to Network Corrected Real-Time GPS that began in the September 2006 issue (if you have not read it, please do so; it is posted on The American Surveyor website). Many of the following sub-topics will be covered in greater detail in subsequent issues (most by experts in their respective fields).

Space Weather: The "I" in the Sky
The ionosphere is not some layer of upper atmosphere made up by environmentalists (or GPS sales folks). It is a big thick belt of charged particles swirling and undulating far above our planet. More than any other factor, the big "I" stands in the way of perfect satellite-based positioning.

The ionosphere, along with its lesser partner the troposphere (the layer of atmosphere that holds the weather and our breathable air), causes delays in the passage of signals from the satellites to the reference stations and your receiver. The ionosphere generally makes up 90 percent of the error, and the troposphere makes up about 10 percent.

The best we can hope for is to be able to reconcile the delays for each signal we use in as near real-time as possible, taking into account our exact observation location and the conditions of the iono and tropo layers precisely between the receiver and satellite. Short of setting up a laser and an atomic clock, modeling is the answer.

A dual-frequency receiver can model out 99 percent of these delays so they can be accounted for in subsequent positional computations. This is one of the major reasons a receiver is "cool" simply by virtue of having a second frequency. If one can compare the modeling of one receiver to that of another, one can model the theoretical effects of such delays radially about the stationary receiver. But this is, in effect, a one-dimensional model, valid only along the vector between base and rover. There is a limit to the distance at which the rover can function effectively from a base, a short tether compared to that of an RTN: a single-base range is generally 10km (though there are efforts to improve this, with mixed success), while networks function well with station spacing of 50km.
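For the technically inclined, the standard first-order trick behind that second frequency is the "ionosphere-free" combination: because the ionospheric delay scales with the inverse square of the carrier frequency, measuring the same range on two frequencies lets you cancel the delay algebraically. Here is a minimal sketch in Python; the pseudorange numbers are invented purely for illustration.

```python
# Minimal sketch of the classic dual-frequency "ionosphere-free"
# combination. The pseudorange values are made up for demonstration;
# only the formula itself is standard.

F1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F2 = 1227.60e6  # GPS L2 carrier frequency, Hz

def iono_free(p1_m, p2_m):
    """Combine L1/L2 pseudoranges (meters) to cancel the first-order
    ionospheric delay, which scales as 1/f^2."""
    g = (F1 / F2) ** 2
    return (g * p1_m - p2_m) / (g - 1.0)

p1 = 22_000_010.0  # L1 pseudorange carrying ~10 m of iono delay (illustrative)
p2 = 22_000_016.5  # the L2 delay is larger by (F1/F2)^2, about 1.65
print(iono_free(p1, p2))  # ~22_000_000.0, the iono-free range
```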

Comparing the modeling of a network of stations around your location develops a sort of two-dimensional model of the effects caused by the delays. Because your position with respect to the network is known, the RTN is capable of providing you with a highly analyzed and computed set of corrections, custom made for your specific location within the network.
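As a grossly simplified illustration of the idea (this is not any vendor's actual algorithm; FKP, VRS, and the other approaches discussed below are far more sophisticated), one could fit a plane through the corrections observed at three surrounding reference stations and evaluate it at the rover's position. The station layout and correction values here are invented.

```python
# Oversimplified sketch of the interpolation idea behind networked
# corrections: fit a correction surface through surrounding reference
# stations and evaluate it at the rover. All numbers are invented.

import numpy as np

# (easting_km, northing_km, correction_m) at three reference stations
stations = np.array([
    [ 0.0,  0.0, 0.042],
    [50.0,  0.0, 0.055],
    [ 0.0, 50.0, 0.048],
])

def interpolate_correction(e_km, n_km):
    """Fit a plane c = a*e + b*n + d through the station corrections,
    then evaluate it at the rover position."""
    A = np.column_stack([stations[:, 0], stations[:, 1], np.ones(3)])
    a, b, d = np.linalg.solve(A, stations[:, 2])
    return a * e_km + b * n_km + d

print(interpolate_correction(20.0, 15.0))  # correction tailored to the rover
```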

While the sophisticated approaches to modeling and the various software solutions currently available (e.g., FKP, VRS, RTCM 3.0-Network, MAC, and others that will be explained in subsequent articles) may differ significantly in their approach, the practical differences in results for the user may seem negligible. And some networks provide more than one solution (and various flavors of each), all within a single software suite.

But even the best RTN cannot overcome extreme conditions with respect to space weather. As space-based commerce has become an economic player, so has interest in space weather grown, particularly in its anomalies. There is an entire cottage industry of space weather analysis and reporting services, both public and commercial. We are probably a far cry from an hourly "TEC (Total Electron Content) Report" on cable news, but the Internet has many resources to keep you apprised of live and predicted events.
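If you do look up a TEC report, the standard first-order relation turns a TEC value into the range error it implies: the delay in meters is roughly 40.3 times the TEC (in electrons per square meter) divided by the square of the frequency (in hertz). A quick back-of-the-envelope sketch follows; the 50-TECU "storm" value is invented for illustration.

```python
# Back-of-the-envelope link between a published TEC value and the
# range error it implies. The 40.3/f^2 first-order relation is
# standard; the 50 TECU input is invented for illustration.

F1 = 1575.42e6   # GPS L1 frequency, Hz
TECU = 1.0e16    # one TEC unit, electrons per square meter

def iono_delay_m(tec_units, freq_hz=F1):
    """First-order ionospheric group delay in meters for a slant TEC
    expressed in TEC units."""
    return 40.3 * (tec_units * TECU) / freq_hz ** 2

print(iono_delay_m(50))  # a stormy 50 TECU works out to roughly 8 m on L1
```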

At any rate, the average surveyor now has to become more intimately familiar with the nature of space weather, how to interpret the reports (not just those e-mail warnings and rumors about sunspots), and how the RTN can help deal with the effects.

Satellites: Ours, Theirs, and the Others
The GPS (or Navstar) constellation has to be one of the biggest success stories among government initiatives in terms of reliability. These satellites maintain health and availability percentiles in the high nineties. They are so successful and reliable that this has upset the schedule for launching new ones (but let’s not get into that now). And so successful that other parties have sought to emulate them.

GLONASS used to get a bad rap. No, it was not a marketing thing by the manufacturers; it was that only five years ago the system was of such questionable reliability that manufacturers were hesitant to tell their customers to use the satellites for fear that poor results might be blamed on their gear. But now, with a proven commitment to the program that includes funds from other countries and an aggressive launch schedule, GLONASS is reaching new heights in functionality. There is the not-so-small matter of a different reference framework, different timing, and other factors, but in a properly functioning system one would hope that for most uses the differences would seem negligible to the end user. This should all improve in the very near future.

Galileo. It is interesting to see all of the manufacturers touting "Galileo-ready" gear. Ahem. I wish they would let the Europeans know what the secret is, because they are still trying to define their own system. While this is an oversimplification of a very complex subject, what the manufacturers mean is that they are leaving "placeholders" for the system. The first few satellites are up and pumping out some kind of signal, but the final design and signal structure are not completely settled; the radio-spectrum allocation may not change, but the information carried may. At any rate, more satellites will be a good thing.

A new constellation from China, an innovative pseudo-orbit initiative from Japan, and certainly more to come mean that there will be lots of satellites in place and on the way, new frequencies, and healthy competition among the manufacturers of our gear so that we may take advantage of them.

Rovers: L1, L2, Old and New
If you have a receiver that can take advantage of correction data (be that RTK or other broadcast formats), then here is good news: it is likely that (apart from a few frustrating exceptions with some legacy gear) you can take advantage of real-time corrections.

This isn’t just for dual-frequency receivers. Many code-only receivers are capable of utilizing corrections from beacons, NDGPS, WAAS, and others. While expected results from dual-frequency gear via an RTN can be measured in centimeters, results for single-frequency gear can regularly reach subfoot, or a few decimeters. The repercussions of this may seem a bit frightening, as surveyors had seemed able to keep the higher-precision stuff out of the hands of non-surveyors. For better or worse, higher precision was bound to hit the consumer level eventually anyhow; all the more reason to keep up with this stuff.

In Europe, where RTNs were first implemented more than six years ago, the challenge of enabling as many folks as possible to receive the corrections in a non-proprietary manner, without compromising electronic data security, has been well thought out. What we are talking about is a type of real-time correction (e.g., RTCM, CMR, etc.), a manageably small stream of data that many were used to getting from base radios or wider-area beacons. In that scenario (which was current only a few years ago for most), how best to deliver the corrections over a wide geographic area without a tremendous number of radios, repeaters, or old dial-up modems? Where there is a will there is a way, and "mobile data" was one of the biggest factors driving the wave of RTN development.

Mobile data is being revolutionized on a parallel track with the rise of RTN. The Internet means that digital data can reach an entire wide area instantaneously; then it is just a matter of getting the data via the web to the rover. The most common method is cellular (in areas with good coverage, which happened to coincide with the areas where RTN first grew). But this did not necessarily mean dialing up a one-to-one session from a field cell phone to the source of corrections; it meant using the cell phone (or modem) purely as a dumb modem to connect your data collection device to the Internet.

Once connected to the Internet, the corrections are made available as streaming data, presented as a list of sources of standard and custom corrections. Like a sort of streaming radio broadcast service, the user connects to a source and passes the data to the rover just as if it were a radio broadcast from a base or beacon. As an exercise, go to your data collector and set it up for RTK, then choose the options for the base radio, and (if your firmware is reasonably up to date) you will see an option for the Internet as a source (pretty much regardless of manufacturer).

To facilitate this model for Internet broadcast of corrections, an international body has adopted a standard protocol for such transmissions: NTRIP (Networked Transport of RTCM via Internet Protocol). Free, public-domain clients are available (and implemented by most manufacturers for their newer gear) to allow a user to access a network and authenticate (if required). For those with older gear there is yet another cottage industry of folks finding (completely above-board) ways to get data to the older gear using the public-domain version of the NTRIP client. That "inner geek" thing again.
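To give a flavor of how simple the protocol is, here is a hypothetical sketch (not any manufacturer's client; the caster address and credentials are placeholders). An NTRIP 1.0 request is little more than an HTTP GET: an empty mount point returns the caster's source table (the list of available correction streams), while a named mount point returns the raw correction stream itself.

```python
# Minimal NTRIP 1.0 client sketch: fetch a caster's source table.
# Host, port, and credentials are placeholders; real casters
# generally require an account for the correction streams.

import base64
import socket

HOST, PORT = "caster.example.com", 2101   # hypothetical caster
USER, PASSWORD = "user", "password"       # placeholder credentials

def ntrip_request(mountpoint=""):
    """Send an NTRIP 1.0 GET; an empty mountpoint returns the source
    table, a named one returns the raw RTCM correction stream."""
    auth = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    request = (
        f"GET /{mountpoint} HTTP/1.0\r\n"
        f"User-Agent: NTRIP ExampleClient/1.0\r\n"
        f"Authorization: Basic {auth}\r\n\r\n"
    )
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        sock.sendall(request.encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

print(ntrip_request().decode(errors="replace"))  # source table listing
```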

Operations: RTNs Do Not Run Themselves
Though some of the network software suites have been around for many years and are constantly improved upon (i.e., offered by subscription so that network hosts may take advantage of every new feature), and many of the functions are automated, someone still needs to operate the central processing center (CPC) of an RTN.

Much of what the network can learn about itself – coordinate monitoring, space weather modeling, geometric integrity, accounting, usage, data file generation for post-processing, and other aspects of reference station status – is gathered into databases and displayed in real-time on the respective RTN websites.

These RTN software suites are (for the most part) fully mature, and the included out-of-the-box web applications provide nearly everything the user needs to check system status in a completely automated manner. Some networks run continuously operating rovers that feed integrity applications whose results are reported live to users via the web. Some new rover software packages now include live network status data.

With all of this automation, why do we still need system administrators? Because nothing is perfect, even if it is nearly so. And because folks are not usually completely comfortable working in a network environment (at least not for a few months) without some level of "help desk" support. While help desk inquiries should theoretically be limited to system status and access issues not otherwise addressed on the web page, RTN administrators frequently find themselves explaining the fundamentals of RTN over and over to new users because there is a lack of ready resources on the subject (so now you see my motive for writing this: to reduce support calls!).

There are a lot of "moving parts" in the use of an RTN that may be new to the user, and the user may choose to blame the RTN as a whole for their woes (be they communications, their own equipment, or training). Network operators may be all too familiar with the following analogy: back in the days of the Rural Electrification Program, when some folks got their first hook-up to AC power, they might call the power company to ask how to use their toaster. I give surveyors and their can-do attitude more credit than that, and fortunately the calls decrease over time.

An RTN can only be as good as its configuration. While there is a lot of automated monitoring, someone needs to monitor the monitoring, and keep track of settings, upgrades, accounting, and frantic users.

Troubleshooting: What Trouble?
While the reliability and availability of an RTN are directly related to the quality of its administration and configuration, in general these percentiles are in the high nineties. That is not to say that nothing ever goes wrong. No matter how reliable a network is (and there are plenty of third-party commercial, governmental, and academic parties that can monitor network availability for you), the user still needs assurance that the network is up and running so that a problem can be isolated to their own equipment or communications. The fundamental question the user wants answered is: "Is it me or the network?" It is usually not the network, but it may not necessarily be the user either.

There are other factors that can, purely on the rover end, cause a breakdown in the execution of a network-corrected session: cell-hell, bad settings, bad local conditions, not enough satellites, bad space weather, cables, hardware, firmware, software, wetware (the user's brain), and other assorted gremlins. There is an art to troubleshooting, and there are commonly used tips on how to test each.

A great goal for a network is to be able to operate at two condition levels: full speed or stop. Either the field conditions are conducive to full RTN use, or the user should be able to size up the situation quickly and go to plan "B" (terrestrial or post-processing). In evaluating the "stop" condition, the user should consider the factor of "I just can’t get this thing to work today." If you have sky, birds (satellites), and are within the network, you should be at "full thrusters" an amazingly high percentage of the time.

I have been tracking the usage statistics of our own crews for more than four years, and while they might complain a lot about some glaring outages (mostly due to my own bumbling), the numbers show a lot of cost-reducing hours of good RTN time. When you can use an RTN, the potential savings are tremendous; on the rare occasions you can’t, have an alternate plan handy. In days of old, when someone would forget a battery and a whole traverse crew was idled, those "outages" represented much higher costs to a project.

Communications, or Lack Thereof
Digital communications, and more specifically the Internet and mobile data, are the magic that really makes this all possible. There are two key facets: communications between the reference stations and the network, and communications between the network and the user. Each can present unique challenges, but the good news is that new options become available frequently, and we can ride the wave that general consumers are driving.

Innovation: Tapping the Inner Geek
Have you ever seen those websites where folks explain how to do some really wild stuff with everyday objects (like Pop-Tart blowtorches and candy-soda rockets)? In a similar vein, folks are doing some cool stuff with their own GPS gear. With a few visits to the hardware store, these innovators have gotten in touch with their "inner geek" to take RTN to new heights. Examples: how folks are dealing with "cellular-challenged" areas; what other things they are suddenly able to do that would previously have been cost-prohibitive; how folks can leverage the improvements an RTN offers single-frequency work to "infect GIS with accuracy." These and other work-arounds will be covered for some common RTN "points of pain."

Future: Everything Better Except the Coffee
While some things in surveying will never change (okay, some folks have tried to invent self-pounding hubs), some things actually will. While RTN will become just another tool in the truck for determining relative positioning, there are lateral changes that may affect us just as much: we will have to become much more in tune with geodesy; the door will open to increased utilization of mobile data; monument preservation programs may well finally become affordable and practical; the dialogue contrasting "surveying vs. mapping" may focus more on context than methods; and a more open-source trend for solutions may develop. On these and other topics, we’ll ask a few of the prominent folks in the RTN field to comment and speculate. Looking forward to much fun. See you in the next issue!

Here is how your feedback on this "seminar" series would be most useful: email the editor and let us know of any other facets of this subject not already outlined, and/or if you know of an expert in a particular field that might be talked into contributing an article. Stay tuned…

Gavin Schrock is a surveyor in Washington State where he is the administrator of the regional cooperative real-time network, the Washington State Reference Network. He has been in surveying and mapping for more than 25 years and is a regular contributor to this publication.


About the Author

Gavin Schrock, LS

Gavin Schrock is a surveyor and GIS Analyst for Seattle Public Utilities, where he focuses on using digital data to improve the cost ratios for engineering projects. He has worked in surveying, mapping, and GIS for 23 years in the civil, utility, and mapping disciplines. He has published in these fields and has taught surveying, GIS, and data management at local, state, national, and international conferences.