This is a discussion of problems with Joe Mehaffey's averaging analysis.

First of all, there is the graph (which he didn't draw) stating that "very short term averaging shows little improvement in accuracy due to correlation". Actually, the data tells a different story altogether: the real reason the graph appears to show little gain is that a log has been taken of BOTH the time AND the RMS! The best way to see whether correlation really is producing "little gain" is to look at the non-log derivative, something he has failed to provide (largely because everything on his page was supplied by someone else, and the original graph provider has steadfastly refused to plot the derivative).

Later on he states, "Again, averaging for a period of a minute or a few minutes gives you very small improvements in accuracy", making claims about "a few minutes" when he only presents John Galvin's data for ONE minute! Not only that, John has provided the "few minutes" data, but Joe refuses to put it on his web page, probably because he is now in the unenviable position of having to claim that 13-78% (see below) is a "very small improvement in accuracy".

Reduction caused by 4 minutes of averaging (correlation period):

    RMS:                                    16%
    Usage Scenario 1 (average error):       13%
    Usage Scenario 2 (average search area): 53%
    Usage Scenario 3 (maximum error):       30%
    Usage Scenario 4 (maximum search area): 78%

The reason behind these actions is that Joe was one of the camp who originally claimed there was NO improvement from averaging data for less than the correlation period "of 15 minutes". Furthermore, there is no mention of the different usage scenarios, apart from a brief acknowledgement of the search-area situation, and no mention of the cost factor, both of which need to be considered for practical waypoint averaging.
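
To make the log-versus-linear point concrete, here is a minimal sketch (not Joe's or John Galvin's actual data or method) that simulates a first-order autocorrelated position error, computes the RMS left after block-averaging over various periods, and prints the reduction together with its non-log derivative. The sample rate, correlation time and error level are assumptions chosen purely for illustration; this simple model will not reproduce the measured percentages above, it only shows how the reduction curve and its derivative can be inspected on linear axes.

    import numpy as np

    # Assumed, illustrative parameters -- not taken from the page under discussion
    SAMPLE_PERIOD_S = 1.0       # one position fix per second
    CORRELATION_TIME_S = 900.0  # the "15 minute" correlation period claimed in the debate
    SIGMA_M = 25.0              # RMS of the raw position error, metres
    DURATION_S = 24 * 3600      # simulate a day of fixes

    rng = np.random.default_rng(0)

    # First-order Gauss-Markov (AR(1)) error model: e[k] = a*e[k-1] + w[k]
    a = np.exp(-SAMPLE_PERIOD_S / CORRELATION_TIME_S)
    w_sigma = SIGMA_M * np.sqrt(1.0 - a * a)
    n = int(DURATION_S / SAMPLE_PERIOD_S)
    err = np.empty(n)
    err[0] = rng.normal(0.0, SIGMA_M)
    for k in range(1, n):
        err[k] = a * err[k - 1] + rng.normal(0.0, w_sigma)

    def rms_after_averaging(errors, avg_seconds):
        """RMS error of non-overlapping block averages of the given length."""
        m = int(avg_seconds / SAMPLE_PERIOD_S)
        usable = (len(errors) // m) * m
        blocks = errors[:usable].reshape(-1, m).mean(axis=1)
        return np.sqrt(np.mean(blocks ** 2))

    avg_times = [1, 60, 240, 900, 1800]  # seconds: raw, 1 min, 4 min, 15 min, 30 min
    rms_values = [rms_after_averaging(err, t) for t in avg_times]

    print("avg time (s)   RMS (m)   reduction vs raw")
    for t, r in zip(avg_times, rms_values):
        print(f"{t:12d}   {r:7.2f}   {100.0 * (1.0 - r / rms_values[0]):5.1f}%")

    # The non-log derivative dRMS/dt shows how quickly accuracy is still improving;
    # on a log-log plot the same data is compressed and the early gain looks flat.
    d_rms = np.diff(rms_values) / np.diff(avg_times)
    print("dRMS/dt between points (m per extra second of averaging):", d_rms)

Plotted on log-log axes, the same numbers get squeezed into a nearly flat early section, which is exactly why looking at the reduction and its derivative on linear axes is the more honest way to judge whether averaging for a minute or a few minutes is worthwhile.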