Introduction
The impetus for this research is a data structure that efficiently stores and queries scattered point data. These points may originally be neatly arranged on a grid, or randomly scattered across the surface of a sphere (e.g. the earth), but the data structure makes no assumption in that regard. It could expose member functions such as add(point) or add(points, amount). Each point contains three scalar values: the longitude, the latitude, and the data value itself. This data value could be geographical data such as land height or weather data such as air temperature, but again, the data structure is agnostic about the data itself other than it being a scalar value.
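To make that concrete, here is a minimal sketch of what such an interface might look like. The names Point and PointIndex and the exact signatures are illustrative, not a fixed API:

```cpp
#include <cstddef>

// Illustrative sketch only; names and signatures are not a fixed API.
struct Point {
    double longitude; // degrees
    double latitude;  // degrees
    double value;     // the scalar sample: land height, temperature, ...
};

class PointIndex {
public:
    void add(const Point& point);                       // add a single point
    void add(const Point* points, std::size_t amount);  // add a batch of points
};
```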
Data Structure
The purpose of the data structure is ultimately to efficiently and accurately query for the point(s) nearest to another point, so that an average value can be computed at that point. Suppose the requirement is to create an image representing the air temperature (or rainfall, cloud cover, whatever the scalar value of each point represents): this data structure could be queried for each point (i.e. pixel) in the image. Once nearby points are found, the algorithm used to average their scalar values is outside the scope of this, but a good candidate is the inverse distance weighted algorithm.
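As a reference point, a minimal sketch of inverse distance weighting over an already-retrieved set of neighbours might look like the following. Neighbour and idwAverage are hypothetical names, and the power of 2 is just a common default:

```cpp
#include <cmath>
#include <vector>

// A neighbour returned by the query: its sample value and its distance
// to the query point. Names are illustrative.
struct Neighbour {
    double value;
    double distance;
};

// Inverse distance weighting: nearer samples contribute more.
double idwAverage(const std::vector<Neighbour>& neighbours, double power = 2.0) {
    double weightedSum = 0.0, weightTotal = 0.0;
    for (const Neighbour& n : neighbours) {
        if (n.distance == 0.0) return n.value; // exact hit: no averaging needed
        double w = 1.0 / std::pow(n.distance, power);
        weightedSum += w * n.value;
        weightTotal += w;
    }
    return weightedSum / weightTotal;
}
```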
Within the data structure, the data points will not be stored in spherical coordinates, but as n-vectors (unit vectors from the sphere's center). The reason for this is that the sampling algorithm - finding the N closest data points - is more efficient and accurate using vector algebra. While a full treatment is outside the scope of this, to summarize: it is easier to determine how close two points on a sphere are if those points are represented by unit vectors than by latitude and longitude. For example, two points separated by the same difference in longitude are farther apart at the equator than closer to the poles. Even if we are only concerned with relative distances, and so could skip the square root in the distance formula, a naive latitude/longitude distance is still inaccurate for this reason. The international date line, where longitude wraps around, presents another problem that would have to be dealt with, as do the poles. But you get the idea - vectors are more stable, accurate, and efficient.
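For reference, converting latitude/longitude to an n-vector takes only a few lines. A sketch, assuming degrees in and the common convention that z points to the north pole and x to (lat 0, lon 0):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Convert latitude/longitude (degrees) to a unit n-vector.
Vec3 toNVector(double latDeg, double lonDeg) {
    const double kDegToRad = 3.14159265358979323846 / 180.0;
    double lat = latDeg * kDegToRad;
    double lon = lonDeg * kDegToRad;
    return { std::cos(lat) * std::cos(lon),
             std::cos(lat) * std::sin(lon),
             std::sin(lat) };
}
```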
I've always been a fan of Binary Space Partitions - each branch node of a BSP has two areas, one below and one above (or one left and one right, or one in and one out, however you like to think of it). To go along with storing points as vectors, branch nodes within the structure also partition their space using a vector, which in this case forms the normal of a plane that bisects the unit sphere. All data points in a node (or its children) lie either above or below this plane (points exactly on the plane can be allocated to either side as appropriate). Points below the plane are placed in the first child of the node, and points above it in the second child. We call this plane normal the split axis of the node.
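A sketch of what a node might look like under this scheme, reusing the Vec3 type from the earlier snippet (field names are illustrative):

```cpp
#include <memory>
#include <vector>

// Branch nodes carry a split axis and two children; leaf nodes carry points.
struct Node {
    Vec3 splitAxis{};            // normal of the bisecting plane (branches only)
    std::unique_ptr<Node> below; // child for points with dot(p, splitAxis) < 0
    std::unique_ptr<Node> above; // child for points with dot(p, splitAxis) >= 0
    std::vector<Vec3> points;    // data points (leaves only)
    bool isLeaf() const { return !below && !above; }
};
```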
Querying the structure for the closest N points then becomes trivial. For any branch node, compute the dot product of the query point and the node's split axis. If it is negative (the point is below the split plane), recursively query the first child node; if positive (the point is above the split plane), query the second child. For a leaf node, compute the dot product of the query point with each data point it contains, keeping a sorted list of the closest N points. The one caveat is that in branch nodes, after recursing into one child, it may be necessary to recurse into the other child as well: if the farthest of the N points found so far is farther from the query point than the split plane is, the other side may still contain closer points. But this check is trivial, since we are only comparing dot products. No expensive computations are necessary to find the N closest points - just a bunch of dot products. As dot products of unit vectors range from -1 to 1 (-1 being farthest apart and 1 being coincident), two points are closer the higher their dot product.
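Putting that together, here is a sketch of the recursive query, assuming the Vec3 and Node types from the sketches above. The pruning bound sqrt(1 - side^2) is the largest dot product any point on the far side of the plane can have with the query point, which makes the "do we need the other child?" check a simple comparison:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Recursive N-closest query. `best` holds (dotProduct, point) pairs sorted
// descending, so best.back() is the current farthest of the N candidates.
void queryNearest(const Node& node, const Vec3& q, std::size_t n,
                  std::vector<std::pair<double, Vec3>>& best) {
    if (node.isLeaf()) {
        for (const Vec3& p : node.points)
            best.emplace_back(dot(q, p), p); // higher dot product = closer
        std::sort(best.begin(), best.end(),
                  [](const std::pair<double, Vec3>& a,
                     const std::pair<double, Vec3>& b) { return a.first > b.first; });
        if (best.size() > n) best.resize(n);
        return;
    }
    double side = dot(q, node.splitAxis);
    const Node& nearChild = side < 0 ? *node.below : *node.above;
    const Node& farChild  = side < 0 ? *node.above : *node.below;
    queryNearest(nearChild, q, n, best);
    // No point on the far side of the plane can have a dot product with q
    // above sqrt(1 - side^2); skip that subtree if our worst candidate beats it.
    double maxFarDot = std::sqrt(std::max(0.0, 1.0 - side * side));
    if (best.size() < n || best.back().first < maxFarDot)
        queryNearest(farChild, q, n, best);
}
```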
Once the query is complete and the list of N points found, the actual distances can be calculated if necessary. This is done by recovering the angles from the already-calculated dot products; each angle can then be multiplied by the radius of the earth to get an exact distance - again, only if necessary. But for averaging, these extra calculations are not needed.
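For example, a sketch of that recovery step, assuming a mean Earth radius of 6371 km:

```cpp
#include <algorithm>
#include <cmath>

// Recover an exact great-circle distance from an already-computed dot product.
double greatCircleKm(double dotProduct) {
    const double kEarthRadiusKm = 6371.0; // mean Earth radius (assumed)
    double clamped = std::min(1.0, std::max(-1.0, dotProduct)); // guard acos
    return std::acos(clamped) * kEarthRadiusKm;
}
```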
As a final note, the data structure lends itself well to graphics hardware. In my particular case, rendering an image using such a data structure on the CPU may take several minutes, but on the GPU takes a fraction of a second.
Problem
The problem - common to any space partitioning tree - is how to initially create the data structure. Again, the points are not assumed to be arranged in any particular way; as far as we are concerned, they are a "point soup". They can be specified one at a time - addPoint(point) - or all at once - addPoints(points) - or both. Specifically: how can the split axis of any branch node be determined efficiently while splitting the points evenly (the same or a similar number of points on either side)? The problem is unique here because the points are not arranged on a two-dimensional surface or throughout three-dimensional space, but lie on a unit sphere.
My current solution to the problem is not terribly elegant. For each pair of data points, compute the axis that splits them evenly: subtracting one point from the other and normalizing the result gives the normal of a plane through the sphere's center that bisects the pair (every point equidistant from the two lies on the great circle where this plane cuts the sphere). This normal is then tested against every other point in the node to get 1) the number of points on either side of the plane and 2) the distances of the points to the plane. A good split plane 1) splits the points evenly, 2) has similar distances on either side, and 3) has large distances on either side. Since there are O(n²) pairs and each candidate plane is tested against all n points, determining the split plane is O(n³), which becomes prohibitive for nodes containing a large number of points. However, I have not been able to determine a better solution.
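A sketch of this brute-force search, scoring candidates only by how evenly they split the points (the distance-based criteria 2 and 3 are omitted for brevity), reusing Vec3 and dot from the earlier sketches:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Brute-force split-axis search: O(n^2) candidate pairs, each tested against
// all n points, hence the O(n^3) cost discussed above.
Vec3 chooseSplitAxis(const std::vector<Vec3>& pts) {
    Vec3 bestAxis = { 1.0, 0.0, 0.0 };
    std::size_t bestImbalance = pts.size() + 1;
    for (std::size_t i = 0; i < pts.size(); ++i) {
        for (std::size_t j = i + 1; j < pts.size(); ++j) {
            Vec3 d = sub(pts[j], pts[i]);
            double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
            if (len < 1e-12) continue; // coincident points: no unique bisector
            Vec3 axis = { d.x / len, d.y / len, d.z / len };
            std::size_t above = 0;
            for (const Vec3& p : pts)
                if (dot(p, axis) >= 0.0) ++above;
            std::size_t below = pts.size() - above;
            std::size_t imbalance = above > below ? above - below : below - above;
            if (imbalance < bestImbalance) {
                bestImbalance = imbalance;
                bestAxis = axis;
            }
        }
    }
    return bestAxis;
}
```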
But the advantages of such a data structure still outweigh this expense. In my particular case, the time spent initially creating the tree is worth the time saved during queries. I should mention that if the data points are known ahead of time, it is faster to specify them all at once, so the tree only rebuilds itself once, rather than one at a time, which may cause the tree to rebuild itself multiple times.
I'm guessing this is intended as a general scheme for nearest-neighbour search on a sphere, but here are the kinds of alternatives I would be considering:
As well as spatial partitioning schemes, you may also be able to simply store a list of nearest neighbours on each point (you could pre-store the angular 'distance' too). Your search then becomes a case of finding the nearest point and recursing outward (with a bitmask for points already visited) until you have found all the nearest points of interest.
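A hedged sketch of that idea, with a plain visited vector standing in for the bitmask (IndexedPoint and expandFrom are hypothetical names):

```cpp
#include <cstddef>
#include <queue>
#include <vector>

// Each point stores the indices of its k precomputed nearest neighbours.
struct IndexedPoint {
    std::vector<int> neighbours;
};

// Starting from the point nearest the query, expand outward through the
// neighbour lists until enough points have been collected.
std::vector<int> expandFrom(const std::vector<IndexedPoint>& pts,
                            int seed, std::size_t wanted) {
    std::vector<bool> visited(pts.size(), false);
    std::vector<int> result;
    std::queue<int> frontier;
    frontier.push(seed);
    visited[seed] = true;
    while (!frontier.empty() && result.size() < wanted) {
        int idx = frontier.front();
        frontier.pop();
        result.push_back(idx);
        for (int nb : pts[idx].neighbours) {
            if (!visited[nb]) {
                visited[nb] = true;
                frontier.push(nb);
            }
        }
    }
    return result;
}
```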
Another data structure that might be useful is a two-dimensional grid of all the point combinations, with the angular distance between each pair pre-calculated. Also consider that knowing the exact distance to neighbours may not be as important as finding the 'rough' neighbourhood, depending on the application, so you may be able to avoid most of those calculations.
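A sketch of such a pre-calculated table, again reusing Vec3 and dot from the earlier sketches (O(n²) memory and time, so only viable for modest point counts):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Pre-calculate the angular distance between every pair of points.
std::vector<std::vector<double>> buildAngleTable(const std::vector<Vec3>& pts) {
    std::vector<std::vector<double>> table(pts.size(),
                                           std::vector<double>(pts.size(), 0.0));
    for (std::size_t i = 0; i < pts.size(); ++i) {
        for (std::size_t j = i + 1; j < pts.size(); ++j) {
            double d = std::min(1.0, std::max(-1.0, dot(pts[i], pts[j])));
            table[i][j] = table[j][i] = std::acos(d); // symmetric
        }
    }
    return table;
}
```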