GCP Placement


Version 0.6
12th Apr, 2019

Based on our experiments, we design GCP placement algorithms around the coverage radius. In particular, we provide algorithms for estimating the coverage of a given set of GCPs within an AOI (Section 1), and for placing GCPs so as to cover an AOI (Section 2).

Notation. We think of a region as a subset of the 2-D plane with positive area. We represent regions by calligraphic letters: A, X, etc. The area of a region is denoted μ(·). The ball of radius R around a point p is denoted B(p, R).

1 Coverage and Heatmaps

We first address the simpler question of estimating the coverage of a given set of GCPs, G, within an AOI A. The region covered by a single point p ∈ A up to radius R is B(p, R). Thus, the total region covered by the GCPs up to radius R gives the “heatmap” of the GCPs at radius R:

(1)  H(G, A, R) := ( ⋃_{p ∈ G} B(p, R) ) ∩ A.

This provides a ready-to-compute formula for generating heatmap visualizations. To compute the coverage radius of a set of GCPs, we perform a binary search using the above formula. As the coverage radius may be an arbitrary real number, we estimate it up to an error ϵ.

Algorithm 1. EstimateCoverage

Input. The AOI, A; a set of GCPs, G; and an error threshold ϵ.
Output. The coverage radius R, up to error ±ϵ.

  1. set l = 0, h = 2·diam(A).
  2. set m = (l + h)/2.
  3. while h − l > ϵ:
    1. set A′ = A \ ⋃_{g ∈ G} B(g, m).
    2. if A′ is empty: set h = m; otherwise set l = m.
    3. set m = (l + h)/2.
  4. return m
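As a concrete illustration, Algorithm 1 can be sketched in Python by approximating the AOI with a finite grid of sample points, so the emptiness test in step 3.1 becomes a check over the grid. The grid representation, spacing, and the name `estimate_coverage` are assumptions of this sketch, not part of the design above.

```python
import math

def estimate_coverage(aoi_points, gcps, eps):
    """Binary-search the coverage radius (Algorithm 1) on a discretized
    AOI: A' is empty iff no sample point is farther than m from all GCPs."""
    # diameter of the sampled AOI (brute force; fine for small grids)
    diam = max(math.dist(p, q) for p in aoi_points for q in aoi_points)
    lo, hi = 0.0, 2 * diam
    while hi - lo > eps:
        mid = (lo + hi) / 2
        uncovered = any(all(math.dist(p, g) > mid for g in gcps)
                        for p in aoi_points)
        if uncovered:
            lo = mid          # radius mid leaves part of the AOI uncovered
        else:
            hi = mid          # radius mid already covers every sample
    return (lo + hi) / 2

# unit-square AOI discretized to an 11 x 11 grid, one GCP at the center;
# the coverage radius is the center-to-corner distance, sqrt(0.5)
grid = [(i / 10, j / 10) for i in range(11) for j in range(11)]
radius = estimate_coverage(grid, [(0.5, 0.5)], eps=1e-6)
```

The grid only approximates the true region, so the answer is accurate up to the grid spacing in addition to ϵ.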

2 GCP Placement

Our GCP placement problem is a version of Geometric Set Cover. Determining the minimum number of GCPs (and their positions) is computationally hard, so we settle for an approximation algorithm. The algorithm begins with the initial AOI, denoted A, and iteratively places GCPs, updating the remaining uncovered region of the AOI as it goes.

The iterative procedure picks the position of the next GCP by randomly sampling from a distribution that prioritizes positions covering the most area.[1] The coverage of a point p on A, denoted X(p, A), is defined as:

(2)  X(p, A) := B(p, R) ∩ A.

The distribution, denoted D(A), samples a point p ∈ A with probability proportional to the area of its coverage, μ(X(p, A)).

We describe the placement algorithm assuming we can sample from the distribution D(A). The sampling sub-routine uses a few algorithmic techniques and is described in the rest of the section.

Algorithm 2. GCP Placement

Input. The AOI, A; the coverage radius R; and an error threshold ϵ.
Output. A collection of GCPs, G.

  1. set G = ∅
  2. set A′ = A
  3. while μ(A′) > ϵ:
    1. sample p from D(A′)
    2. add p to G
    3. set A′ = A′ \ X(p, A′)
  4. return G
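A discrete sketch of Algorithm 2: representing the AOI as a set of cells of equal area, sampling from D(A′) reduces to picking an uncovered cell with probability proportional to the uncovered area a ball around it would cover. The cell representation and the name `place_gcps` are assumptions of this sketch.

```python
import math
import random

def place_gcps(cells, cell_area, R, eps, seed=0):
    """Algorithm 2 on a discretized AOI; each cell is a sample point
    standing for `cell_area` units of area."""
    rng = random.Random(seed)
    uncovered = set(cells)
    gcps = []
    while len(uncovered) * cell_area > eps:          # step 3: area(A') > eps
        pts = list(uncovered)
        # weight of p = uncovered area inside B(p, R), a discrete
        # stand-in for mu(X(p, A')), i.e. the density D(A')
        weights = [sum(1 for q in uncovered if math.dist(p, q) <= R)
                   for p in pts]
        p = rng.choices(pts, weights=weights)[0]     # step 3.1
        gcps.append(p)                               # step 3.2
        uncovered = {q for q in uncovered if math.dist(p, q) > R}  # step 3.3
    return gcps
```

Each iteration is quadratic in the number of uncovered cells, which is acceptable for coarse grids but would need spatial indexing at scale.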

2.1 Uniform Sampling in A

We construct a sampler for the distribution D(A) by first sampling uniformly within the region A, and then converting this to the required distribution. The first step is a simple application of rejection sampling, described below.

Algorithm 3. Uniform Region Sample

Input. The region, A.
Output. A point p ∈ A picked uniformly at random.

  1. set bbox = bounding box of A
  2. while yet to sample:
    1. sample p uniformly from bbox
    2. if p ∈ A: break from loop; else continue
  3. return p

The bounding box may be any superset of the region A in which uniform sampling is easy. For instance, we may use the minimum axis-parallel rectangle covering the AOI; to sample uniformly in this bounding box, we draw one random number between the left and right bounds and another between the top and bottom bounds. Note that the sample in step 1 of the loop is uniform in the bounding box, and thus is uniform in A when conditioned on falling inside the region, as checked in step 2.
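Algorithm 3 can be sketched with the region given as an indicator function plus a bounding box; the function name and the unit-disk example region are assumptions of the sketch.

```python
import random

def uniform_in_region(contains, bbox, seed=0):
    """Algorithm 3: rejection-sample a uniform point in a region, given
    its indicator function and an axis-parallel bounding box."""
    rng = random.Random(seed)
    xmin, ymin, xmax, ymax = bbox
    while True:                                   # step 2: loop until accepted
        p = (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
        if contains(p):                           # step 2.2: keep only hits
            return p

# example region: the unit disk, with bounding box [-1, 1] x [-1, 1]
in_disk = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0
sample = uniform_in_region(in_disk, (-1.0, -1.0, 1.0, 1.0))
```

The expected number of rejections is the ratio of the bounding-box area to μ(A), so a tight bounding box keeps the loop short.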

2.2 Sampling From D(A)

The sampler for D(A) is a more sophisticated application of rejection sampling. Denoting the area of A by μ(A), the uniform distribution over A has probability density function (pdf):

(3)  f(p) = 1 / μ(A).

On the other hand, the distribution D(A) has pdf:

(4)  g(p) = μ(X(p, A)) / ( μ(A) · C(A) ),

where C(A) is the normalization constant:

(5)  C(A) = ∫_A μ(X(p, A)) / μ(A) dp = E[ μ(X(p, A)) ],

where the expectation is over a uniformly random point p ∈ A.

Finally, let G ≥ sup_{p ∈ A} { g(p) / f(p) } be an upper bound on the maximum ratio of the two densities. Then, the following algorithm samples from D(A) using the uniform sampler constructed earlier.
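For completeness, the ratio being bounded can be written out from equations (3) and (4); since μ(X(p, A)) ≤ μ(B(p, R)) = πR², this also exhibits an ideal choice of G:

```latex
\frac{g(p)}{f(p)}
  = \frac{\mu(X(p, A)) / (\mu(A) \cdot C(A))}{1 / \mu(A)}
  = \frac{\mu(X(p, A))}{C(A)}
  \;\le\; \frac{\pi R^2}{C(A)} .
```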

Algorithm 4. Coverage Sampler

Input. The region, A.
Output. A point p sampled from D(A).

  1. while yet to sample:
    1. sample p uniformly in A
    2. accept and break with probability g(p) / ( G · f(p) )
  2. return p
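A discrete sketch of Algorithm 4 on the same cell representation as before. Note that for the ideal bound G = πR²/C(A), the acceptance ratio g(p)/(G·f(p)) simplifies to μ(X(p, A))/(πR²), so in this sketch the normalization constant cancels; the function name and grid are assumptions.

```python
import math
import random

def coverage_sample(cells, cell_area, R, seed=1):
    """Algorithm 4 on a discretized AOI: uniform proposal over the cells,
    accepted with probability mu(X(p, A)) / (pi R^2), which equals
    g(p) / (G f(p)) for the ideal bound G = pi R^2 / C(A)."""
    rng = random.Random(seed)
    while True:
        p = rng.choice(cells)                      # uniform proposal in A
        area_X = sum(1 for q in cells if math.dist(p, q) <= R) * cell_area
        if rng.random() < area_X / (math.pi * R ** 2):
            return p                               # accepted

grid = [(i / 10, j / 10) for i in range(11) for j in range(11)]
p = coverage_sample(grid, 1 / 121, 0.3)   # interior points are favoured
```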

2.3 Estimating C(A)

The coverage sampler algorithm described above requires estimating C(A) and, from it, the coefficient G. We estimate C(A) by evaluating the expectation in equation (5) at uniform random samples in A. To establish the confidence and accuracy of this estimate, note that μ(X(p)) for p ∈ A satisfies:

(6)  0 ≤ μ(X(p)) ≤ πR².

Then, using Hoeffding’s Inequality, the probability that the average of n samples, denoted C_n(A), does not fall within ±ϵ of C(A) is:

(7)  Pr{ |C_n(A) − C(A)| ≥ ϵ } ≤ 2·exp( −2nϵ² / (π²R⁴) ).

Thus, we get an additive ϵ-approximation with probability 1 − δ by picking n larger than (π²R⁴ / 2ϵ²)·log(2/δ). This yields the following Monte-Carlo algorithm to estimate C(A).

Algorithm 5. Estimate Ball Integral

Input. The region, A; error threshold ϵ; and confidence parameter δ.
Output. Estimate of C(A).

  1. compute n = ⌈ (π²R⁴ / 2ϵ²)·log(2/δ) ⌉.
  2. set sum = 0
  3. for i = 1 … n:
    1. sample p uniformly in A
    2. set sum = sum + μ(X(p, A))
  4. return sum / n
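Algorithm 5 can be sketched on the same discretized AOI; the Hoeffding sample size is computed exactly as in step 1, while μ(X(p, A)) is approximated by counting covered cells. The function name and grid are assumptions of the sketch.

```python
import math
import random

def estimate_C(cells, cell_area, R, eps, delta, seed=2):
    """Algorithm 5: Monte-Carlo estimate of C(A) = E[mu(X(p, A))]."""
    # step 1: Hoeffding sample size for +/- eps accuracy with prob. 1 - delta
    n = math.ceil(math.pi ** 2 * R ** 4 / (2 * eps ** 2) * math.log(2 / delta))
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        p = rng.choice(cells)                      # step 3.1: uniform in A
        # step 3.2: discrete stand-in for mu(X(p, A))
        total += sum(1 for q in cells if math.dist(p, q) <= R) * cell_area
    return total / n                               # step 4

grid = [(i / 10, j / 10) for i in range(11) for j in range(11)]
c = estimate_C(grid, 1 / 121, 0.3, eps=0.01, delta=0.1)
```

By equation (6) the estimate always lies in [0, πR²]; here the guarantee is that it is within ±0.01 of C(A) with probability at least 0.9.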

Finally, we get an upper bound on the max ratio:

(8)  G = πR² / ( C_n(A) − ϵ ),

where C_n is the estimate computed by the algorithm above, with ϵ its error threshold.

2.4 Tuning Confidence

Due to Algorithm 5, our overall placement algorithm is Monte-Carlo in nature. In particular, if we place k GCPs in total, the probability that the placement went by plan is at least 1 − kδ, by a union bound over the k estimates.

We obtain a bound on k as follows: each GCP picked by our placement is at distance at least R from every other GCP. Thus, the balls of radius R/2 around the GCPs are disjoint. Together, they may, at best, cover the AOI A padded by radius R/2, that is, A + B(0, R/2). Thus:

(9)  k ≤ 4·μ(A + B(0, R/2)) / (πR²).

Thus, to get a 90% confidence algorithm, we may set:

(10)  δ ≤ 1/(10k) ≤ πR² / ( 40·μ(A + B(0, R/2)) ).
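A worked numeric instance of equation (10), with hypothetical numbers: a square 1000 m × 1000 m AOI and R = 100 m. For a convex AOI the padded area is μ(A) + perimeter(A)·(R/2) + π(R/2)² (the Steiner formula).

```python
import math

def confidence_delta(area_padded, R):
    """Per-estimate confidence parameter from equation (10):
    delta <= pi R^2 / (40 * mu(A + B(0, R/2)))."""
    return math.pi * R ** 2 / (40 * area_padded)

# hypothetical example: 1000 m x 1000 m square AOI, R = 100 m
R = 100.0
padded = 1000.0 ** 2 + 4 * 1000.0 * (R / 2) + math.pi * (R / 2) ** 2
delta = confidence_delta(padded, R)   # any delta at or below this gives >= 90%
```

For these numbers δ comes out around 6.5 × 10⁻⁴, i.e. each run of Algorithm 5 must be quite reliable for the overall placement to succeed with 90% confidence.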

  1. A greedy strategy is known to yield poorer results on average. ↩︎