Our GCP placement problem is a version of Geometric Set
Cover. Determining the minimum number of GCPs (and their
positions) is computationally hard, and thus we settle for an
approximation algorithm. The algorithm begins with the initial
AOI, denoted A, and iteratively places GCPs while updating the
remaining region of the AOI to be covered.
The iterative procedure picks the position of the next GCP
by randomly sampling from a distribution that prioritizes
positions covering the maximum area. The coverage of a
point p on A, denoted X(p, A), is defined as:
(2) X(p, A) ≜ B(p, R) ∩ A.
The distribution, denoted D(A), samples a point p ∈ A with probability proportional to the area of its coverage, denoted
μ(X(p, A)).
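For concreteness, the coverage area μ(X(p, A)) can be computed with off-the-shelf polygon operations. The sketch below is our illustration rather than part of the algorithm's specification: it assumes the AOI is a shapely polygon, approximates the ball B(p, R) by a buffered point, and the helper name coverage_area is ours.

```python
# Illustration (our choice of library): coverage area mu(X(p, A)) via shapely.
from shapely.geometry import Point, Polygon

def coverage_area(p, aoi, R):
    """Area of X(p, A) = B(p, R) intersected with the AOI."""
    ball = Point(p).buffer(R)           # polygonal approximation of B(p, R)
    return ball.intersection(aoi).area  # mu(X(p, A))

# Example: unit-square AOI, coverage of its centre with R = 0.25.
aoi = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
print(coverage_area((0.5, 0.5), aoi, 0.25))  # approx. pi * 0.25**2
```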
We describe the placement algorithm assuming we can sample
from the distribution D(A). The sampling
sub-routine uses a few algorithmic techniques and is
described in the rest of the section.
Algorithm 2.GCP Placement
Input. The AOI, A; coverage radius R; and an
error threshold ϵ.
Output. Collection of GCPs, G.
- set G=∅
- set A′=A
- while μ(A′)>ϵ:
- sample p from D(A′)
- add p to G
- set A′=A′∖X(p,A′)
- return G
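As an illustration of the loop above, here is a minimal Python sketch of Algorithm 2, assuming the AOI is a shapely polygon; the coverage-aware sampler for D(A′) is passed in as a callable and is sketched later in the section, and the names place_gcps and sample_from_D are ours.

```python
# Sketch of Algorithm 2 (GCP placement); `sample_from_D` is a stand-in for the
# coverage sampler developed later in the section.
from shapely.geometry import Point

def place_gcps(aoi, R, eps, sample_from_D):
    """Greedily place GCPs until at most `eps` area of the AOI is uncovered."""
    gcps = []
    remaining = aoi                                # A' = A
    while remaining.area > eps:                    # while mu(A') > eps
        p = sample_from_D(remaining, R)            # p ~ D(A')
        gcps.append(p)
        ball = Point(p).buffer(R)                  # B(p, R)
        remaining = remaining.difference(ball)     # A' = A' \ X(p, A')
    return gcps
```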
We construct a sampler for the distribution D(A) by
first sampling uniformly within the region A, and then
converting this to the required distribution. The first step
is a simple application of rejection sampling,
and is described below.
Algorithm 3.Uniform Region Sample
Input. The region, A.
Output. A point p∈A picked uniformly at random.
- set bbox = bounding box of A
- while yet to sample:
- sample p uniformly from bbox
- if p∈A: break from loop; else continue
- return p
The bounding box may be any superset of the region A
from which uniform sampling is easy to obtain. For instance, we
may use the minimum axis-parallel rectangle covering the AOI;
to sample a point uniformly in this bounding box, we sample a
random number between the left and right bounds, and another
between the top and bottom bounds. Note that the sample drawn in
the first step of the loop is uniform in the bounding box, and
is thus uniform in A when conditioned on falling inside
the region, as checked in the second step.
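A minimal sketch of Algorithm 3 under the same assumption (shapely geometry); the helper name uniform_in_region is ours.

```python
# Sketch of Algorithm 3: uniform sample inside a region by rejection from its
# axis-parallel bounding box (shapely assumed for the geometry).
import random
from shapely.geometry import Point

def uniform_in_region(region, rng=random):
    """Return a point drawn uniformly at random from `region`."""
    minx, miny, maxx, maxy = region.bounds                       # bounding box of A
    while True:
        p = Point(rng.uniform(minx, maxx), rng.uniform(miny, maxy))
        if region.contains(p):                                   # keep only p in A
            return (p.x, p.y)
```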
The sampler for D(A) is a more sophisticated
application of rejection sampling. Denoting the area of A
by μ(A), the uniform distribution over A
has probability density function (pdf):
(3) f(p) = 1/μ(A).
On the other hand, the distribution D(A) has a pdf:
(4) g(p) = μ(X(p, A)) / (μ(A)·C(A)),
where C(A) is the normalization constant:
(5) C(A) = (1/μ(A)) ∫_A μ(X(p, A)) dp = E[μ(X(p, A))].
Finally, let G ≥ sup_{p∈A} g(p)/f(p) be
an upper bound on the maximum ratio of the two
densities. Then, the following algorithm samples from
D(A) using the uniform sampler constructed earlier.
Algorithm 4.Coverage Sampler
Input. The region, A.
Output. A point p sampled from D(A).
- while yet to sample:
- sample p uniformly in A
- accept and break with probability g(p) / (G·f(p))
- return p
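A sketch of Algorithm 4, reusing the uniform_in_region and coverage_area helpers from the earlier sketches; here C_est stands for the estimate of C(A) and G for the ratio bound of equation (8). Note that the acceptance probability g(p)/(G·f(p)) simplifies to μ(X(p, A))/(G·C(A)).

```python
# Sketch of Algorithm 4: rejection sampling that converts the uniform sampler
# into a sampler for D(A).  Reuses `uniform_in_region` and `coverage_area`
# from the sketches above; C_est estimates C(A), and G is the bound from (8).
import random

def sample_from_D(region, R, C_est, G, rng=random):
    """Sample p with density proportional to mu(X(p, A)) over `region`."""
    while True:
        p = uniform_in_region(region, rng)
        # acceptance probability g(p) / (G * f(p)) = mu(X(p, A)) / (G * C(A))
        if rng.random() < coverage_area(p, region, R) / (G * C_est):
            return p
```

Since Algorithm 2 samples from D(A′) for the shrinking region A′, the values C_est and G would presumably be recomputed for the current region before each call, e.g. by binding them with functools.partial before passing the sampler to the placement sketch above.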
The coverage sampler algorithm, described above, requires
estimating C(A) and, using it, the coefficient G. We
estimate C(A) by averaging the quantity inside the expectation
in equation (5), namely μ(X(p, A)), over uniform random samples
in A. To quantify the confidence and accuracy of this estimate,
note that μ(X(p, A)) for p ∈ A satisfies:
(6) 0 ≤ μ(X(p, A)) ≤ πR².
Then, using Hoeffding’s Inequality, the
probability of the average of n samples, denoted
C_n(A), not falling within ±ϵ of C(A)
is bounded as:
(7) Pr{ |C_n(A) − C(A)| ≥ ϵ } ≤ 2·exp( −2nϵ² / (π²R⁴) ).
Thus, we get an additive ϵ-approximation with
probability at least 1−δ by picking n larger than
(π²R⁴ / (2ϵ²))·log(2/δ). This yields the following
Monte-Carlo algorithm to estimate C(A).
Algorithm 5.Estimate Ball Integral
Input. The region, A, error threshold ϵ,
and confidence parameter δ.
Output. Estimate of C(A).
- Compute n = ⌈(π²R⁴ / (2ϵ²))·log(2/δ)⌉.
- set sum = 0
- for i = 1…n:
- sample p uniformly in A
- set sum = sum + μ(X(p,A))
- return sum / n
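A sketch of Algorithm 5 with the Hoeffding-based sample size, again reusing the helpers sketched earlier; estimate_ball_integral is our name for the routine.

```python
# Sketch of Algorithm 5: Monte-Carlo estimate of C(A) = E[mu(X(p, A))], with n
# chosen via the Hoeffding bound.  Reuses the helpers sketched earlier.
import math
import random

def estimate_ball_integral(region, R, eps, delta, rng=random):
    """Estimate C(A) to within +/- eps with probability at least 1 - delta."""
    n = math.ceil((math.pi ** 2 * R ** 4) / (2 * eps ** 2) * math.log(2 / delta))
    total = 0.0
    for _ in range(n):
        p = uniform_in_region(region, rng)
        total += coverage_area(p, region, R)   # mu(X(p, A)) at a uniform sample
    return total / n
```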
Finally, using the lower bound C_n(A) − ϵ on C(A), we get an
upper bound on the max ratio:
(8) G = πR² / (C_n(A) − ϵ),
where C_n(A) is the estimate from the algorithm above and
ϵ is the error threshold.
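Putting the pieces together, a small glue sketch (our composition, not spelled out in the text) computes G of equation (8) from the output of the estimation routine above.

```python
# Glue sketch: ratio bound G of equation (8) from the Monte-Carlo estimate of
# C(A).  Assumes C_n > eps so that the denominator stays positive.
import math

def ratio_bound(region, R, eps, delta):
    C_n = estimate_ball_integral(region, R, eps, delta)  # Algorithm 5
    return math.pi * R ** 2 / (C_n - eps), C_n           # G of equation (8)
```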
Due to Algorithm 5, our overall placement algorithm is
Monte-Carlo in nature. In particular, assuming we place k
GCPs in total, a union bound gives that the placement goes
according to plan with probability at least 1−kδ.
We obtain a bound on k as follows: each GCP picked by our
placement lies in the region left uncovered by the earlier GCPs,
and is therefore at distance at least R from every previously
placed GCP. Thus, the balls of radius R/2 around the GCPs are
pairwise disjoint. Together, these balls can, at best, cover the
AOI A padded by radius R/2, namely A + B(0, R/2). Thus:
(9) k ≤ 4·μ(A + B(0, R/2)) / (πR²).
Thus, to get a 90% confidence algorithm, it suffices to set:
(10) δ ≤ πR² / (40·μ(A + B(0, R/2))) ≤ 1/(10k).
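As a worked illustration of equations (9) and (10), the sketch below bounds k via the padded AOI and derives a δ giving roughly 90% overall confidence; shapely's buffer is used for the padding A + B(0, R/2), and the function name confidence_delta is ours.

```python
# Sketch of the confidence bookkeeping in equations (9)-(10): bound the number
# of GCPs k via the padded AOI, then pick delta so that k * delta <= 1/10.
import math
from shapely.geometry import Polygon

def confidence_delta(aoi, R):
    """Return (k_bound, delta) giving ~90% overall confidence."""
    padded_area = aoi.buffer(R / 2).area                  # mu(A + B(0, R/2))
    k_bound = 4 * padded_area / (math.pi * R ** 2)        # equation (9)
    delta = (math.pi * R ** 2) / (40 * padded_area)       # equation (10)
    return k_bound, delta

# Example on a unit-square AOI with R = 0.25.
print(confidence_delta(Polygon([(0, 0), (1, 0), (1, 1), (0, 1)]), 0.25))
```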