Group testing

An animation of the false coin problem being solved for 10 coins, where the goal is to find the single coin that is lighter than the others. Many fewer than 10 tests are needed since at each stage, at least half of the 'good' coins are eliminated.

In combinatorial mathematics, group testing refers to any procedure that breaks up the task of locating elements of a set which have certain properties into tests on groups of items, rather than on individual elements. A familiar example of this type of technique is the false coin problem of recreational mathematics. In this problem there are $n$ coins and one of them is false, weighing less than a real coin. The objective is to find the false coin, using a balance scale, in the fewest weighings. By repeatedly dividing the coins in half and comparing the two halves, the false coin can be found quickly, as it is always in the lighter half.[note 1]
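To make the halving strategy concrete, here is a short illustrative Python sketch (not part of the original article); the helper weigh stands in for the physical balance scale, and the handling of odd-sized piles follows note 1.

    def weigh(left, right):
        """Balance scale: -1 if the left pile is lighter, 1 if the right is, 0 if equal."""
        l, r = sum(left), sum(right)
        return (l > r) - (l < r)

    def find_light_coin(coins):
        """Return the index of the single light coin in about log2(n) weighings."""
        candidates = list(range(len(coins)))
        while len(candidates) > 1:
            # For an odd-sized pile, set one coin aside (see note 1).
            aside = candidates.pop() if len(candidates) % 2 == 1 else None
            half = len(candidates) // 2
            left, right = candidates[:half], candidates[half:]
            outcome = weigh([coins[i] for i in left], [coins[i] for i in right])
            if outcome == 0:           # the halves balance: the coin set aside is false
                return aside
            candidates = left if outcome == -1 else right
        return candidates[0]

    coins = [10] * 10
    coins[6] = 9                       # coin 6 is the light, false coin
    assert find_light_coin(coins) == 6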

Schemes for carrying out such group testing can be simple or complex, and the tests involved at each stage may be different. Schemes in which the tests for the next stage depend on the results of the previous stages are called adaptive procedures, while schemes designed so that all the tests are known beforehand are called non-adaptive procedures. The structure of the tests involved in a non-adaptive procedure is known as a pooling design.

Background

The field of (combinatorial) group testing was introduced by Robert Dorfman in 1943. The motivation arose during the Second World War when the United States Public Health Service and the Selective Service System embarked upon a large-scale project to weed out all syphilitic men called up for induction. Testing an individual for syphilis involves drawing a blood sample from them and then analysing the sample to determine the presence or absence of syphilis. However, at the time, performing this test was expensive, and testing every soldier individually would have been very costly and inefficient.

Supposing there are $n$ soldiers, this method of testing leads to $n$ separate tests. If a large fraction (say 70–75%) of the people are infected, then this method would be reasonable. Our goal, however, is to achieve effective testing in the more likely scenario where it does not make sense to test 100,000 people to get (say) 10 positives.

The feasibility of a more effective testing scheme hinges on the following property: we can combine blood samples and test the combined sample together to check if at least one soldier in the pool has syphilis. This is the central idea behind group testing. If one or more of the soldiers in the pool has syphilis, then a test is wasted (more tests need to be performed to find which soldier(s) it was). On the other hand, if no one in the pool has syphilis then many tests are saved.
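A small simulation makes the saving concrete. This Python sketch (illustrative only; the pool size of 100 is an arbitrary choice, not an optimized one) counts the tests used by naive individual testing versus the two-stage pooling scheme Dorfman proposed: test each pool once, then retest the members of positive pools individually.

    import random

    def dorfman_tests(population, pool_size):
        """Count tests for two-stage pooling: one test per pool, then
        individual retests for every member of a positive pool."""
        tests = 0
        for i in range(0, len(population), pool_size):
            pool = population[i:i + pool_size]
            tests += 1                 # one test on the combined sample
            if any(pool):              # positive pool: retest each member
                tests += len(pool)
        return tests

    random.seed(0)
    n, infected = 100_000, 10
    population = [False] * n
    for i in random.sample(range(n), infected):
        population[i] = True

    print("individual testing:", n, "tests")
    print("pooled testing:    ", dorfman_tests(population, pool_size=100), "tests")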

Modern interest in these testing schemes has been rekindled by the Human Genome Project.[1]

Types of group-testing algorithm

Group-testing algorithms can be described as adaptive or non-adaptive. An adaptive algorithm proceeds by performing a test, and then using the result (and all past results) to decide which next test to perform. On the other hand, in non-adaptive algorithms all tests are decided in advance. Although adaptive algorithms offer much more freedom in design, it is known that adaptive group-testing algorithms do not improve upon non-adaptive group-testing algorithms by more than a constant factor in the number of tests required to identify the set of defective items.[2][3] In addition to this, non-adaptive methods are often useful in practice because one knows in advance all the tests one needs to perform, allowing for the effective distribution of the testing process.

As well as adaptivity, all group testing algorithms are either combinatorial or probabilistic. A combinatorial algorithm finds all the defectives with certainty. In contrast, a probabilistic algorithm has some non-zero probability of making a mistake (i.e. deciding a defective item is non-defective or vice versa). It is known that zero-error algorithms require significantly more tests asymptotically (in the number of defective items) than algorithms that allow asymptotically small probabilities of error.[4]

Another class of algorithms are the so-called noisy algorithms. These deal with the situation where, with some non-zero probability $q$, the result of a group test is erroneous (e.g. comes out positive when the test contained no defectives). A noisy algorithm will always have a non-zero probability of making a mistake.

Formalization of the problem

We now formalize the group-testing problem abstractly.

Let the total number of items to be tested be $n$, and let $d$ be an upper bound on the number of defective items. The (unknown) information about which items are defective is described as a vector $x = (x_1, x_2, \ldots, x_n)$, where $x_i = 1$ if the $i$-th item is defective and $x_i = 0$ otherwise. The vector $x$ is called the input vector.

The Hamming weight of $x$ is defined as the number of $1$'s in $x$. Hence, $|x| \le d$, where $|x|$ is the Hamming weight. The vector $x$ is an implicit input, since we do not know the positions of the $1$'s. The only way to find out is to run the tests.

Formal notion of a test

A query/test $S$ is a subset of $\{1, 2, \ldots, n\}$. The answer to the query $S$ is defined as follows:

$$\mathrm{answer}(S) = \sum_{i \in S} x_i.$$

Note that the addition operation used by the summation is the logical OR, i.e.

$$\mathrm{answer}(S) = \bigvee_{i \in S} x_i = \max_{i \in S} x_i.$$
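For example, with eight items of which the second and fifth are defective, the answer to a query is just the logical OR of the tested bits. A minimal Python sketch of the definition above:

    # x is the hidden input vector: items 1 and 4 (0-indexed) are defective.
    x = [0, 1, 0, 0, 1, 0, 0, 0]

    def answer(S, x):
        """The test result: logical OR of x_i over the queried subset S."""
        return int(any(x[i] for i in S))

    assert answer({0, 2, 3}, x) == 0   # no defective item included
    assert answer({2, 4}, x) == 1      # item 4 is defective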

Goal

The goal of group testing is to compute or estimate $x$ while minimizing the number of tests required to do so. Here, there are no direct questions or answers: any piece of information about $x$ can only be obtained through the indirect queries defined above. Group testing is ultimately a question of combinatorial search, and the major issue with such problems is that the number of candidate solutions can grow exponentially in the size of the input.

Definitions and initial bounds

$t(d, n)$ is defined as the minimum number of non-adaptive tests that one would have to make to detect all of the defective items, given at most $d$ defectives among a total of $n$ items. Similarly, $t^a(d, n)$ denotes the minimum number of adaptive tests needed to detect all the defective items.

Consider the case when only one person in the group will test positive. If we tested in the naive way, even in the best case we would have to test at least the first person to find out whether he or she is infected; in the worst case, one might end up testing the entire group, with only the last person tested turning out to be the infected one. Hence, $1 \le t^a(d, n) \le n$. We also have $t^a(d, n) \le t(d, n)$, due to the fact that any non-adaptive test can be performed by an adaptive algorithm simply by running all of the tests in its first step. Adaptive tests can be more efficient than non-adaptive tests, since the tests can be changed as information is discovered.

In summary, $1 \le t^a(d, n) \le t(d, n) \le n$.

Mathematical representation of non-adaptive algorithms

A typical group-testing setup: a non-adaptive algorithm first chooses the matrix $M$, and is then given the result vector $y$. The problem is then to find an estimate for $x$.

Algorithms for non-adaptive group testing consist of two distinct phases. First, it is decided how many tests to perform and which items to include in each test. This is usually encoded in a $t \times n$ binary matrix $M$, where $t$ is the number of tests and $n$ is the total number of items. Each column of $M$ represents an item and each row represents a test, with a $1$ in the $(i, j)$-th entry indicating that the $i$-th test includes the $j$-th item and a $0$ indicating otherwise. In the second phase, often called the decoding step, the results of each group test are analysed to determine which items are likely to be defective.

As well as the vector $x$ (of length $n$) that describes the (unknown) defective set, we introduce the vector $y$, of length $t$, describing the results of each test. A $1$ in the $j$-th entry of $y$ indicates that the $j$-th test was positive (i.e. contained at least one defective). With these vectors the problem can be reframed as follows: first we choose some testing matrix $M$, after which we are given $y$. Then the problem is to analyse $y$ to find some estimate for $x$.

These notions can be expressed more formally as follows. Let $S_k \subseteq \{1, \ldots, n\}$ be a test and define the vector $v^{(k)} \in \{0, 1\}^n$ such that $v^{(k)}_j = 1$ if and only if $j \in S_k$, so the $k$-th test is described by $v^{(k)}$. Let $M$ be the $t \times n$ matrix with rows $v^{(1)}, \ldots, v^{(t)}$. This construction reflects the fact that non-adaptive testing with $t$ tests is completely described by the collection of subsets $\{S_1, \ldots, S_t\}$, where each $S_k \subseteq \{1, \ldots, n\}$.

Under this setup, $y$ is defined by the Boolean matrix multiplication relation $y = M \otimes x$, where multiplication is logical AND ($\wedge$) and addition is logical OR ($\vee$). Here, $y_i = 1$ if and only if $M_{ij}$ and $x_j$ are both $1$ for some $j$; that is, if and only if at least one defective item was included in the $i$-th test.
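A small numerical instance of this relation (a sketch; over $\{0,1\}$ the Boolean product $M \otimes x$ can be computed by thresholding the ordinary matrix product):

    import numpy as np

    M = np.array([[1, 1, 0, 0],        # test 1 pools items 1 and 2
                  [0, 1, 1, 0],        # test 2 pools items 2 and 3
                  [0, 0, 1, 1]])       # test 3 pools items 3 and 4
    x = np.array([0, 0, 1, 0])         # item 3 is the only defective

    y = (M @ x > 0).astype(int)        # ordinary product + threshold = Boolean OR
    print(y)                           # [0 1 1]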

Generalized binary splitting algorithm

The generalized binary splitting algorithm is an essentially-optimal adaptive group-testing algorithm that proceeds as follows:[5][6]

  1. If $n \le 2d - 2$, test the $n$ items individually. Otherwise, set $l = n - d + 1$ and $\alpha = \lfloor \log_2(l / d) \rfloor$.
  2. Test a group of size $2^\alpha$. If the outcome is negative, every item in the group is declared to be non-defective; set $n := n - 2^\alpha$ and go to step 1. Otherwise, use a binary search to identify one defective and an unspecified number, called $x$, of non-defective items; set $n := n - 1 - x$ and $d := d - 1$. Go to step 1.

The generalized binary splitting algorithm requires no more than about $\log_2 \binom{n}{d} + d$ tests, within $O(d)$ of the information-theoretic lower bound of $\log_2 \binom{n}{d}$ (the precise expression in Hwang's analysis depends on $\alpha$ and $d$).
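The two steps above translate directly into code. The following Python sketch (illustrative only) assumes $d$ is the exact number of defectives and that test(S) is an oracle returning True iff the group S contains at least one defective; the binary search of step 2 also records the non-defective items it clears along the way.

    from math import floor, log2

    def binary_search_one(test, group):
        """Locate one defective in a group that tested positive.
        Returns (defective, cleared), where cleared holds items proven non-defective."""
        cleared = []
        while len(group) > 1:
            half = group[:len(group) // 2]
            if test(half):
                group = half                    # a defective lies in the tested half
            else:
                cleared += half                 # the tested half contains no defectives
                group = group[len(group) // 2:]
        return group[0], cleared

    def generalized_binary_splitting(test, items, d):
        """Sketch of the generalized binary splitting algorithm."""
        items, defectives = list(items), []
        while items and d > 0:                  # d is exact: once it hits 0, the rest are good
            if len(items) <= 2 * d - 2:
                return defectives + [i for i in items if test([i])]
            alpha = floor(log2((len(items) - d + 1) / d))
            group = items[:2 ** alpha]
            if not test(group):
                items = items[2 ** alpha:]      # the whole group is non-defective
            else:
                found, cleared = binary_search_one(test, group)
                gone = set(cleared) | {found}
                items = [i for i in items if i not in gone]
                defectives.append(found)
                d -= 1
        return defectives

    defective_set = {3, 41}
    test = lambda S: any(i in defective_set for i in S)
    print(sorted(generalized_binary_splitting(test, range(100), d=2)))   # [3, 41]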

Non-adaptive algorithms

Non-adaptive group-testing algorithms tend to assume that the number of defectives, or at least a good upper bound on it, is known.[7] We will denote this quantity by $d$. If no bounds are known, there are non-adaptive algorithms with low query complexity that can help estimate $d$.[8]

Combinatorial Orthogonal Matching Pursuit (COMP)

An illustration of the COMP algorithm. COMP identifies item a as being defective and item b as being non-defective. However, it incorrectly labels c as a defective, since it is “hidden” by defective items in every test in which it appears.

Combinatorial Orthogonal Matching Pursuit, or COMP, is a simple non-adaptive group-testing algorithm that forms the basis for the more complicated algorithms that follow in this section.

First, each entry of the testing matrix $M$ is chosen i.i.d. to be $1$ with probability $1/d$ and $0$ otherwise.

The decoding step proceeds column-wise (i.e. by item). If every test in which an item appears is positive, then the item is declared defective; otherwise the item is assumed to be non-defective. Equivalently, if an item appears in any test whose outcome is negative, the item is declared non-defective; otherwise it is assumed to be defective. Of particular note here is that this algorithm never creates false negatives, though a false positive occurs when all locations with ones in the $j$-th column of $M$ (corresponding to a non-defective item $j$) are “hidden” by the ones of columns corresponding to defective items.

The COMP algorithm requires no more than $ed(1 + \delta)\ln n$ tests to have an error probability less than or equal to $n^{-\delta}$.[9]
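A compact simulation of the whole COMP pipeline (a sketch; the sizes n, d and t below are arbitrary illustrative choices, not optimized ones):

    import numpy as np

    rng = np.random.default_rng(1)
    n, d, t = 200, 5, 120
    defective = rng.choice(n, size=d, replace=False)
    x = np.zeros(n, dtype=int)
    x[defective] = 1

    M = (rng.random((t, n)) < 1 / d).astype(int)   # each entry is 1 with probability 1/d
    y = (M @ x > 0).astype(int)                    # noiseless test results (Boolean OR)

    # COMP decoding: an item is non-defective iff it appears in some negative test.
    declared = np.ones(n, dtype=int)
    for i in range(t):
        if y[i] == 0:
            declared[M[i] == 1] = 0

    print("false negatives:", np.sum((declared == 0) & (x == 1)))   # always 0 for COMP
    print("false positives:", np.sum((declared == 1) & (x == 0)))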

In the noisy case, we relax the requirement in the original COMP algorithm that the set of locations of ones in any column of $M$ corresponding to a defective item be entirely contained in the set of locations of ones in the result vector $y$. Instead, we allow for a certain number of “mismatches”; this number of mismatches depends on both the number of ones in each column and the noise parameter $q$. This noisy COMP algorithm still requires a number of tests of order $d \log n$, with a constant factor that grows as the noise parameter $q$ increases, to achieve an error probability at most $n^{-\delta}$.[10]

Definite Defectives (DD)

The definite defectives method is an extension of the COMP algorithm that attempts to remove any false positives. Performance guarantees for DD have been shown to strictly exceed those of COMP.[11]

The decoding step uses a useful property of the COMP algorithm: that every item that COMP declares non-defective is certainly non-defective (that is, there are no false negatives). It proceeds as follows:

  1. We first run the COMP algorithm, and remove any non-defectives that it detects. All remaining items are now “possibly defective”.
  2. Next the algorithm looks at all the positive tests. If an item appears as the only “possible defective” in a test, then it must be defective, so the algorithm declares it to be defective.
  3. All other items are assumed to be non-defective. The justification for this last step comes from the assumption that the number of defectives is much smaller than the total number of items.

Note that steps 1 and 2 never make a mistake, so the algorithm can only make a mistake if it declares a defective item to be non-defective. Thus the DD algorithm can only create false negatives.
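Continuing the COMP simulation sketched above (where declared marks the surviving “possible defectives”), the extra DD step is only a few lines:

    # DD decoding, reusing M, y, t and declared from the COMP sketch.
    definite = np.zeros(n, dtype=int)
    for i in range(t):
        if y[i] == 1:
            possibles = np.flatnonzero((M[i] == 1) & (declared == 1))
            if len(possibles) == 1:         # the only possible defective in a positive test
                definite[possibles[0]] = 1  # ...must genuinely be defective
    # Everything not marked definite is assumed non-defective (DD's final step).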

Sequential COMP (SCOMP)

SCOMP is an algorithm that makes use of the fact that DD makes no mistakes until the last step, where it is assumed that the remaining items are non-defective. Let the set of declared defectives be $K$. We say that a positive test is explained by $K$ if it contains at least one item in $K$. The key observation behind SCOMP is that the set of defectives found by DD may not explain every positive test, and that every unexplained test must contain a hidden defective.

The algorithm proceeds as follows:

  1. First carry out steps 1 and 2 of the DD algorithm to obtain $K$, an initial estimate for the set of defectives.
  2. If $K$ explains every positive test, terminate the algorithm: $K$ is our final estimate for the set of defectives.
  3. If there are any unexplained tests, find the “possible defective” that appears in the largest number of unexplained tests, and declare it to be defective (that is, add it to the set $K$). Go to step 2.

In simulations, SCOMP has been shown to perform close to optimally.[12]
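Continuing the DD sketch above, SCOMP's greedy loop can be written as follows (a sketch; ties in step 3 are broken arbitrarily here):

    # K starts as DD's estimate; grow it until every positive test is explained.
    K = set(np.flatnonzero(definite))
    unexplained = [i for i in range(t)
                   if y[i] == 1 and not K.intersection(np.flatnonzero(M[i]))]
    while unexplained:
        candidates = [j for j in np.flatnonzero(declared == 1) if j not in K]
        # pick the possible defective appearing in the most unexplained tests
        best = max(candidates, key=lambda j: sum(M[i][j] for i in unexplained))
        K.add(best)
        unexplained = [i for i in unexplained if M[i][best] == 0]
    print("SCOMP estimate:", sorted(K))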

Bounds

In the combinatorial setup, there are a number of upper and lower bounds on $t(d, n)$ (and $t^a(d, n)$), the minimum number of tests required to detect all defectives.

Lower bound on $t^a(d, n)$

Fix a valid group testing scheme with $t$ tests. Now, for two distinct input vectors $x$ and $x'$ with $|x|, |x'| \le d$, the resulting result vectors must differ, i.e. $y(x) \ne y(x')$, where $y(x)$ denotes the result vector for input vector $x$. This is because two valid inputs can never give the same result: if that ever happened, the scheme could not distinguish $x$ from $x'$, and we would always have an error in finding one of them. This gives us that the total number of distinct results is at least the number of possible input vectors, which is the volume of a Hamming ball of radius $d$ centred at $0^n$, i.e. $\sum_{i=0}^{d} \binom{n}{i} \ge \binom{n}{d} \ge \left(\frac{n}{d}\right)^d$. However, with $t$ binary test results, the total number of possible distinct result vectors is $2^t$. Hence, $2^t \ge \left(\frac{n}{d}\right)^d$. Taking the logarithm on both sides gives $t \ge d\log_2\left(\frac{n}{d}\right)$.

Now, $t(d, n) \ge t^a(d, n)$, so the same bound applies to non-adaptive schemes. Therefore, we will end up having to perform a minimum of $d\log_2\left(\frac{n}{d}\right)$ tests.

Thus we have proved that $t(d, n) \ge t^a(d, n) \ge d\log_2\left(\frac{n}{d}\right)$.
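For instance, with the numbers from the background section, $n = 100{,}000$ items and $d = 10$ defectives, the bound evaluates to

$$t \;\ge\; 10\,\log_2\!\left(\frac{100{,}000}{10}\right) \;=\; 10\log_2 10^{4} \;\approx\; 133,$$

so about 133 tests are unavoidable, while naive individual testing would use 100,000.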

Upper bound on $t^a(d, n)$

$$t^a(d, n) = O\left(d\log_2 n\right).$$

Since we know that the upper bound on the number of defectives is $d$, we run a binary search at most $d$ times, or until there are no more defectives to be found. To simplify the problem, we first give a testing scheme that uses $O(\log_2 n)$ adaptive tests to figure out an index $i$ such that $x_i = 1$. This subproblem is solved by splitting the current candidate set in two halves, querying one half to find out which half contains a $1$, and then proceeding recursively on the half known to contain a $1$ until a single position remains. This takes $\log_2 n$ tests, or $\log_2 n + 1$ if the first query is performed on the whole set. Once a $1$ is found, the search is then repeated after removing the $i$-th coordinate. This can be done at most $d$ times, which justifies the total of $O(d\log_2 n)$ tests. For a full proof and an algorithm for the problem refer to: CSE545 at the University at Buffalo

Upper bound on $t(1, n)$

This upper bound is for the special case where $d = 1$, i.e. there is at most one defective. In this case, the Boolean matrix multiplication simplifies, and the result vector $y$ is the binary representation of the index of the defective item, with $y_i$ being the outcome of test $i$. This gives the upper bound $t(1, n) \le \lceil \log_2 n \rceil$, while the lower bound of the previous section evaluates to $d\log_2(n/d) = \log_2 n$. Note that decoding becomes trivial, because the binary representation of the index gives the location of the defective item directly. The group-testing matrix here is just the parity-check matrix of a Hamming code.

Thus, as the upper and lower bounds coincide (both are $\log_2 n$, up to rounding), we have a tight bound for $t(d, n)$ when $d = 1$. Such tight bounds are not known for general $d$.
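The construction for $d = 1$ is easy to state in code: test $i$ pools exactly the items whose index has a $1$ in its $i$-th binary digit, so the test outcomes spell out the index of the defective item. A minimal sketch for $n = 32$:

    from math import ceil, log2

    n = 32                                    # items are indexed 0..n-1
    t = ceil(log2(n))                         # number of tests
    defective = 13                            # the hidden single defective

    # Test i pools every item whose i-th binary digit is 1.
    tests = [[j for j in range(n) if (j >> i) & 1] for i in range(t)]
    y = [int(defective in S) for S in tests]  # outcome of each test
    recovered = sum(bit << i for i, bit in enumerate(y))
    assert recovered == defective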

Upper bounds for non-adaptive group testing

For non-adaptive group testing upper bounds we shift focus toward disjunct matrices, which underlie many of the known bounds because of their nice properties. It has been shown that $t(d, n) = \Omega\left(\frac{d^2}{\log d}\log n\right)$, and the best current upper bound, achieved by a strongly explicit construction, is $t(d, n) = O\left(d^2 \log n\right)$. So the smallest known upper bound and the largest known lower bound are only off by a factor of $O(\log d)$, which is fairly small.
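Recall that a binary matrix is called $d$-disjunct if no column is contained in the Boolean sum (OR) of any $d$ other columns; a $d$-disjunct testing matrix allows exact identification of up to $d$ defectives with COMP-style decoding. A brute-force check of this property (exponential in $n$, so for small examples only; a sketch):

    from itertools import combinations
    import numpy as np

    def is_d_disjunct(M, d):
        """True iff no column of M is covered by the OR of any d other columns."""
        t, n = M.shape
        for j in range(n):
            others = [k for k in range(n) if k != j]
            for chosen in combinations(others, d):
                union = M[:, list(chosen)].max(axis=1)  # Boolean OR of chosen columns
                if np.all(M[:, j] <= union):            # column j is "hidden"
                    return False
        return True

    # Columns are the six distinct weight-2 vectors of length 4: pairwise
    # incomparable, hence 1-disjunct, but coverable in pairs, so not 2-disjunct.
    M = np.array([[1, 1, 1, 0, 0, 0],
                  [1, 0, 0, 1, 1, 0],
                  [0, 1, 0, 1, 0, 1],
                  [0, 0, 1, 0, 1, 1]])
    print(is_d_disjunct(M, 1), is_d_disjunct(M, 2))     # True False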

Separately, we can see that the current known lower bound for non-adaptive testing, $t(d, n) = \Omega\left(\frac{d^2}{\log d}\log n\right)$, is already a factor of about $\frac{d}{\log d}$ larger than the upper bound $t^a(d, n) = O(d\log_2 n)$ for adaptive testing.

Notes

  1. A bit more precisely if there are an odd number of coins to be weighed, pick one to put aside and divide the rest into two equal piles. If the two piles have equal weight, the bad coin is the one put aside, otherwise the one put aside was good and no longer has to be tested.
