
Introduction

One of the big challenges with Prime Sieves is double counting: crossing off the same composite more than once. The largest contributing factor is that primes occur at sporadic intervals that resist a simple combinatorial description, which makes the elimination of composites irregular and thus prone to double counts. This tech note presents an alternative sieve which leverages the periodicity with which Twin Prime candidates are eliminated to reduce the amount of double counting.

If you want to skip ahead, a functional implementation of this algorithm is available in Haskell and can be found here on BitBucket.

The structure

Assuming the reader is familiar with the concept of Twin Primes, we begin by considering an appropriate data structure to represent Twin Prime candidates. By eliminating the congruence classes modulo 2 and 3 we create a sequence of potential Twin Prime pairs. These begin with (5,7) and increase by (6,6) at each step, i.e. (5,7), (11,13), (17,19), (23,25), etc.

In an imperative language we could create two bit-packed arrays: if the nth index in the left array is True then 5+6n is prime, and similarly 7+6n for the right array. In the functional implementation we instead recursively build a list of optional pairs (Maybe Integer, Maybe Integer). We move through the list of pairs leaving tombstones (Nothing) at each eliminated candidate, which are picked up as we move through the infinite list. Thanks to Haskell's lazy evaluation the tombstones are created only as the list is evaluated.
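As a minimal sketch of that representation (the type and names here are illustrative assumptions, not necessarily those used in the repository):

type Candidate = (Maybe Integer, Maybe Integer)

-- The untouched stream of Twin Prime candidates: (5,7), (11,13), (17,19), ...
-- An eliminated member is later replaced by Nothing, i.e. a tombstone.
candidates :: [Candidate]
candidates = [ (Just (5 + 6 * n), Just (7 + 6 * n)) | n <- [0 ..] ]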

Eliminating composites

For any prime p we can create a list of p-length sequences of Twin Prime candidates which start after p². E.g. at p=5:


(29,31) (35,37) (41,43) (47,49) (53,55)
(59,61) (65,67) (71,73) (77,79) (83,85)

For every prime p, each p-length sequence eliminates precisely two Twin Prime candidates. Starting at p², a right-hand candidate is eliminated once per p-length sequence. Starting at either p²+2p or p²+4p, a left-hand candidate is eliminated once per sequence: primes that were themselves left-hand candidates make their first left-hand elimination at p²+2p, while right-hand primes make it at p²+4p. This follows from modular arithmetic, because left-hand candidates are congruent to 2 modulo 3 while right-hand candidates are congruent to 1 modulo 3.
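As a rough sketch of this rule (the function name is my own, not taken from the repository), the left-hand offset can be read straight off p modulo 3:

-- Offset from p^2 (in value) to the first left-hand elimination.
-- Left-hand primes are congruent to 2 (mod 3); right-hand primes to 1 (mod 3).
leftOffset :: Integer -> Integer
leftOffset p
  | p `mod` 3 == 2 = 2 * p   -- p was a left-hand candidate
  | otherwise      = 4 * p   -- p was a right-hand candidate

For example, leftOffset 5 = 10 (25 + 10 = 35 = 5·7) and leftOffset 7 = 28 (49 + 28 = 77 = 7·11).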

The sieve queue

From here we can begin to sieve our Twin Prime candidates to create our collection of primes. As we find primes we put them into a sieve queue (one possible representation is sketched below). Each prime in the queue is activated once we reach p² in our collection. As we get to a particular candidate pair we check whether either member has been eliminated (using either our pair of arrays or our list of pairs with tombstones). Every left-hand candidate we find is automatically prime and need not be evaluated further, since p² is always congruent to 1 modulo 6 and so can only ever coincide with a right-hand candidate.
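One possible shape for the queue, sketched here as an assumption on my part rather than the repository's actual structure, is a map keyed by each prime's activation point p²:

import qualified Data.Map.Strict as Map

-- Each prime p waits in the queue until the walk reaches its activation
-- point p^2, at which point it is popped and its eliminations begin.
type SieveQueue = Map.Map Integer Integer   -- key: p^2, value: p

enqueue :: Integer -> SieveQueue -> SieveQueue
enqueue p = Map.insert (p * p) p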

If we find a viable right-hand candidate we first check whether it is less than p², where p is the prime at the front of the queue. If it is, we simply add it to our collection of primes and also to our sieve queue. If it is equal to p² then this is the next location of a tombstone: we pop p from the queue and apply its sieve. Starting at p², we place a tombstone at the right-hand candidate of every pth pair.

Then, depending on whether p is congruent to 2 or 1 modulo 3, we eliminate the left-hand candidate of every pth pair starting at p²+2p or p²+4p respectively. This calculation tells us which number to eliminate but not which index. To avoid comparing against every left-hand candidate in the p-length interval we instead calculate the index offset using (((m * p) + 2) / 6), where m represents the "magic number" 2 or 4.


Let i be the index of the Twin Prime candidate pair containing p²
Let j be the index of the first left-hand elimination, where
j = i + (((m * p) + 2) / 6)

Place tombstones at the right-hand candidate of every index i+kp, and at the left-hand candidate of every index j+kp, for k = 0, 1, 2, …
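Putting the index arithmetic together, here is a minimal sketch (the function name and return shape are mine, and the index lists are infinite, to be cut off wherever the sieve stops):

-- i is the candidate index of p^2 (so 7 + 6*i == p^2, a right-hand candidate);
-- j is the index of the first left-hand elimination.
tombstoneIndices :: Integer -> ([Integer], [Integer])
tombstoneIndices p = (rights, lefts)
  where
    i      = (p * p - 7) `div` 6
    m      = if p `mod` 3 == 2 then 2 else 4   -- the "magic number"
    j      = i + (m * p + 2) `div` 6
    rights = [ i + k * p | k <- [0 ..] ]       -- right-hand tombstones
    lefts  = [ j + k * p | k <- [0 ..] ]       -- left-hand tombstones

For p = 5 this gives right-hand tombstones at indices 3, 8, 13, … (the pairs containing 25, 55, 85, …) and left-hand tombstones at indices 5, 10, 15, … (35, 65, 95, …).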

How it compares

The Haskell primes library has a clever purely functional implementation of the Sieve of Eratosthenes. The twin sieve described above performs slightly better up to about the 1000th prime. From that point onwards the sieve in the primes module begins to outpace this implementation. Finding the 100,000th prime is about 5 times faster in the primes module, while finding the 1,000,000th prime is about 20 times faster.

In the BitBucket repo there is also an imperative implementation in Haskell where the mutability is managed through the ST monad. Calculating the 10 millionth prime using the ST implementation is marginally faster than using the primes module. But the imperative implementation uses an Array and does not know in advance how large the nth prime will be. Instead it sets an upper bound of (n * log n) + (n * (log ((log n) - 0.9385))). This upper bound tends to get worse the larger the prime, so the imperative version actually crosses off substantially more candidates than it needs to compared with the purely functional version, and still comes in faster.

However, this is an unfair comparison with the implementation in the primes module. It is possible that an imperative implementation of the Sieve of Eratosthenes is actually faster than the Twin Sieve described above. While I'm a big fan of purely functional algorithms, there is something to be said for the efficiency of directly manipulating bits.

The purpose of this algorithm wasn't necessarily to design a more efficient Prime Sieve, but rather to see if this approach was viable. As a result no proper benchmarks were conducted. However, below are rough results for finding the 10 millionth prime using either the ST version of the Twin Sieve or the primes module.


$ time main primes 10000000
179424673

real 0m33.036s
user 0m0.000s
sys 0m0.062s

$ time main twinst 10000000
179424673

real 0m24.735s
user 0m0.015s
sys 0m0.063s

Have you got a comment, criticism or suggestion? Contact Rick.