
The Controversy

The controversy arises from a variety of arguments. Here are some.

(1) The text of the Torah we have today is surely corrupt. So if there were any encoding, the encoding would have been destroyed by copying errors that inadvertently crept in over the centuries.

(2) The ELSs and their patterns that can be found in the Torah text can be found in any long enough text.

(3) The technical methodology employed in Torah code experiments is flawed.

(4) Any apparent success in a Torah code experiment is due to behind-the-scenes manipulation of the data gathering and preparation of the experiment.

(5) The popular Torah code book by Michael Drosnin shows tables that are not based on experimental protocol, improperly uses Torah code tables to make predictions, and is indeed somewhat sensationalist in character. Drosnin's work is clearly not scientific, and therefore all Torah code work is not scientific.

(6) The entropy of the Torah skip texts does not indicate that they carry any more information than a monkey text. Therefore, there cannot be any encoding.

An equidistant letter sequence, called an ELS for short, is a sequence of equally spaced letters in the text, not counting spaces and punctuation marks. The sequence of letter positions forms an arithmetic progression.
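
The following is a minimal sketch in Python of how an ELS could be extracted from a text. The function name, its parameters, and the English sample sentence are illustrative assumptions, not part of any Torah code methodology; the sketch only shows what it means for the letter positions to form an arithmetic progression.

    def extract_els(text, start, skip, length):
        """Return the ELS beginning at letter position `start` (0-based,
        counted over letters only), taking every `skip`-th letter, for
        `length` letters.  Spaces and punctuation are ignored."""
        letters = [c for c in text if c.isalpha()]
        # The letter positions start, start + skip, start + 2*skip, ...
        # form an arithmetic progression.
        positions = [start + n * skip for n in range(length)]
        if positions[-1] >= len(letters):
            raise ValueError("ELS runs past the end of the text")
        return "".join(letters[p] for p in positions)

    # Illustrative call: letters at positions 0, 5, 10, 15 of the stripped text.
    sample = "In the beginning God created the heaven and the earth."
    print(extract_els(sample, start=0, skip=5, length=4))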

Entropy is a measure of the average uncertainty about the value of a random variable before it is observed. Entropy is measured in bits.

An entropy of H bits means that, on the average, an H-bit message is required to convey the value of the as yet unobserved random variable. For example, an H-bit message specifies a choice of 1 out of 2^H possibilities.
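
As a small numeric illustration (the 8-sided die is an assumed example, not taken from the original discussion), a uniform choice among 8 equally likely outcomes has entropy log2 8 = 3 bits, and a 3-bit message singles out 1 of 2^3 = 8 possibilities:

    import math

    num_outcomes = 8                     # assumed example: a fair 8-sided die
    H = math.log2(num_outcomes)          # entropy of a uniform choice: 3.0 bits
    print(H, 2 ** H)                     # a 3-bit message picks 1 of 2**3 = 8 outcomes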

One way to explain the meaning of the H-bit message is by the following game played between person A and person B. Person A samples at random a value v of the random variable X. Person B knows the probability of random variable X taking each of its values, but does not know the value v that person A has sampled. If person B were to use his knowledge of the probability function of random variable X in the most effective way possible, it would take person B, on the average, on the order of 2^H guesses to correctly guess the value v that person A had sampled.
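
Below is a minimal simulation sketch of this game, under the assumption that person B guesses values one at a time in order of decreasing probability; the distribution and trial count are made-up illustrations. For a uniform distribution the empirical average comes out to roughly half of 2^H, which is the sense in which the number of guesses is of the order of 2^H.

    import random

    def average_guesses(probabilities, trials=100_000):
        """Person A samples a value; person B guesses values in order of
        decreasing probability until correct.  Returns the average number
        of guesses over many plays of the game."""
        values = list(range(len(probabilities)))
        # Person B's fixed guessing order: most probable value first.
        order = sorted(values, key=lambda v: probabilities[v], reverse=True)
        total = 0
        for _ in range(trials):
            v = random.choices(values, weights=probabilities, k=1)[0]   # A's sample
            total += order.index(v) + 1                                 # B's guesses
        return total / trials

    # Assumed example: a uniform distribution over 16 values, so H = 4 bits.
    print(average_guesses([1 / 16] * 16))    # roughly (16 + 1) / 2 = 8.5 guesses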

If P denotes the probability function of a discrete random variable X that takes the possible values x_1, ..., x_N, and H(X) denotes the entropy of X, then the entropy of X is minus the expected value of the base-2 logarithm of P(X):

H(X) = -E[\log_2 P(X)] = -\sum_{n=1}^{N} P(x_n) \log_2 P(x_n)
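
A minimal Python sketch of this formula, assuming the distribution is supplied as a list of probabilities that sum to 1 (the function name is chosen for illustration only):

    import math

    def entropy(probabilities):
        """H(X) = -sum over n of P(x_n) * log2 P(x_n).
        Terms with P(x_n) = 0 contribute nothing to the sum."""
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    print(entropy([0.5, 0.5]))       # a fair coin: 1.0 bit
    print(entropy([1 / 8] * 8))      # a fair 8-sided die: 3.0 bits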