The argument has two premises. The first premise
is that the Torah text we have today has suffered transmission errors. The second premise
is that if there were transmission errors, any ELS code would be destroyed.
To see why the argument might be compelling, one may read the biblical scholars who have written essays on the variants found in Torah scrolls. See the essays by Professor Jeffrey Tigay, Professor Menachem Cohen, and Professor Sternberg, as well as The Bible Code Myth, an e-book by Dr. Michael Heiser. Professor Rips made an explicit reply to Professor Sternberg.
Also read Rabbi Dovid Lichtman's essay, which estimates that the method used by the Masoretes to reconcile variants produced a text that differs by no more than 12 letters from the first text of Moshe.
What is interesting about the copying-error argument is that whether or not there were transmission errors in the text we use today is irrelevant to the statistics. The experiments that succeed with small p-values succeed with the current text. From the results of a successful experiment one cannot statistically conclude anything other than that something unusual has happened. Thus, there cannot be any statistical inference regarding the accuracy of the current text. Therefore, there is no possibility of an inference that would be inconsistent with the premise that there were transmission errors.
There is also a viewpoint that the intelligence that created the Torah text, transcending time, did so with an encoding designed to become detectable only after the transmission errors occurred. This viewpoint is excluded by the reasoning of the transmission-error argument: that if there were transmission errors, any ELS code would be destroyed. Perhaps the opposite happens. If not, one might perhaps grant the argument for longer-skip ELSs but not for shorter-skip ELSs. Ironically, it is the shorter-skip ELSs, in the sense of low-rank skips, that are hypothesized to be involved in the encoding.
If one assumes that the text was created with an encoding and that the estimated number of transmission errors is high enough to destroy the code, then for the argument to have force it needs a quantification: a measure of how the p-value changes as a function of the number of errors in the text and of the test statistic computed for estimating the p-value. If the estimated number of transmission errors, substituted into the result of such a parametric experiment, turned the small p-value into a large, insignificant one, the argument would be more compelling. But such a parametric experiment is something the biblical scholars have not done and have no expertise in doing. So one wonders why they would be making such an argument to begin with.
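The missing quantification can at least be sketched as a toy Monte Carlo experiment. The code below is illustrative only: the function names (`count_els`, `perturb`, `survival_rate`) are my own, the raw ELS count is a deliberately simplified stand-in for the actual test statistics used in the published experiments, and random single-letter substitutions are a crude model of transmission errors. The point is only to show the shape such a parametric study would take: measure how a text-based statistic behaves as the assumed number of errors grows.

```python
import random

def count_els(text, term, max_skip):
    """Count appearances of `term` as an equidistant letter sequence
    (ELS) in `text`, over skips 1..max_skip (a simplified statistic)."""
    n, m = len(text), len(term)
    total = 0
    for skip in range(1, max_skip + 1):
        for start in range(n - (m - 1) * skip):
            if all(text[start + i * skip] == term[i] for i in range(m)):
                total += 1
    return total

def perturb(text, num_errors, alphabet, rng):
    """Model transmission errors as `num_errors` random single-letter
    substitutions at distinct positions."""
    chars = list(text)
    for pos in rng.sample(range(len(chars)), num_errors):
        chars[pos] = rng.choice([c for c in alphabet if c != chars[pos]])
    return "".join(chars)

def survival_rate(text, term, max_skip, num_errors, trials, rng):
    """Fraction of randomly perturbed copies of `text` in which the
    ELS count of `term` is at least its count in the unperturbed text."""
    baseline = count_els(text, term, max_skip)
    alphabet = sorted(set(text))
    hits = sum(
        count_els(perturb(text, num_errors, alphabet, rng),
                  term, max_skip) >= baseline
        for _ in range(trials)
    )
    return hits / trials
```

A usage sketch: generate a text, then tabulate `survival_rate` for increasing error counts to see how quickly the statistic degrades.

```python
rng = random.Random(1)
text = "".join(rng.choice("ab") for _ in range(400))
for k in (0, 5, 50):
    print(k, survival_rate(text, "abba", 10, k, 200, rng))
```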