At the core of Kryptonite's code is the substitution of numbers, represented by symbols, for letters. First, each letter of the alphabet was assigned a number: we chose the letter "M" at random and numbered the letters sequentially starting from "M", skipping every other letter, until every letter had a specific number. Then, a set of symbols was constructed to represent those numbers:
1 = *
3 = ?
6 = !
11 = @
20 = +
Each letter's number was represented by the fewest possible symbols. The complete alphabet is as follows:
A = 08 = !**
B = 22 = +**
C = 09 = !?
D = 23 = +?
E = 10 = !?*
F = 24 = +?*
G = 11 = @
H = 25 = +?**
I = 12 = @*
J = 26 = +!
K = 13 = @**
L = 14 = @?
M = 01 = *
N = 15 = @?*
O = 02 = **
P = 16 = @?**
Q = 03 = ?
R = 17 = @!
S = 04 = ?*
T = 18 = @!*
U = 05 = ?**
V = 19 = @!**
W = 06 = !
X = 20 = +
Y = 07 = !*
Z = 21 = +*
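To make the scheme concrete, here is a minimal sketch in Python of the numbering and symbol decomposition. This is our reconstruction for illustration, not a program Kryptonite actually used (they worked by hand); the symbol values come from the table above.

```python
# Symbol values from the table above.
values = {'*': 1, '?': 3, '!': 6, '@': 11, '+': 20}

# Number the letters by stepping two at a time through the alphabet,
# starting at M (M, O, Q, ... = 1-13), then covering the skipped
# letters starting at L (L, N, P, ... = 14-26).
order = []
for start in ('M', 'L'):
    i = ord(start) - ord('A')
    for _ in range(13):
        order.append(chr(ord('A') + i))
        i = (i + 2) % 26
numbers = {letter: n for n, letter in enumerate(order, start=1)}

def to_symbols(n):
    # Greedy, largest value first; for these five values this happens
    # to reproduce the minimal symbol groups shown in the table.
    out = []
    for sym in sorted(values, key=values.get, reverse=True):
        while n >= values[sym]:
            out.append(sym)
            n -= values[sym]
    return ''.join(out)

def encode(text):
    return ' '.join(to_symbols(numbers[c]) for c in text.upper() if c in numbers)
```

For example, encode('ha') yields '+?** !**', matching H = 25 and A = 8 in the table.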
Each letter in the plaintext was encoded solely by using the symbols (*, ?, !, @, +). Any letter preceding these symbols in the code is complete nonsense and has no significance in decoding the text.
One advantage of this system is that the length of each group of symbols representing a character of the plaintext does not remain constant: some letters were represented by a single symbol while others required four. One disadvantage is that there seemed to be no subtle way to represent punctuation marks without revealing where one sentence began and another ended, nor any way to conceal word length. In addition, distinguishing between capital and lowercase letters would have complicated the system. Clearly, representing these aspects of word structure presented formidable obstacles. As a result, we elected to ignore them and encode the text without representing them.
The Null Set
For The Null Set's cryptogram, we began by doing a few basic statistical breakdowns and then proceeded to draw conclusions and make assumptions from there. Due to the limited time, resources, and patience of our group, these assumptions were often quite large and unfounded.
First, a total character count as well as a count of each individual character was performed. In total, there are 6655 characters, not counting spaces. We assumed that the plaintext in question was not much more than 500 words (humans, being naturally inclined toward sloth, would do the minimum required) and that the average length of a word is 5 characters. This led us to believe that the total number of characters in the plaintext would approximate 2500. The disparity between the plaintext and the ciphertext is striking; it suggests one of two things. The Null Set may have cleverly inserted a large quantity of "junk" into the enciphered message, making it difficult to sort the proverbial wheat from the chaff. The other possibility considered was that there was not a direct 1:1 correspondence between plaintext characters and ciphertext characters.
Because sorting "junk" from important material would be too difficult to attempt, we decided to rule that possibility out. While this is not good logic, it makes for effective heuristics in this particular case. Having made this assumption, we decided to group the text in pairs of characters and proceed from there.
The problem with this approach is that there is an odd number of characters in the text. This proved to be an insurmountable problem, though we were still able to explore the text and draw other conclusions. First, the crazed use of symbols was a mere ruse. By converting each of the seven distinct symbols into a corresponding number in order of appearance, the text is simplified and much more pleasing to the eye.
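The "order of appearance" renumbering can be sketched in a few lines. The sample string in the test is a stand-in, since The Null Set's actual seven symbols are not reproduced here:

```python
def numeralize(ciphertext):
    # Replace each distinct symbol with a digit, numbered in order of
    # first appearance, turning an arbitrary symbol soup into digits.
    mapping = {}
    out = []
    for ch in ciphertext:
        if ch not in mapping:
            mapping[ch] = str(len(mapping) + 1)
        out.append(mapping[ch])
    return ''.join(out)
```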
In addition, this numerical substitution uncovered a less-than-shrewd attempt at throwing "junk" into the message. The pattern 1112131415 (spaces omitted) shows up thirty-two times in the message. Due to its sequential nature, it was attributed to deception rather than a naturally occurring series in the text, and so was thrown out entirely.
Other interesting, though fruitless, facts were gathered. The series 235621223115623624154232 appears twice in close succession in the text. When 444 occurs, it is always accompanied by a 1 on either the right or the left side, but never both. Only sixteen occurrences of the 7 exist, and it is preceded by a 4 in fifteen of those cases. Several four-character sets occur more than fifteen times in the text. Most notable among these are 2231 and 1321, both with thirty-eight occurrences, followed closely by 3132 with thirty-five, 1535 with twenty-five, and 1235 with twenty-four.
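Counts like these are tedious by hand. A short sketch, assuming the renumbered ciphertext is available as one digit string, shows how the junk pattern could be stripped and the four-character sets tallied:

```python
from collections import Counter

JUNK = "1112131415"  # the sequential filler pattern we discarded

def clean(digits):
    # Strip every occurrence of the filler from the renumbered text.
    return digits.replace(JUNK, "")

def top_four_grams(digits, k=5):
    # Tally every overlapping four-character run, return the most common.
    grams = Counter(digits[i:i + 4] for i in range(len(digits) - 3))
    return grams.most_common(k)
```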
The significance of these peculiarities is unclear, though they should lead to some insights into the nature of the text if examined more carefully. While they may not directly give away the content, they show promise in revealing the structure of the plaintext.
With more time, we believe that this code might have been at least partially broken. However, this is mere speculation. What was needed in this case was a resource that would take the menial drudgery out of the work and leave the humans primarily with analytical work. Even with extensive use of word-processor substitution, formatting, and counting, the task was still a formidable one.
The Enigma Code
At first glance, the Enigma code appeared to be the simplest code to break because we knew that the numbers could not be arbitrary; certain numbers or number sequences had to correspond to a distinct letter of our alphabet. Moreover, the time involved in encoding the plaintext of the published writing made it likely that the code did not involve an overly difficult transposition of the letters. What this means is that the words of the plaintext probably would not be mixed, nor would they be written backwards. Also, the words themselves would probably not be rearranged (they would have to be arranged in a set pattern, an extremely time-consuming task for the encoder). Therefore, we went about our decipherment analytically and with certain assumptions about the encoding; assumptions are necessary, for they allow the decoder to make intuitive leaps in judgment about the nature of the encoding.
The first action undertaken by our group was counting the total number of characters in the encoded text. We did this because we assumed no more than 500 words of plaintext had been encoded (the time factor) and that the total number of characters should be around 2500. We believed that, if there were a direct one-to-one correspondence between a plaintext letter and the symbols after encoding, then 2500 characters (approximately 5 characters per word) would be a reasonable count. Enigma has 8400 characters, a number too large for a direct correspondence. Therefore, either more than one character in the code could stand for a letter, or there are filler characters (characters used solely to confuse). We believed both: fillers were in the code, and more than one character in the code stood for a letter of the plaintext.
The main problem is that the Enigma code is not in paragraph form but is separated into groups of 5 characters; thus, words have to be recreated after the filler is separated from the plaintext. We attempted to separate the filler from the plaintext, but we were ultimately unsuccessful. Our first step was determining which four-number combinations appeared most frequently in the Enigma code. They were:
After some discussion, we came to the conclusion that four-number combinations were probably not equal to one letter of the plaintext; it was most likely two-number combinations that were actual letters of the plaintext. Using our earlier assumption about four-number combinations, we were able to conclude that the number 22 was most likely a vowel due to its frequent use. Moreover, the number 91 appeared most frequently, and by Poe's method of cryptanalysis we concluded that 91 must be the letter E. Also, we believed that the zero or six was probably a filler in certain situations, but we could not determine whether zero could be part of a letter in the plaintext or not. Unfortunately, we were not able to decrypt the Enigma code.
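The pairing-and-counting step is easy to sketch. The digit strings in the test are stand-ins, since the actual Enigma ciphertext is not reproduced here:

```python
from collections import Counter

def digit_pairs(digits):
    # Group the numeric ciphertext into two-number combinations
    # (a trailing unpaired digit, if any, is dropped).
    return [digits[i:i + 2] for i in range(0, len(digits) - 1, 2)]

def frequency_table(digits):
    # Rank the pairs by frequency; by Poe's method, the most common
    # pair is the first candidate for the letter E.
    return Counter(digit_pairs(digits)).most_common()
```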
The Enigma code proved too challenging to solve because we believed that transposition was used in encrypting the plaintext. Unless the pattern could be determined, the Enigma code would be almost impossible to solve, especially if there were multiple layers of encryption and if some letters had more than one number combination. Only after months of exhaustive cryptanalysis would this code prove breakable with our meager resources.
The Crypts
At first glance, the Crypts code seemed almost unsolvable. Despite the group's reference to the Los Angeles gang, their code was far more cryptic than the gang signs of their namesake. The Crypts code was, in fact, similar to the Enigma code used by the Nazis during World War II, and anyone familiar with the decryption of that particular code knows how difficult it was to break. Luckily, we knew that the Crypts would not build sophisticated machinery (remember, one of their members is from Minnesota) to implement a code, and we decided to decrypt the code in the most analytical manner possible.
Unfortunately, the appearance of the code makes it extremely difficult to decode; in fact, it is almost impossible. The symbols in the encryption are letters of the English alphabet, but they obviously do not correspond directly to letters of the plaintext, thus rendering a direct letter-frequency count inconclusive. However, we did begin by counting how many times certain letters appeared. A brief list includes:
All the other letters were counted similarly. However, we were not able to make conjectures about the nature of the code as simply as the protagonist did in Poe's "The Gold-Bug." In Poe's story, the main character knew that the symbols had a direct correspondence to the letters of the encrypted message and was able to make hypotheses about certain words and letter combinations; we did not have this advantage. Poe's message was also written in some form of correct syntax with a somewhat logical progression of words; we did not know whether the code had been written backwards or with words placed illogically but in a uniform method. Therefore, the code was almost impossible to break, but we did make some educated speculations about its nature.
Once again, our first step was counting the number of characters in the ciphertext; in this particular code there were 2938 characters, making us believe that there was a one-to-one correspondence between a letter in the plaintext and a letter in the code. However, this revelation did not make decryption any easier. Immediately, we believed that the code was a substitution cipher, in which a phrase or group of letters could be written underneath an existing alphabet as a means of encryption. In this type of system, one letter of the substitution cipher could stand for more than one letter in the plaintext. This makes the task of deciphering difficult even for someone with the key, for they must know basic principles of the English language to decrypt the code. What also made decryption difficult was that there was no grammar or syntax to the code; all the letters were evenly spaced except for a few instances where two letters were close to one another. We did not know what to make of these close letters, and eventually determined that they were more a problem of computer formatting than an actual attempt to trip up our decryption. However, they did not need to trip us up any further than they already had; a thorough decryption was impossible.
This code was an instance where having a computer would have proven very useful. A computer could have made fast work of a simple substitution cipher, because it could have searched through many possibilities in a short time. We believe that this code could have been deciphered had there been sufficient time to compute all the possibilities and then determine, by grammatical laws, what was contained in the encoded text. However, going through all the possibilities (I believe it is 26!/25 or something similar) would have been nearly impossible. The code, we believe, was not very ingenious, but it was difficult nonetheless.
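The size of that search space can be pinned down exactly: a simple substitution key is a permutation of the 26 letters, so there are 26! possible keys, roughly 4 x 10^26. A quick computation makes the point:

```python
import math

# A simple substitution key is a permutation of the 26 letters.
keyspace = math.factorial(26)

# Even checking a billion keys per second, exhaustive search is hopeless.
years = keyspace / 1e9 / (60 * 60 * 24 * 365)
```

At a billion keys per second, trying them all would take more than ten billion years, which is why frequency analysis, not brute force, is the practical attack on a substitution cipher.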
From "The Urbanization of Humor" by Walter Blair
In no other type of contemporary American writing are the changes in American society during the early years of the century more apparent than in our humor. Our chief humorists until the nineteen twenties were all rustic or western. Since the nineteen twenties, by contrast, our famous humorists have been urban.
Created by Benjamin Franklin early in the eighteenth century, Poor Richard Saunders led a procession of prodigiously popular humorous characters who, for two centuries, delighted nationwide audiences and often even shaped political decisions. Poor Richard had the essential traits of these characters. He was a countryman, and the best-loved humorous pundits showed by their dialect and their subject matter their farm or frontier origins. Poor Richard was uneducated but so acute and so experienced in the ways of the world that he could make witty comments which a nation that worshipped what it called "horse sense" vastly appreciated. So it was with later vernacular humorists. There were Davy Crockett, coonskin frontier congressman, and Lowell's Yankee farmer Hosea Biglow, who his creator said personified "common-sense vivified and heated by conscience." There was H. W. Shaw's Ohio bumpkin, Josh Billings, whose creed was, "You have got to be wize [he spelled it with a z] before you can be witty." There was Mark Twain, creator of the Missouri ragamuffin Huck Finn and of the Connecticut Yankee, Hank Morgan--both, as Mark said, "ignoramuses" so far as book learning was concerned, but both so blessed with gumption that they could communicate shrewdly.
The last giant in this procession of homespun humorists was Will Rogers, the Oklahoma-born cowboy. "I've been eatin' pretty regular," said Will, "and the reason . . . is because I've stayed an old country boy." Thanks to varied media--syndicated newspaper columns, moving pictures, the radio--Will became more widely known than any of his predecessors in the tradition. But with his death in nineteen thirty-five--a symbolically appropriate one, in an airplane crash--the procession ended. Since then some old-time humorists have won some prominence. An occasional Harry Golden, aided by the timeliness of many of his preachments, could score a remarkable success--three best-selling books of humorous commentaries in a row. Nevertheless, during the quarter century since Rogers' death, no humorist of his type with an iota of his prominence has arisen.
During his last years, Rogers was actually an anachronism who offered proof that old humorous traditions die hard. Widespread education had led many Americans to believe that book learning, which old-time humorists had scorned, was a better guide to wisdom than horse sense. The incongruity between ignorance and insight, our oldest joke, was no longer sure-fire. The rural and frontier civilizations which had nurtured dialect humorists were being replaced by an urban civilization lacking respect for men who talk in the vernacular. Already, humor of a new and very different sort was burgeoning.
The magazine which would be largely responsible for the rise of this new humor disavowed, even before its start in nineteen twenty-five, any interest in the rural and small-town readers who had been so important a part of the humorists' audience in the past.