Tuesday, November 30, 2010

Section 16.5; Due December 8

Difficult: The steps of the cryptosystems here are all rather difficult to follow; I look forward to seeing examples in class. I'm having trouble understanding how finding the coefficient "a" from the repeated addition on the elliptic curve can be as difficult as solving a discrete log problem. Is there a proof, as with the Diffie-Hellman problem, to show that this is the same as solving a discrete log?
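To make the asymmetry concrete, here is a minimal sketch of repeated addition on a made-up curve y^2 = x^3 + 2x + 3 mod 97 with a made-up base point (far too small to be secure): computing a*P takes only about log2(a) additions, while recovering a from a*P is, as far as anyone knows, essentially an elliptic curve discrete log problem.

```python
# Toy sketch of the group operation behind elliptic curve cryptosystems.
# The curve, prime, and base point are made up; real systems use huge primes.

P_MOD = 97           # small prime modulus
A, B = 2, 3          # coefficients of y^2 = x^3 + A*x + B

def ec_add(P, Q):
    """Add points on the curve mod P_MOD; None stands for the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                              # P + (-P) = infinity
    if P == Q:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD          # secant slope
    x3 = (m * m - x1 - x2) % P_MOD
    return (x3, (m * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, P):
    """Repeated addition k*P via doubling; fast even when k is enormous."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R
```

Going forward (from k to k*P) is cheap; going backward with no shortcut known is what the cryptosystems in this chapter lean on.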

Reflective: Wow, last section! This has been a great class; I've really enjoyed all that we have done and learned. It was great to get another practical use out of something as abstract as an elliptic curve. Is there also some RSA-type algorithm that you can do with an elliptic curve? I would suppose not, since we need to work mod a prime, as in section 16.2, but it seems there might be some other way. Maybe using two elliptic curves, mod p_1 and mod q_1.

Wednesday, November 24, 2010

Section 16.4; Due December 6

Difficult: The discussion of addition with these elliptic curves was much more confusing than before. In the example, it seemed to say that (0,0)+(1,1) did not have a solution, then it added (0,0) again to get the identity (which I do not see), and then suddenly there was a solution because (0,0)+(1,1) had to be -(0,0)? It also says that elliptic curves in characteristic 2 are important to cryptography, but I didn't really see where it discussed why these would be best.

Reflective: The mash-up of finite fields with the (to me) new group structure on elliptic curves was more difficult to grasp than I thought it would be. It seems not too different from the curves mod p, but it is still difficult. You would think there would be more description in the book; this was a short section.

Section 16.3; Due December 3

Difficult: So we can just pick any elliptic curve and it will probably factor the number? How can we be sure, with good probability, that we have a curve that will work? It is interesting, though, that we can just keep picking curves until one works, unlike the p-1 method. I also did not understand how the singular curves were handled.

Reflective: So the p-1 and brute force algorithms are just special cases of the elliptic curve algorithm? Never thought of it that way before, of course. Or is it just that there are special cases of the elliptic curve algorithm that are the same as the p-1 and brute force algorithms? The moniker "smooth" for numbers with only small prime factors caught my eye as I have been working with "smooth" functions on manifolds a lot recently. Is there some relation in this terminology? I couldn't think of one.

Section 16.2; Due December 1

Difficult: I do not quite understand the idea of encrypting messages with an elliptic curve mod p. So, you pick your message and then attempt to build your curve based on using your message as a point? Or do you have the curve first and then try to guess a point on the curve based on your message by adding things to your message? If it is the second method, how would the person you are sending the message to know which part is junk? Or even if you did not have to add any junk?

Reflective: The elliptic curve mod p doesn't seem that much more difficult than the elliptic curve without modular arithmetic. The number of points on the curve being so unpredictable is interesting, but I suppose it is just another "finding square roots mod n" problem. If using a prime p, do we need to ensure that p ≡ 3 (mod 4) as we did before?
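On that unpredictability: counting points by brute force on an arbitrary example curve (y^2 = x^3 + x + 1, my own choice, not from the book) shows the counts bouncing around inside the window promised by Hasse's theorem, |N - (p + 1)| <= 2*sqrt(p), without any obvious pattern.

```python
# Count points on y^2 = x^3 + A*x + B over Z/pZ, including the point at infinity.

def count_points(A, B, p):
    squares = {x * x % p for x in range(p)}      # the squares mod p
    count = 1                                    # start with the point at infinity
    for x in range(p):
        rhs = (x ** 3 + A * x + B) % p
        if rhs == 0:
            count += 1                           # one point, with y = 0
        elif rhs in squares:
            count += 2                           # two points, y and -y
    return count
```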

Section 16.1; Due November 29

Difficult: Why do they define addition of points on the curve as the reflection of the third point where the secant line hits the curve? It seems like it would be more natural to just define the sum of two points on the curve to be the point where the secant line hits the curve. Is there some sort of computational benefit to adding in the reflection that I just do not see? The formulas seem to follow easily from the definition given, but it would seem to me just as easy to come up with formulas for the nonreflected point.

Reflective: Putting a group structure on points of a curve is particularly interesting to me in light of the research I have done in topology. This structure does not seem to be quite as natural a structure as what I am used to, so it is more interesting to think of what it all means. For example, I spent some time thinking about the curve with two components (y^2 = x(x-1)(x+1)) and what the definition of addition does with that group. It looks like the points on the circle would generate the full group, as you can obtain any point on the line from two points on the circle (including the point at infinity). Possibly they do not generate though, since I see no way you could get a point on the circle from one or two points on the circle.
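For playing with that two-component example, the chord rule over the rationals is short enough to script; the curve below is y^2 = x^3 - x, my reading of the x(x-1)(x+1) curve. One payoff of the reflection step, incidentally, is that it makes the point at infinity the identity; taking the unreflected third intersection point instead gives an operation with no identity that also fails associativity.

```python
from fractions import Fraction

# Chord addition over the rationals on the two-component curve y^2 = x^3 - x.

def add_distinct(P, Q):
    """Add distinct finite points on y^2 = x^3 - x; None is the point at infinity."""
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2:
        return None                          # vertical chord: the sum is infinity
    m = Fraction(y2 - y1, x2 - x1)           # slope of the secant line
    x3 = m * m - x1 - x2                     # x of the third intersection
    y3 = m * (x1 - x3) - y1                  # reflected across the x-axis
    return (x3, y3)
```

For instance, adding the two-torsion points (0,0) and (1,0) lands on the third point (-1,0) where the x-axis meets the curve.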

Monday, November 22, 2010

Section 2.12, due November 23

Difficult: I do not fully understand the discussion of the permutations used to break Enigma. The text mentions that letters never mapped to themselves in the machine, but the 1-cycles in the examples seem to indicate that they could. Also, how exactly did determining the cycles reduce the problem to a substitution cipher? The initial keys were still quite random, so it seems that there would be trouble figuring out the permutations you need. How many messages were needed to work out which permutations you were looking at for a day?

Reflective: This is just an extremely interesting section. I enjoyed learning more about the Enigma machines. I had no idea that the British then used them to spy on former colonies; that is quite interesting. Good to see governments haven't changed quite so much as we might think. I was disappointed that the links you posted did not work for me, so I could not read the extra information from the NSA.

Thursday, November 18, 2010

Sections 19.1 & 19.2; Due November 19

Difficult: I found it difficult to understand what physicists (and for that matter chemists) believe about photons. Do they really think that a photon does not have a state until it is observed? When did science become a religion? Anyway, that's not really answerable; I just get frustrated when people try to explain something with illogical conclusions. Also, there wasn't much to be confused about in this section really. The theoretical quantum bit key exchange makes sense. They talk about methods to use to transmit these keys (fiber optics, etc.) and the distance they can send, so does that mean that the method is not really theoretical any more? Also, I remember discussing in my undergraduate chemistry classes using the chirality of a molecule to encode messages; is there any more information on this method?

Reflective: It seems that just message-sending cryptography isn't the meat of the quantum computing idea. Isn't the way computers communicate with their internal parts a form of cryptography? They must encode electronic signals and decipher what they mean. Maybe it would be better to focus on efficient ways to encode those messages rather than just on secure message sending that could be accomplished with a flashlight and some Polaroid filters.

Tuesday, November 16, 2010

Sections 14.1 & 14.2; Due November 17

Difficult: I do not entirely understand the use of the hash function in the zero knowledge protocol. Why is it beneficial to use a hash function here when Peggy will still have to give full numbers for her answers? How would a chip be made unreadable? Wouldn't that mean that the machine that Peggy puts the card in would not be able to read the chip either?

Reflective: More importantly, what is to prevent Eve from using the algorithm with the $5 wrench described on the test to find Peggy's numbers? Seriously though, I think I see the use of these protocols. Since not even Victor will know the answers that Peggy uses (his computer throws them away), he cannot give away that information. It seems like it would be easy to adapt one of these protocols to use ElGamal also, which might be easier to compute than the RSA types they used.

Saturday, November 13, 2010

Sections 12.1 and 12.2; Due November 15

Difficult: I do not completely understand the Lagrange interpolation polynomial, or why it works. It seems like it is not so important anyway; as far as I can tell, it would be as simple to solve the system linearly as to use the polynomial. The rest of these sections made plenty of sense.

Reflective: Yeah, I'd love to see someone work one of these schemes into a spy movie.
--"Quick, we need to launch the nuclear warhead. Get the president, the general, and the secretary of state!"
--"Ok gentlemen, do you remember your point in three space?"
*The next ten minutes are spent setting up a matrix and solving the resulting system of equations by hand because the aliens knocked out the mainframe so only the launch computer works...during this time all of the audience leaves...except for three kids from a college math class.* It is a neat idea though; I like the ideas on how to have multiple "secret holders" who have to be together to get the secret.

Thursday, November 11, 2010

Exam 2 Review Questions; Due November 12

Which topics or ideas do you think are the most important out of those we have studied?

I think that the most important ideas for these sections are the myriad methods we have studied to factor into primes. While I do not think that we can be asked to use the methods in a serious manner on the test, the ability to factor into primes was needed for breaking all of the new cryptosystems we learned in these sections, so we will have to at least be able to explain the theory behind each method. Besides that, we have two new encryption methods (three if you count the Diffie-Hellman key exchange), many electronic signature methods, and hash functions. Oh my gosh, the hash functions!

What kinds of questions do you expect to see on the exam?

I think that we will see many theoretical questions. I expect to be asked how to factor into primes by multiple methods, how to find the square roots of numbers in modular arithmetic, and how to use these methods to break RSA and other systems. I expect something on the quadratic sieve, probably just having to explain how it works. I expect to have to be able to find simple discrete logarithms (probably ones small enough to list out the powers for) and to explain more sophisticated methods to find discrete logs, like the Pohlig-Hellman algorithm and birthday attacks. I expect questions on the probability of the birthday attack succeeding in various (possibly non-cryptographic) situations. Finally, I expect some questions on hash functions and using them for digital signatures. Mainly I expect things like the definition of a hash function and of strongly collision-free and preimage resistant.

What do you need to work on understanding better before the exam?

I need to work on hash functions some more. I have trouble following the diagrams for the one true hash function we have discussed. Also, with so many factoring and discrete logarithm methods, I get mixed up as to what I should be doing where. Oh, and the prime tests. I almost forgot about them, so I need to look over them for sure.

Are there topics you are especially interested in studying during the rest of the semester? What are they?

I was hoping that we could look at some of the applications of these systems we have been studying. We have discussed how DES is used in television transmission and some uses of systems like one-time pads. But how do digital signatures work in the real world? What happens when I buy something online to keep my data safe? Other than that, I don't mind just learning new cryptosystems and how to break them.

Monday, November 8, 2010

Sections 8.3 & 9.5; Due November 10

Difficult: Am I allowed to put "Section 8.3" here and leave it at that? Probably not. SHA-1 is quite confusing though, which is probably its point in the first place. I have a hard time following the steps, partially because of all of the newly defined operations, and partially because of the sudden introduction of hexadecimal, with which I am only vaguely familiar. The diagrams (8.2 and 8.3) are definitely not clarifying matters. I do see that all of the important parts come in step 3, and I understand how they adjust the message to a specified length where it can be split into 512-bit blocks.
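The length-adjustment step, at least, can be pinned down in a few lines. This is my sketch of the standard SHA-1 padding rule: append a single 1 bit (the 0x80 byte), then zero bits, then the original message length as a 64-bit big-endian integer, so the total is a multiple of 512 bits.

```python
# SHA-1's preprocessing step: pad the message to a multiple of 64 bytes (512 bits).

def sha1_pad(message: bytes) -> bytes:
    bit_len = len(message) * 8
    padded = message + b"\x80"                       # 0x80 = the 1 bit plus seven 0 bits
    padded += b"\x00" * ((56 - len(padded) % 64) % 64)
    padded += bit_len.to_bytes(8, "big")             # original length, big-endian
    return padded
```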

Reflective: While the text does discuss many ways in which the DSA is more secure than ElGamal, it does not mention birthday attacks on this algorithm. I suppose this is because it is assumed to be secure from a birthday attack as long as you choose a large enough prime? Or is it because there are two "coded" parts? You would need to have the right hash first, and then work out the signature from there, so that complicates matters too.

Saturday, November 6, 2010

Sections 9.1-9.4, Due November 8

Difficult: I was a little confused by the hash signature. Is the hash function itself the signature (since they are difficult to duplicate), do you take the hash of the message and then sign it, or do you sign the message and then take the hash of the signature? Other than that these sections were pretty accessible. We have discussed all of the encryption algorithms used, and there is only a slight change in usage for signatures. It is all well explained.

Reflective: I appreciated the discussion of how to defend against the birthday attack on hash functions. It seems like such a simple precaution: just change one piece and you probably win. We still haven't really discussed how exactly these functions are used for securing documents (signatures obviously, but I know that I have no personal encryption function to sign things with), so it makes me wonder how these are used when I make a purchase online. Also, I am never given the option of changing a document online, so how can I know that the document doesn't have a birthday "double"?

Monday, November 1, 2010

Sections 8.4-8.5 & 8.7; Due November 5

Difficult: First off, they do not explain it, but it seems that the symbol must stand for concatenation? Possibly this was in a section that has not been assigned yet, or maybe I just forgot the notation from an earlier assignment. The birthday attack seems very interesting, especially the paradox part (that you can find two matching birthdays with high probability, but not match a given one). It looks like they violate this paradox with the birthday attack on discrete logarithms, however. I understand how they are using the matching of two different groups instead of trying to match a single value, but I do not understand why a similar method cannot be used in many other situations where you want to match a given value.
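On the paradox itself, the two probabilities are easy to compute side by side (assuming 365 equally likely birthdays):

```python
# The birthday paradox in two functions: a collision among k people is far more
# likely than a match to one particular fixed birthday.

def prob_any_shared(k, N=365):
    """P(some two of k people share a birthday), with N equally likely days."""
    p_all_distinct = 1.0
    for i in range(k):
        p_all_distinct *= (N - i) / N
    return 1.0 - p_all_distinct

def prob_match_fixed(k, N=365):
    """P(at least one of k people has one particular fixed birthday)."""
    return 1.0 - ((N - 1) / N) ** k
```

With 23 people, a shared pair is already more likely than not (about 0.507), while matching one fixed birthday has probability only about 0.06.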

Reflective: Frequent discussions of how we cannot truly create a random sequence without natural phenomena have led me to wonder whether anyone has been working on this problem in a manner besides trying to find more and more random mathematical functions. Most of the natural phenomena mentioned as random are inherently slow (such as flipping coins or counting clicks in a second). However, there are many random natural phenomena that occur quickly and can be read with modern machinery. For example, I seem to remember from my chemistry classes that vibrations in a crystal lattice are random, but happen many times in a millisecond. We can "read" these vibrations even in a small sample of crystal, so why could we not use this as a speedy random sequence generator? If the number of vibrations in a millisecond (or less if you need) is even, you get a 0; if odd, you get a 1. Small crystals and the equipment to read their vibrations could then be installed in your desktop. I may be wrong with the specific example, but I am sure there are many random, small-scale, fast phenomena that we know about now.

Sections 8.1-8.2, Due November 3

Difficult: I do not understand the difference between finding m such that h(m)=y for a given y and finding m' such that h(m')=h(m). It seems that in both cases you will first need to know that h(m)=y, and if you have that, what is the point of finding a different m' with h(m')=y? Are we just trying to find a random input that would give an output that we do not know?

Reflective: While I do not see why you would want to find the alternate m', it is a neat idea. The book indicates that hash functions are usually used for electronic signatures. This seems to indicate that for some passwords, I would be able to enter a completely different password and have it work anyway! Does limiting the size of the input (as they do with passwords) help ensure that there is only one solution m such that h(m)=y to avoid duplicate passwords?
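A deliberately weak, made-up "hash" makes the distinction tangible: a preimage attack starts from only a digest y, while a second-preimage attack starts from a message m you already possess and hunts for a different m' with the same digest. (Nothing below resembles a real hash function.)

```python
# Toy demonstration of a second-preimage search against a weak 8-bit "hash".

def toy_hash(msg: bytes) -> int:
    """Weak toy hash: sum of the bytes mod 256."""
    return sum(msg) % 256

def second_preimage(m: bytes) -> bytes:
    """Brute-force some m' != m with toy_hash(m') == toy_hash(m)."""
    target = toy_hash(m)
    i = 0
    while True:
        candidate = i.to_bytes(2, "big")     # try all two-byte messages in order
        if candidate != m and toy_hash(candidate) == target:
            return candidate
        i += 1
```

For a password system, the attacker already "has" an m (the digest is stored), so it is exactly second preimages that would let a completely different password log in.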

Saturday, October 30, 2010

Sections 7.3-7.5, Due November 1

Difficult: I found the discussion of the security of ElGamal ciphertexts to be a bit confusing. It might be from the strange way it is presented, with theoretical machines doing things that the sections just said cannot be done. Also, there are a couple of methods given to find discrete logs under certain conditions, and the chapter discusses that it is difficult to find discrete logs in general, but it doesn't really talk about how difficult. I suppose this sounds silly, but I was wondering. We talked about how, with the RSA algorithm, adding a couple of digits to your primes increased the computation time to factor n exponentially; how does the computation increase with discrete logs when adding digits?
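One benchmark for "how difficult" is baby-step giant-step, sketched below with toy numbers: it finds a discrete log mod p in roughly sqrt(p) multiplications instead of up to p, so even for the best generic methods each extra digit in p still multiplies the attacker's work by about sqrt(10).

```python
# Baby-step giant-step for alpha^x = beta (mod p), about sqrt(p) work.

def bsgs(alpha, beta, p):
    """Return some x with pow(alpha, x, p) == beta, or None if none exists."""
    m = int(p ** 0.5) + 1
    baby = {pow(alpha, j, p): j for j in range(m)}    # baby steps: alpha^j
    giant = pow(alpha, -m, p)                         # giant step: multiply by alpha^(-m)
    gamma = beta % p
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * giant % p
    return None
```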

Reflective: I wish the author had decided to include Alice's method for predicting the outcome of football games. That would come in handy I think. The way of sending the prediction so that Bob could tell she didn't alter the message and Alice could keep the message secret until she wanted to reveal it was interesting though. It seems similar to the idea of the countries wanting to share information with each being able to read the message, but not being able to alter the message that we discussed with RSA.

Friday, October 22, 2010

Sections 6.5-6.7 and 7.1, Due October 27

Difficult: The more "mathematical" definition of public key cryptosystems is more difficult to understand than the intuitive description. Of course this is usually the case, and describing how RSA fits helped a lot. Other than having to think over the definition a bit, these sections did not have much to be difficult in them. Except understanding what a squeamish ossifrage is.

Reflective: I love the sections that talk about the history of cryptography the most. The RSA challenge seems to wrap up what we have talked about well, and I liked seeing the application of the method from 6.4.1. When I read 6.4.1, I thought that the method should not take too long, but this story shows that indeed it can. I found it funny that after all that work, they only had to try 4 numbers to get the factorization! Discrete logarithms seem to be a promising chapter too. I had not heard about these before, but they seem like they should be accessible. I suppose in this method, the message will be the exponent, alpha will be the public key, and the discrete log base alpha will be the secret decryption function.

Section 6.4.1, Due October 25

Difficult: It mentions on page 185 that the easy way to pick good numbers for this method is to look at numbers of the form sqrt(i*n) + j for small j, which makes sense as we are looking to cough up products of small primes. This seems to be pretty reliable, but we have discussed how none of the current methods are sure-fire quick ways to factor a large number. What is the trouble with this one? Obviously, we have the possibility every time of finding only multiples of n, but it seems that we could just choose more numbers to try and we will get it eventually, and it doesn't seem to take too much time to compute one more row of numbers. Does this method fail if p-1 and q-1 have no small prime factors? I don't see how that would mess things up. Is it possible that for a given n there exist no x and y that fit the hypothesis of the Basic Principle? It seems like for large n, there should always be such x and y.
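For the Basic Principle part: once you do have x and y with x^2 ≡ y^2 (mod n) and x ≢ ±y (mod n), the factor falls out of a single gcd. A tiny made-up instance:

```python
from math import gcd

# The "Basic Principle": if x^2 = y^2 (mod n) but x != +-y (mod n),
# then gcd(x - y, n) is a nontrivial factor of n.

def factor_from_squares(x, y, n):
    assert (x * x - y * y) % n == 0                 # congruence of squares
    assert x % n != y % n and x % n != (-y) % n     # and not the trivial pair
    return gcd(x - y, n)
```

Here 9^2 = 81 ≡ 4 = 2^2 (mod 77), and gcd(9 - 2, 77) = 7. Such pairs always exist when n is odd with at least two distinct prime factors, since each square then has at least four square roots mod n; the work is in finding them.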

Reflective: This method does not seem difficult or to take much time (except maybe by hand since we are working with large numbers). Clearly it must take more than I am imagining. It is interesting that they can now regularly factor numbers of more than 200 digits easily, but then we can still exponentially increase the difficulty by adding more digits so I suppose that is not terribly encouraging. I noticed we were not required to read section 6.4.2, are these methods not as useful, or not as simple?

Thursday, October 21, 2010

Section 6.4 up to 6.4.1, Due October 22

Difficult: I fail to see how the p-1 factoring algorithm is much more useful than Fermat's method. It seems you are still just picking a random number and making a wish. Also, it seems possible that, for p large enough, you can easily miss on picking a B such that p-1 divides B!. While it does discuss HOW the algorithm works, it doesn't seem to mention much about WHY it works. If I just pick a=2, then it seems that I am hoping that the only small prime factor of p-1 is 2, since if p-1 had another small prime factor then it would not make sense for p-1 to divide B!. Maybe I'm just not thinking of it correctly because of the modulus?

Reflective: I'm really excited to learn about new methods of factoring numbers. Up until now, I had only known the conventional method; I had not even been aware of Fermat's method. The p-1 algorithm is confusing though. I do not see why it would work, and why you would not have more success just picking a random number. Also, does it only work on products of two primes, or can it work on products of more than two primes? I would imagine that we could make it work on products of more than two primes by picking larger bounds B to try to get the next largest prime, or simply trying to factor the remaining number after dividing out the first prime factor that we discover.
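Here is a compact sketch of the p-1 algorithm with made-up numbers. It also shows the "why": the method succeeds when p - 1 is built entirely from primes (with multiplicity) up to B, because then p - 1 divides B!, so a^(B!) ≡ 1 (mod p) by Fermat, and p divides the gcd.

```python
from math import gcd

# Pollard's p-1 on a small made-up target n = 101 * 107.
# 101 - 1 = 100 = 2^2 * 5^2 is 10-smooth, so B = 10 should expose the factor 101;
# 107 - 1 = 2 * 53 is not, which is what keeps the gcd from collapsing to n.

def pollard_p_minus_1(n, B, a=2):
    """Return a nontrivial factor of n, or None if this choice of a and B fails."""
    x = a
    for k in range(2, B + 1):
        x = pow(x, k, n)                 # after the loop, x = a^(B!) mod n
    d = gcd(x - 1, n)
    return d if 1 < d < n else None

factor = pollard_p_minus_1(101 * 107, B=10)
```

Nothing here depends on n having exactly two prime factors; with more factors you can divide out each one found and continue, as suggested above.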

Friday, October 15, 2010

Section 6.3, Due October 20

Difficult: I do not understand how this fits in with what we have discussed about RSA so far. It seems that you can use the Miller-Rabin method to "break" RSA pretty readily, since n is definitely composite. Choosing a random number a, it seems that chances are high you would then be able to follow the procedure in the example on page 178 to find one of the primes p or q, and then use that knowledge to find the other. So is RSA not secure after all? I didn't fully understand the discussion of why the test works on page 179, but I think I might after reading it a few more times. It is just hard to follow.

Reflective: I didn't realize how easy it was to find primes. I recognize that neither of these tests proves primality for a number, but they seem to give a pretty sure bet. I notice that they mention strong pseudoprimes for multiple bases; in fact there is even one for all bases that are primes less than 200. Is this pseudoprime also good for nonprime bases less than 200? The statistics are given for the possibility of a number being a pseudoprime for some chosen base, and I'm not well versed in statistics, but it seems to me that picking more than one base to check, making sure some are prime and some are composite, would greatly raise the probability of discovering whether the number you are testing is composite.
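A sketch of the Miller-Rabin test itself helps with the statistics question: each random base is an independent chance of catching a composite, and the strong version even flags 561, a Carmichael number that fools the plain Fermat test for every base coprime to it.

```python
import random

# Miller-Rabin probabilistic primality test.

def miller_rabin(n, trials=20):
    """Return False only for composites; True means 'probably prime'."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:                 # write n - 1 = 2^s * d with d odd
        d //= 2
        s += 1
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # a is a witness: n is definitely composite
    return True
```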

Section 3.10, Due October 18

Difficult: The two methods for deciding whether a number is a square mod another number (either a prime or an odd composite) are interesting, but do not seem very useful. As it mentions at the end, if we find that the Jacobi symbol of a mod n is +1, then we still do not really know whether a is a square, and should we decide that it is, we have no way of finding its square roots if we cannot factor n. Since the point was to have an n that is difficult to factor, this does not seem very useful. I appreciated that the section came out and said at the end that the Jacobi symbol does not settle whether a is truly a square or not. At the start of that part of the section, it seemed to say that this would tell you whether a was or was not a square mod n. In fact, even in the last example it seems to say this is true, so I might still be confused about it.
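The Jacobi symbol computation (via quadratic reciprocity, no factoring needed) and the catch can both be seen in a few lines: for a prime bottom it agrees with Euler's criterion, yet (2/15) = +1 even though 2 is not a square mod 15.

```python
# Jacobi symbol (a/n) for odd n > 0, computed by quadratic reciprocity.

def jacobi(a, n):
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:                 # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result          # (2/n) = -1 when n = 3, 5 (mod 8)
        a, n = n, a                       # reciprocity: flip the symbol...
        if a % 4 == 3 and n % 4 == 3:
            result = -result              # ...with a sign when both are 3 (mod 4)
        a %= n
    return result if n == 1 else 0        # 0 when gcd(a, n) > 1
```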

Reflective: We seem to be doing a lot with prime numbers that has no practical use in cryptography...yet. I'm holding out hope that we see the use of these symbols as we go on. It is nice to be able to quickly tell if something is a square, but it seems pretty useless in application given how the RSA system works. If I know it is a square, how does that help me to break the code?

Saturday, October 9, 2010

Section 3.9, Due October 15

Difficult: I did not see how we can find p and q from this method. It seems that to discover p and q they had to have the square root of a mod n, but then the process they use to find this is to find the square root mod p and mod q, which would defeat the purpose. Otherwise, not too difficult a section.
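The mod-p half of the process is short when p ≡ 3 (mod 4): r = a^((p+1)/4) is a square root of a whenever a is a square, since r^2 = a^((p+1)/2) = a · a^((p-1)/2) ≡ a · 1 by Euler's criterion. A sketch:

```python
# One square root of a mod p, for primes p = 3 (mod 4).

def sqrt_mod_p(a, p):
    assert p % 4 == 3
    r = pow(a, (p + 1) // 4, p)
    if r * r % p != a % p:
        raise ValueError("a is not a square mod p")
    return r
```

As I read the section, the circularity is the point: with p and q in hand this is fast, and going the other way (a square root mod n without p and q) is what is supposed to be hard.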

Reflective: This was a short, simple section that taught me something new! I didn't know how to find general square roots in a finite group, hadn't even considered it. As such, I enjoyed the section quite a bit. The fact that the process was understandable helps too.

Friday, October 8, 2010

Section 6.2, Due October 13

Difficult: This was one of the more difficult sections we have done. Interesting how it followed one of the simpler sections, and also used it! The confusion started in the first theorem, where it says that "trying the method in the theorem for various values of k<1000 will eventually lead to the factorization of n." The theorem doesn't seem to give a method, just state that the factorization is possible given the conditions. So, I suppose I do not see how they would accomplish the factoring. I also had trouble understanding the argument in the timing attack, maybe because it is based on some aspects of probability that I do not understand.

Reflective: Though I did not understand the timing attack, it seemed ingenious! Somehow it makes sense that a student would think of using how long it takes a computer to work on the problem. Probably he was doing an exercise and was late turning in his homework when he thought up the idea! I can see how it would be simple to thwart this type of attack, it was just interesting that the attack could be successful.

Section 3.12, Due October 11

Difficult: This section was rather simple. I had slight difficulty when it referred to "the quotients obtained during the computation of gcd(12345,11111)", but a quick look back in the book took care of it. Though I can see the application of this to just normal calculations (if you want to divide big numbers) I don't see the application to our class yet. Possibly this would be useful to calculate quotients when finding the inverse of a number mod a large number (such as with RSA) but since the method involves the quotients calculated, it probably would not help.
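The connection to the gcd is direct enough to script: the continued fraction of 12345/11111 is exactly the list of quotients from the Euclidean algorithm, and folding the list back up recovers the fraction.

```python
# Continued fractions from Euclidean-algorithm quotients, and back again.

def cf_expansion(num, den):
    """Quotients of the continued fraction of num/den."""
    quotients = []
    while den:
        q, r = divmod(num, den)          # the same steps as gcd(num, den)
        quotients.append(q)
        num, den = den, r
    return quotients

def cf_value(quotients):
    """Rebuild (num, den) from a list of continued fraction quotients."""
    num, den = quotients[-1], 1
    for q in reversed(quotients[:-1]):
        num, den = q * num + den, num
    return num, den
```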

Reflective: I have never actually worked with continued fractions before, though I have seen them. I like that I now know something they are useful for as well how to construct useful ones. They seem like they would be less useful with good calculators than they might have been before.

Thursday, October 7, 2010

Section 6.1, Due October 8

Difficult: It still seems to me that making the key public like this is a bad idea, though the book makes an adequate argument about the decryption key still being secure. If Eve intercepts the encryption key (e and n), then she will be able to encode her own message and replace one intercepted from Alice. In other systems, we have discussed this as insecure, and I do not see a good way to protect against it here. Also, how is the message changed into numbers in the first place? I get that we are now numbering 01 to 26, but then the message is turned into a number by just jamming the numbers from each letter together? That seems like it could get confusing. In the example used, there is no big deal. Since there are only 26 letters, the 30 in 30120 cannot be mistaken for a letter. But what if we encrypted bav? Then we would have 20122, which could also be seen as 20 12 2, or tlb. These two groups of letters do not make words, so there is not much danger, but it seems that there should be some words that might cause problems.

Reflective: I suppose that this encryption method was not particularly feasible without computers. It says that "finding large primes is not difficult", but that is really "not difficult with a computer." Also, I'd love to see Bob calculating c^d on page 166 without a computer, even using the binary method. I'm guessing after all that work he'd be a bit miffed to discover that the message was just "cat".
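A toy round trip with the standard illustrative primes 61 and 53 (made-up values, not the book's page-166 numbers) shows how little Bob actually computes once square-and-multiply does the exponentiation, which Python's three-argument pow implements:

```python
# Toy RSA with tiny made-up primes; real moduli are hundreds of digits.

p, q = 61, 53
n = p * q                      # modulus, 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, gcd(e, phi) = 1
d = pow(e, -1, phi)            # private exponent: d*e = 1 (mod phi)

m = 65                         # a numeric "message" with m < n
c = pow(m, e, n)               # encrypt: c = m^e mod n (square-and-multiply)
assert pow(c, d, n) == m       # decrypt: c^d mod n recovers m
```

By hand, even the binary method means dozens of multiplications of four-digit numbers per letter, so the "not difficult" really does mean "not difficult for a computer".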

Tuesday, October 5, 2010

Sections 3.6-3.7, Due October 6

Difficult: The argument for Euler's lemma seems a bit sketchy. Just reference the last theorem? I do see how it reduces to Fermat's Little Theorem when n is a prime though. Also, I do not see how a meet-in-the-middle attack could be easily performed on the three-pass protocol. It seems like, should you choose a large enough prime, then there would be too many possible numbers with gcd(a,p-1)=1 to make the attack particularly feasible.

Reflective: I think the three-pass protocol is an interesting method to pass encryption keys, and I think that the example of the locked box helps to remember the method quite nicely. Also, I've never really gotten the idea of primitive roots until this section (3.7), though now I am not exactly sure what my hangup was. They seem clear enough now!
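The locked-box picture runs nicely with small made-up numbers: each exponent is a lock, removed by its inverse mod p-1 (which is exactly why gcd of the exponent with p-1 must be 1), and Fermat's little theorem does the unlocking.

```python
# Three-pass protocol with tiny made-up numbers: the key K is never sent unlocked.

p = 2579                         # a small prime (made up for the sketch)
K = 1001                         # the secret "key" Alice wants Bob to have, K < p

a = 7                            # Alice's exponent, gcd(a, p - 1) = 1
b = 11                           # Bob's exponent,   gcd(b, p - 1) = 1
a_inv = pow(a, -1, p - 1)        # exponent inverses live mod p-1 (Fermat)
b_inv = pow(b, -1, p - 1)

msg1 = pow(K, a, p)              # pass 1: Alice -> Bob,  K^a      (Alice's lock on)
msg2 = pow(msg1, b, p)           # pass 2: Bob -> Alice,  K^(ab)   (both locks on)
msg3 = pow(msg2, a_inv, p)       # pass 3: Alice -> Bob,  K^b      (Alice's lock off)
recovered = pow(msg3, b_inv, p)  # Bob removes his lock and recovers K
```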

Saturday, October 2, 2010

Sections 3.4-3.5, Due October 4

Difficult: I've worked with the Chinese remainder theorem before, but I found their explanation in this book difficult to follow. It seems like to solve the system of congruences x ≡ 3 (mod 7), x ≡ 5 (mod 15) you wouldn't have to reduce mod mn at the end, and so your solution would not be "unique" so much as "lowest". I suppose it is just that it is unique mod mn, but there doesn't seem to be a particular benefit to that. I think that the application of the theorem would be more understandable if they had been more explicit with their examples. They seemed to just say "oh, so this is 80" without showing how they calculated that (except for the general method they show afterwards; I'd rather see a more concrete calculation, and with smaller numbers than 12345 and 11111).
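Filling in the skipped step for that example: write x = 3 + 7t, so 7t ≡ 2 (mod 15), giving t = 11 and x = 80. The same calculation as a short script:

```python
from math import gcd

# Chinese remainder theorem for two coprime moduli.

def crt(r1, m1, r2, m2):
    """Solve x = r1 (mod m1), x = r2 (mod m2); answer is unique mod m1*m2."""
    assert gcd(m1, m2) == 1
    t = (r2 - r1) * pow(m1, -1, m2) % m2     # solve r1 + m1*t = r2 (mod m2)
    return (r1 + m1 * t) % (m1 * m2)
```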

Reflective: Didn't have too much to reflect on this time, the sections were quite short. I think the trick they used for modular exponentiation was pretty clever, and I had not seen that before. Just follows the rules of exponents that people should have learned way back in algebra. I suppose it is nice to see a different application of these.

Thursday, September 30, 2010

Exam Study Questions, Due October 1

Which topics and ideas do you think are the most important out of those we have studied?

I think that the most important ideas of those we have studied are the methods for making secure encryption systems and the reasons why the systems we have studied are, or are not, secure. To thoroughly explain why the systems are or are not secure, you need to understand how the systems work. This is a major chunk of what we have studied up until now, since you will need to know the encryption systems, functions, and also the underlying mathematics.

What kinds of questions do you expect to see on the exam?

I expect to see questions dealing with how some encryption systems work (especially Vigenère, Hill, and DES; probably Rijndael also), different encryption modes and how to diagram them (and therefore use them) and decrypt from them. I definitely expect to see some questions dealing with modular arithmetic, both involving integers and polynomials. To a lesser degree, I expect some questions involving ring theory (but not difficult ones, as that is not the focus of the class) and maybe some "encode this" or "decode this" type questions, but again not difficult ones because we will have no resources but one small calculator and a pencil.

What do you need to work on understanding better before the exam?

I need to understand DES and Rijndael better, and I need to get the modes straight (I mix up CFB and OFB sometimes, and I forget the decoding procedures). Otherwise, I feel pretty prepared.

Tuesday, September 28, 2010

Sections 5.1-5.4, Due September 29

Difficult: I found the key schedule algorithm particularly confusing; I would very much like an example of it. Most specifically, I am having trouble with the case when i is a multiple of 4; the other cases are actually not so bad, since they are just recursive. Also, the section never really explained the selection of the matrix used for MCT. Is there something special about that one, or could we use almost any 4x4 matrix with entries in GF(2^8)?

Reflective: The description of why the MC was left out in the final round made me laugh. They took almost a whole page justifying this decision, but all it really seems to boil down to is "one less step and it looks better". Really, I don't think it needs justification. I mean, this is a created system and leaving out the final MC doesn't complicate the procedure, so why should you justify yourself?

Saturday, September 25, 2010

Class Evaluation Questions, Due September 27

How long have you spent on the homework assignments? Did lecture and the reading prepare you for them? I generally spend about two hours a week on homework assignments, though this latest one (assignment 4) is a bit more difficult. The lectures have helped me prepare for homework, mainly by showing me how to implement the methods shown in the book. The reading is excellent preparation for most of the homework. Also, though you didn't ask, the online decrypting tools have come in handy.

What has contributed most to your learning in this class thus far? Thus far I think that the reading itself has contributed the most to my learning. Lecture is helpful to round out things that I might have found a little difficult in the reading, but homework has mostly been a way to help me assess my understanding and decide what you think is important rather than a learning tool.

What do you think would help you learn more effectively or make the class better for you? I think that the current methods in class are aiding my learning just fine. The only thing I would change right now is the timing of posting reading assignments. I have missed two assignments not because I haven't been reading the book, but because they were posted at some time after I had checked on Friday, and so I did not know they were assigned for Monday. Maybe reading assignments could be posted further in advance?

Thursday, September 23, 2010

Section 3.11, Due September 24

Difficult: I've had Modern Algebra already, so most of this section is review. I did get a bit lost following the discussion of LFSR Sequences in part 3.11.3. I think this arose from mixing in the linear algebra with the polynomial group. Truthfully, I think I have it worked out now though.

Reflective: Though this was a review for me, I feel it actually cleared up some Abstract Algebra for me. Galois fields have always been a struggle for me, but this section explained them differently than I have seen before and I think it helped me to get a better handle on what is going on.

Tuesday, September 21, 2010

Section 4.5-4.8, Due September 22

Difficult: I found the Cipher Feedback mode a bit difficult to understand, and along with that the Output Feedback mode, so apparently my struggle has to do with the part that is similar between the two. I think I do not fully understand how to get X_2 from X_1. It seems like we concatenate the ciphertext from encoding the first 8 bits onto the end of X_1, and then drop the first 8 bits from X_1 to obtain X_2. Am I understanding this right? If I am, then it seems that what we are doing with OFB is concatenation of the first 8 bits of X_1 to the last 8 bits of X_1, and then dropping the first 8 bits, to get X_2. That should be a cyclic process though, so it would seem undesirable.
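Here is my understanding of the 8-bit CFB register update as a toy sketch; E here is just a made-up mixing function standing in for the real 64-bit block cipher, not DES. The ciphertext byte is appended on the right and the oldest byte dropped. In OFB the only change would be feeding back the output byte o instead of c, which (if I now understand it) is why the OFB keystream does not depend on the plaintext at all:

```python
MASK64 = 2**64 - 1

def E(block):
    # Stand-in for a 64-bit block cipher (NOT secure; illustration only)
    return (block * 0x5DEECE66D + 0xB) & MASK64

def cfb_step(X, p):
    """One 8-bit CFB step: returns (ciphertext byte, next register)."""
    o = E(X) >> 56                       # leftmost 8 bits of E(X)
    c = p ^ o                            # ciphertext byte
    X_next = ((X << 8) & MASK64) | c     # drop left 8 bits, append c
    return c, X_next

def cfb_unstep(X, c):
    """Decrypt one byte; note it uses E itself, not the inverse of E."""
    p = c ^ (E(X) >> 56)
    X_next = ((X << 8) & MASK64) | c
    return p, X_next
```

Encrypting and then decrypting with the same starting register recovers the plaintext byte, and the two registers stay in sync because both sides append the same ciphertext byte.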

Interesting: Section 4.6, about breaking DES, was quite interesting. I think that now that a method has been found, it would be a better idea to adopt a new standard rather than the triple DES that the book seems to favor. The book says there are just many more possible keys for triple DES, but fails to take into account that technology is still advancing rapidly. What took many computers to do in the 90's can already be done with just one now, and processing speed and memory storage are still increasing.

Thursday, September 16, 2010

Sections 2.9-2.11, due on September 17

Difficult: Wow! This was by far the most difficult section so far. If I had to pick a most difficult part, I suppose it would be section 2.11, about Linear Feedback Shift Register Sequences. I think the main problem started because I do not understand the equivalence describing the first relation. It seems to say that the n+5th term is given by adding the nth term to the n+2nd term, but then how do you find the nth term? The nth term would have to be the n+5th term minus the n+2nd term, but you haven't gotten those yet, because you are trying to get the nth term! Possibly this is the misunderstanding that carries over so that I do not understand the known plaintext attack.
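As I understand it now, the recurrence runs forward, not backward: you are handed the first five bits as an initial fill, and x_{n+5} = x_n + x_{n+2} (mod 2) then produces every later bit. A sketch, with a made-up initial fill:

```python
def lfsr(init, n):
    """Generate n bits of x_{k+5} = x_k + x_{k+2} (mod 2) from 5 initial bits."""
    s = list(init)
    while len(s) < n:
        # 5 back is x_k, 3 back is x_{k+2}
        s.append((s[-5] + s[-3]) % 2)
    return s[:n]

print(lfsr([1, 0, 0, 1, 1], 10))
```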

Reflective: Earlier, I mentioned that I did not see how anyone could make a code that was "unbreakable". Now that the one time pad system has been explained, and I know a bit more, I can see how this would work. It seems that would be a great idea in general, but the great foil to the method seems to be the necessity of the courier to transport the one time use key. That is expensive (because you have to pay the guy) and dangerous (he could be intercepted). Other than that, it doesn't seem like it would be too hard to create a one time pad cipher. It seems to me that even with the weakness of random number generators and LFSR sequences, if you had one of these generate a key that was as long as your message and then used, say, a Vigenere style cypher, you would still not have enough patterns to mount an attack on your cyphertext.

Tuesday, September 14, 2010

Section 3.8 and 2.5-2.8, Due on September 15

Difficult: I didn't understand the point of the short section on ASCII. It seems like whether we use the "A=0, B=1, C=2...Z=25" method or ASCII to turn our letters into numbers, it is hardly different. The first method can be easily modified to include numbers 0-9, and so build any number we want, and also has been modified for various punctuation, according to the alphabet conversion applet. Maybe ASCII is useful just because not as many people are familiar with it? I also had some trouble with the decryption method for the ADFGX cypher. I would like to see that in action with an example.
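Side by side, the two encodings of a word really do differ very little for plain capital letters, which is what I was getting at:

```python
word = "KEY"
scheme = [ord(c) - ord('A') for c in word]  # A=0, B=1, ..., Z=25
ascii_codes = [ord(c) for c in word]        # standard ASCII

print(scheme)       # [10, 4, 24]
print(ascii_codes)  # [75, 69, 89]
```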

Reflective: I've really enjoyed the substitution cyphers we have studied up to section 2.7, but it seems that the block cypher is indeed a better idea. The good thing about most substitutions, I think, is that they are very easy to understand. But block cyphers were not difficult to understand (with a little knowledge of linear algebra and modular arithmetic), and they are more difficult to break. It seems like a known plaintext or known cyphertext attack would be easier to prevent than a direct unknown cyphertext attack.

Section 2.3, due on September 13

Since we hadn't been assigned reading over the weekend before, I neglected to check the assignment site to see that this had been assigned. Really I had read this section earlier because I found it interesting. I'm going to blog about it despite the fact that I missed credit, just for completeness.

Difficult: When I first read this section, I found the cyphertext attack method difficult to understand. It seemed that they were only succeeding because they already knew the length of the key word. After the class discussion, I now understand how the frequency counts were used to good effect, showing how to find the length of the key word.
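The class trick for finding the key length, as I understood it, can be sketched as counting coincidences between the cyphertext and a shifted copy of itself; shifts that are a multiple of the key length show noticeably more matches. A toy version:

```python
def coincidences(text, shift):
    """Count positions where text agrees with itself shifted by `shift`."""
    return sum(text[i] == text[i + shift] for i in range(len(text) - shift))

# A perfectly periodic toy text: shift 3 lines everything up, shift 1 doesn't
print(coincidences("ABCABCABC", 3), coincidences("ABCABCABC", 1))
```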

Reflective: I was fascinated by this cypher. It seems that it should be easily possible to modify this method so that the cypher is not so easily breakable. Possibly just giving messages in short pieces would be enough. My partner and I actually tried this as the code for our project, changing the shift cypher part to an affine cypher instead, and adding some more rules. I realize now that, should Eve know our method, then ours is only slightly harder than Vigenere's, since affine cyphers are only slightly harder than shift cyphers, but since then I've had some new ideas I'd like to see how to break based on this cypher. Maybe I'll write them up and attach them to this blog, if I have time later.

Thursday, September 9, 2010

2.1-2.2 and 2.4, Due on September 10

Difficult: The most difficult part of today's reading for me was understanding the frequency analysis counting diagram idea in section 2.4. It seemed like they were successful in decoding the message mainly because of "lucky" guessing, or probably more because they were the ones who encoded it. I am still trying to follow the logic in this section. One of the hang ups I have is, what happens when the frequency is wrong? For example, the book mentions entire novels carefully written with no e's, couldn't a secret message be purposely written the same way? This would throw off the frequency analysis. Another example I noticed of frequency analysis being wrong was in our homework (number 3). The most frequently used letter there was t, not e.
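A quick way to run the count yourself and see when the usual ranking fails (as in homework number 3, where t beat e):

```python
from collections import Counter

def letter_frequencies(text):
    """Rank letters by frequency, ignoring case and non-letters."""
    return Counter(c for c in text.upper() if c.isalpha()).most_common()

print(letter_frequencies("Attack at dawn. Meet at the gate."))
```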

Reflective: I was very interested in the use of affine ciphers. In particular, I enjoyed the discussion of using a known plaintext attack to decode them. The text said this could be accomplished with "a bit of luck", but the procedure it outlined seemed to show that you would have to be unlucky to hit letters that did not help you. As the text says, "The...procedure works unless we get a gcd of 13 (or 26)". Those seem like really specific circumstances to me. I also enjoyed seeing the use of modulus, which has always seemed a mostly useless concept, to make ciphers.
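A minimal affine cipher on capital letters (the requirement gcd(a, 26) = 1 is exactly what makes the decryption inverse exist, and it is the failure of a gcd like 13 or 26 that breaks the known plaintext equations):

```python
def affine_encrypt(text, a, b):
    """E(x) = a*x + b (mod 26) on letters A..Z; needs gcd(a, 26) == 1."""
    return ''.join(chr((a * (ord(c) - 65) + b) % 26 + 65) for c in text)

def affine_decrypt(text, a, b):
    a_inv = pow(a, -1, 26)              # inverse of a mod 26
    return ''.join(chr(a_inv * (ord(c) - 65 - b) % 26 + 65) for c in text)

print(affine_decrypt(affine_encrypt("AFFINE", 5, 8), 5, 8))  # AFFINE
```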

Guest Lecture, Due on September 10

Difficult: Truthfully, I had trouble finding much of difficulty in the guest lecture. The codes the Mormons used seemed pretty simple, with the exception of Larrabee's cipher used by George Q. Cannon. The method of implementation for this code confused me in class, but when I read the speaker's notes (posted on the readings assignment site) afterwards it made perfect sense. Despite the author saying the code was the "most perfect ever invented", I couldn't help but notice that this code very strongly resembles the Vigenere Cipher presented in our book. The major difference seems to be that instead of having the key word represent a vector, you have to have the list to match the keyword with the code you want. This seems to be less secure to me, if you are using the list posted in the book that Larrabee wrote, then it only becomes a problem of discovering your key word. Possibly they used their own cipher keys.

Reflective: While it was not difficult to understand how to use the codes of the early Mormons, it was fascinating that they even had them! I had never heard of this before, nor did I know that there was research in this field. I had to wonder how one would go about decoding the telegraph cipher where words stood for ideas. I think that an outsider would really have no hope of deciphering such a message. I had also never heard of the "Deseret Alphabet" before, nor had my native Utahn wife. Why were they trying to make a new written language? The codes and their uses interested me. I also want to find out more about George Q. Cannon now. He sounds more awesome than previously imagined.

Wednesday, September 1, 2010

Difficult: The calculations used to find the a and b such that ax+by=d will be difficult to remember, even if they are just based on the Euclidean Algorithm. I believe this works because I can see it, but I do not readily see how it works. I think I can see the connection to the discussion of using primes to generate cyphers from chapter 1, but of course I am not sure. Would it be harder to find a and b if d is large? Or maybe if a and b are large? The examples work out quickly, though the numbers involved are not small. How can we be sure to increase (or decrease) the number of steps in the Euclidean algorithm to increase (or decrease) the complexity of this calculation?
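The back-substitution can be folded into the recursion itself, which for me is easier to remember than redoing the table by hand; a sketch:

```python
def ext_gcd(x, y):
    """Return (d, a, b) with a*x + b*y == d == gcd(x, y)."""
    if y == 0:
        return x, 1, 0
    d, a, b = ext_gcd(y, x % y)
    # The recursive call gave a*y + b*(x - (x//y)*y) == d; regroup in x and y:
    return d, b, a - (x // y) * b

d, a, b = ext_gcd(240, 46)
print(d, a, b)   # d == 2, and a*240 + b*46 == 2
```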

Reflective: Still using some ring theory, though I don't think I've ever used fractions in a modulus problem before. It's pretty interesting to me how that works, just interpreting the fraction as an inverse of some other number.
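For instance, interpreting "1/3 mod 7" as the inverse of 3 mod 7:

```python
inv = pow(3, -1, 7)        # modular inverse (Python 3.8+)
print(inv, (3 * inv) % 7)  # 5, and 3*5 = 15 = 1 (mod 7)
```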

Tuesday, August 31, 2010

1.1-1.2 and 3.1, due on September 1

Difficult: I find it difficult to comprehend how a code would be "unbreakable", as mentioned in section 1.2.3. It would seem that, given enough time, any code would be breakable, just like even extinct languages can sometimes be translated with much effort. I'm disappointed that the book says we will not be looking even a little more into the one-time pad system.

I'm familiar with all of the theorems in section 3.1 except for the Prime Number Theorem. I'm not sure how useful this theorem is (though it seems interesting). It seems that we only use it once here though, to get the approximate number of primes with 100 digits, so maybe a better understanding of this is not important enough for me to look it up?
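That estimate for 100-digit primes can be reproduced straight from the theorem, pi(x) being approximately x/ln(x):

```python
import math

# Prime Number Theorem: pi(x) ~ x / ln(x).
# Approximate count of primes with exactly 100 digits:
approx = 10**100 / math.log(10**100) - 10**99 / math.log(10**99)
print(f"{approx:.2e}")   # on the order of 10**97
```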

Reflective: I was expecting mainly computational Linear Algebra to be important in this class, but here right at the start we are using Abstract Algebra, a little ring theory. This caught my interest and makes me even more excited to be in this class. I love to see pure subjects used for "real" tasks like this. It helps me to make connections and remember the concepts I learned in those classes. Also, the first chapter, with history and motivation for studying cryptography, actually had me reading parts out loud to my wife. For example, the story of the Sahara outpost in WWII and the different types of attacks on cryptosystems.

Introduction, due on September 1

What is your year in school and major?
I am a second semester PhD student in mathematics.

Which post-calculus math courses have you taken?
This could take a while...
Differential Equations, Linear Algebra, Intro. to Topology, Proof Structures, Combinatorics, Intro. to Abstract Algebra, Knot Theory, Topological Chemistry, Advanced Calculus, Real Analysis (two semesters), Complex Analysis, Modern Algebra (two semesters), Point Set Topology, one semester of Algebraic Topology, Multi-linear Algebra, Abstract Algebra, and Differential Geometry.

I'm currently in Real Analysis (so I can be prepared for the PhD exam at this university), Differential Topology, and Intro. to Cryptography.

I might have missed a class or two there, it's been a long time since calculus.

Why are you taking this class?
I am taking this class because I have never had the opportunity to take a class in cryptography before and it seems interesting to me. I am also exploring different ideas for what I would like to do with myself after I am finished schooling, and wanted to check out the basics of what they do at the NSA.

Do you have experience with Maple, Mathematica, SAGE, or another computer algebra system?
Yes, I am proficient (but not expert) in using Mathematica. I used Maple for my Multi-Linear Algebra class, but I did not learn it well enough to feel comfortable with it.

Programming experience? How comfortable are you with using one of these programs to complete homework assignments?
I have programmed, and am currently programming, in Mathematica for my research. I also know some Java, but I do not think that is relevant to this class. I think I would be comfortable using Mathematica for my homework, but if there is an opportunity to learn Maple, I would welcome it.

Tell me about the math professor or teacher you have had who was the most and/or least effective. What did s/he do that worked so well/poorly?
The math professor that was most effective in my life was my calculus 1, proof structures, and knot theory teacher when I was an undergraduate. He is actually the reason that I am in mathematics today. His approach to classes was to provide us with the tools we would need (theorems, procedures, etc.) and allow us to figure out much of the math on our own. He provided interesting projects and problems, above and beyond what was found in the textbook, that helped illuminate the concepts for us.

My least effective professor attempted to use the Moore method, but failed miserably. He would assign us problems to present to one another in class, with little explanation of the tools we might have at hand. He would present in class sometimes, but usually the lectures were disconnected and frequently there were mistakes in what he was working on that caused us to have to go back almost to the beginning to fix. He frequently was late or did not show up for class.

Write something interesting or unique about yourself.
I can speak Hungarian, which is pretty unique among Americans. And I suppose kind of interesting.

If you are unable to come to my office hours, what times would work for you?
Your office hours are during one of my classes. Better times for me would be from 3:00-3:50 on MWF, or any time before 12:00 PM on MWF.