## Keynes: General Theory, chapters 4-6

[Summary by Jotun Hein]

It is 35 pages but again is not easy

## Chapter 4 “The Choice of Units”

The chapter is quite imprecise but JMK has some amusing formulations, like: "To say that net output today is greater, but the price level lower, than ten years ago or one year ago, is a proposition of a similar character to the statement that Queen Victoria was a better queen but not a happier woman than Queen Elizabeth – a proposition not without meaning and not without interest, but unsuitable for differential calculus"

## Chapter 5 “Expectation as Determining Output and Employment”

Keynes presents a very common-sense argument for the importance of psychology in economics. He draws a distinction between short-term and long-term expectation; the former is clear, but again very hard to measure exactly.

## Chapter 6 “The Definition of Income, Savings and Investment”

Clearly a major concern of the book is how to keep up consumption. What you earn but don't consume is either Saving or Investment, but I can't see the difference between the two. I can see a difference between putting money in the bank and under the mattress, but I am not sure that coincides with this distinction.

## Appendix on User Cost

For an outsider it is hard to understand why there is so much contention about seemingly simple concepts. And I haven't read anything I would call ingenious. The lack of mathematics surprises me, and a few well-chosen examples would really have helped the book. Today we could easily have made a few simulations, but this book came out the same year as Turing's most famous publication, so that was not an option.

This book might be much clearer upon a 2nd reading, if I ever get so far.

Next time is October 13th, 5.30 PM UK time. We will meet at University College, Oxford, and discuss chapters 7-9 (The Meaning of Savings and Investment + The Propensity to Consume I+II).

## Keynes: General Theory, introduction and chapters 1-3

[Summary by Jotun Hein]

It is 37 pages in a very large font. In principle it should be easy, but there was a lot of high-level description of the Classical Theory that I didn't understand too well. This seems like an ideal book for this kind of intense discussion group, and a lot of issues became clear as we went through the pages. I am sure even more would dawn upon us by reading it a second time. But it is also a very tough read, where we read single sentences aloud to each other and had quite a discussion of them.

## Chapter 1

The General Theory is 1 page and explains the word GENERAL: the book is in contrast to CLASSICAL theory. I am surprised that JMK can describe it as one thing. Not that I know much about it, but there are Smith, Malthus, Ricardo, Mill, Marx, Veblen, Marshall,….

## Chapter 2

The postulates of the classical theory is 19 pages and starts with 2 tenets that JMK says characterize the CLASSICAL:

“I. The wage is equal to the marginal product of labour

II. The utility of the wage when a given volume of labour is employed is equal to the marginal disutility of that amount of employment.”

We would have liked 2 illustrations here, showing how this defines two equilibrium points. JMK asserts that CLASSICAL does not allow for involuntary unemployment, but does allow frictional unemployment [due to a shift in production from, say, potatoes to bananas] and voluntary unemployment, when a worker prefers not to work because the salary is too low. I think JMK then shows that certain empirical consequences of I-II above [basically how the equilibrium point shifts] are generally violated.

There are lots of fun and interesting comments, like that Ricardo asserted that political economy could make no statement about the absolute level of production, but only about the distribution of value. There is an interesting discussion of value-wage versus money-wage. JMK uses a Prof. Pigou to illustrate all that is wrong with CLASSICAL. JMK says the CLASSICAL are as foolish as a Euclidean geometer who, faced with non-Euclidean geometry, would yell at the lines for not being parallel in the right way.

## Chapter 3

The Principle of Effective Demand is 12 pages devoted to a preview of the whole book, and the first simple equations appear. JMK defines the aggregate supply and demand functions and derives a series of properties of the system. We were a bit tired at this point and I should read it again.

More enjoyable sentences, like: "It could only live on furtively, below the surface, in the underworlds of Karl Marx, Silvio Gesell or Major Douglas."

On September 22nd we take Book II [Definitions and Ideas], chapters 4-6, which is about 29 pages. I said we would be done by X-mas, and somebody said that is what the generals said in 1914 about WWI. But I hope this JMK won't last 4 years or cause a similar loss of life.

## Summary: Russell Impagliazzo (1995) “A personal view of average-case complexity”

[Summary written by Jotun Hein]

This paper cannot be called a classic paper, but it was informative and opinionated and led us to the key papers in the field in no time. The paper consists of two quite separate parts:

A – a classification of the computational discrete world into 5 possible countries. I found it a bit pointless, and I could not see the relationship between the first 2 countries and the last 3.

**ALGORITHMICA**. Here NP=P

**HEURISTICA**. Here problems are polynomial averaged over all possible data sets of a given size, but intractable in the worst case.

**PESSILAND** [Danes might swap E & I]. Here there are problems that are hard on average, but no one-way (encryption) functions.

**MINICRYPT**. Here one-way functions exist, but public-key encryption is impossible (this must mean that the function can be inverted).

**CRYPTOMANIA**. And here public-key encryption is possible.

B – goes through the original definition of Average-Case Complexity by Levin. Unfortunately no real example is given of a problem where there is a difference between average-case and worst-case complexity. The ideas are quite understandable: an algorithm can only have average polynomial complexity if the sets of inputs on which it isn't polynomial shrink sufficiently fast as the input size grows. One tricky thing is that complexity is not defined in terms of the algorithms at hand but only in terms of the problem. There are a lot of things not discussed here, like how polynomial transformations of one problem to another skew the distribution on the possible data sets.
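For reference, the "shrinks sufficiently fast" condition has a compact standard form (this is my paraphrase of the usual statement of Levin's definition, not a quotation from the paper): an algorithm with running time $t(x)$ is polynomial on average with respect to an input distribution $\mu$ if there exists an $\varepsilon > 0$ such that

$$\sum_{x} \mu(x)\,\frac{t(x)^{\varepsilon}}{|x|} \;<\; \infty.$$

Inputs on which the algorithm runs in super-polynomial time must then carry rapidly vanishing probability mass.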

I think complexity based on distributions would be interesting to pursue in this case: generating data from the coalescent with mutation on a finite string. If you have long strings and a low mutation rate, you are in the perfect-phylogeny domain, where a linear algorithm exists. If you make the mutation rate infinite, you have a uniform distribution on all data sets, and the worst-case complexity is NP-complete without considering the distribution.

## Summary: Shor (1999) Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer

[Summary by Søren Riis]

On the morning of Wednesday 18th of May, a reduced select group met to discuss “Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer,” P. W. Shor [1].

Finding the factors of a given large number is a fundamental problem in computing called factoring. Shor's result is that factoring can be solved in roughly n^2 steps on a quantum computer (where n is the number of bits in the number we want to factor). In contrast, no polynomial-time factoring algorithm is known for an ordinary computer; the best known classical algorithms take super-polynomial time.

A traditional computer is based on bits. Each bit can be either zero or one. A quantum computer instead uses qubits. A qubit can take the value zero, one, or some superposition of zero and one.

The way Shor's algorithm works is that it first converts the problem of factoring into the problem of finding the period of a really long sequence. This is the central quantum-mechanical part of Shor's algorithm. Once the period is found, the result can then be used to factorise the number.
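The classical skeleton of this reduction can be sketched in a few lines of Python. Everything below runs on an ordinary computer; the quantum speedup comes entirely from replacing the brute-force `order` function with quantum period finding (the function names here are mine, for illustration, not Shor's):

```python
import math
import random

def order(a, n):
    """Multiplicative order r of a mod n, found by brute force.
    This is the expensive step that a quantum computer replaces
    with fast period finding."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n):
    """Classical skeleton of Shor's reduction from factoring to order finding."""
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                  # lucky guess: a already shares a factor with n
        r = order(a, n)               # the period-finding step
        if r % 2 == 0:
            y = pow(a, r // 2, n)     # a^(r/2) mod n
            f = math.gcd(y - 1, n)
            if 1 < f < n:
                return f              # non-trivial factor of n

print(shor_classical(15))             # prints 3 or 5
```

For a toy modulus like 15 the brute-force search is instant, but its cost grows exponentially in the bit length of n, which is exactly the bottleneck the quantum Fourier transform removes.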

Public key cryptography, i.e. the RSA algorithm we discussed in a previous meeting, relies on the assumption that factoring is computationally hard. RSA would be rendered insecure if Shor’s quantum factoring algorithm could be implemented.

After the meeting, I had some fun writing a simple Python program on my laptop that can simulate Shor's algorithm. Maybe at a later stage this program could be turned into a proper teaching tool.

[1] P. W. Shor, “Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer,” SIAM Rev., vol. 41, no. 2, pp. 303–332, Jan. 1999.

## Summary: Rivest, Shamir, and Adleman (1978) A Method for Obtaining Digital Signatures and Public-Key Cryptosystems

On the morning of April 27, we convened to read "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems" by Rivest, Shamir, and Adleman [1]. The following summary was written by Mathias C. Cornjäger:

This paper (first published in 1978) outlined the first practical public-key encryption system. This encryption system is still in wide use today and is known by the initials of the paper's three authors – RSA.

The paper itself is well written and readable. The first 4 sections give an introduction to the theory of "public-key cryptosystems" (a concept already known at the time the paper was written), after which the authors outline their proposed implementation of such a system, along with arguments for why they think the system ought to be hard to break.

The goal of a "public-key cryptosystem" is to establish a protocol for exchanging private messages between two parties via insecure channels of communication, without relying on shared secrets between the two parties or on a trusted third party. Such protocols had already been proposed earlier by Diffie and Hellman [2], who elaborated on how a class of encryption and decryption functions satisfying a list of four axioms could in theory be used to facilitate private messages and digital signatures. It is worth reiterating that the main contribution of this paper was to give the first example of a class of functions which satisfied these four axioms.

An interesting detail which the paper lays out quite clearly in these first sections is the difference between authentication and signing of messages. A message sent from Alice to Bob is authenticated if Bob is able to satisfy himself that the message he has just received was indeed sent by Alice (or at least by someone with knowledge of Alice's private key), and that it has not been modified in transit. The same message is regarded as signed only if Bob can convince not only himself, but also a third party (say a judge, should a contractual dispute arise), that the message he has received was in fact sent by Alice and that no one (including Bob himself) could have altered or forged that signed message.

In the proposed encryption scheme, a public-private key pair consists of two tuples (*e*,*n*) and (*d*,*n*), where *n* is the product of two large primes *p* and *q*. If these numbers satisfy the conditions that

*d* is co-prime to (*p*-1)(*q*-1), and *e*·*d* = 1 holds modulo (*p*-1)(*q*-1),

it can be proven that for any non-negative integer *M* < *n* the identity ((*M*^*e*)^*d*) = ((*M*^*d*)^*e*) = *M* holds modulo *n*. Encryption then consists of exponentiating a number by *e* modulo *n*, and decryption in exponentiation by *d* (again, modulo *n*).

The proposed method for encrypting a message intended for the owner of the public key (*e*,*n*) is then to encode the message as a number *M* < *n*, and from *M* to obtain the cipher *C* = *M*^*e* (mod *n*) (if the message is too long, it is broken up into blocks).

This cipher *C* can then be transmitted, and the intended recipient can decode it by computing *C*^*d* = (*M*^*e*)^*d* = *M* (mod *n*). Anyone else who obtains *C* will have a hard time recovering *M*, since they don't know *d*.

Since (*e*,*n*) is public, and *d* can be computed from (*e*, *p*, *q*) using the extended Euclidean algorithm, the security of the cryptosystem relies on large numbers being difficult to factor (it would otherwise be easy for a third party to obtain *p* and *q* from *n*). The problem of factoring integers is a well-studied problem in number theory, and no efficient method of doing so has been developed in the past 300 years (and it is unknown whether any exists). With this in mind, the authors argue that the system is unlikely to be broken any time soon.
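As a sanity check of the identities above, here is a toy RSA round trip in Python. The primes are deliberately tiny (real keys use primes hundreds of digits long), and the specific numbers are the common textbook example rather than anything taken from the paper:

```python
import math

# Toy RSA key generation with tiny primes (wholly insecure, illustrative only)
p, q = 61, 53
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # (p-1)(q-1) = 3120

e = 17                       # public exponent, co-prime to phi
assert math.gcd(e, phi) == 1
d = pow(e, -1, phi)          # private exponent: e*d = 1 (mod phi); here d = 2753

M = 65                       # message encoded as a number, 0 <= M < n
C = pow(M, e, n)             # encryption: C = M^e mod n; here C = 2790
assert pow(C, d, n) == M     # decryption: C^d mod n recovers M
```

Note that `pow(e, -1, phi)` (modular inverse, Python 3.8+) plays the role of the extended Euclidean algorithm mentioned above, and three-argument `pow` performs the modular exponentiation used for both encryption and decryption.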

The last sections of the paper briefly outline proposed algorithms for efficiently generating key pairs, as well as discussions of why other methods of obtaining *d* from (*e*,*n*) can be conjectured to be at least as hard as factoring *n*.

At the next meeting (May 18) we intend to review Shor's paper [3], in which he proposes a polynomial-time factoring algorithm (for quantum computers).

References:

[1] R. L. Rivest, A. Shamir, and L. Adleman, “A method for obtaining digital signatures and public-key cryptosystems,” Commun. ACM, vol. 21, no. 2, pp. 120–126, 1978.

[2] W. Diffie and M. E. Hellman, "New directions in cryptography," IEEE Trans. Inform. Theory, vol. IT-22, no. 6, pp. 644–654, Nov. 1976.

[3] P. W. Shor, “Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer,” SIAM Rev., vol. 41, no. 2, pp. 303–332, Jan. 1999.