
The models developed so far provide the theoretic basis for our study. We have clearly defined efficient markets and discussed a paradox in their construction, the joint hypothesis problem. We have used the language of probability theory to embed rigor into our definition. But we also find that much of the inspiration for these definitions arose from experience. Germinated by a broad intuition for the structure of asset returns and supported by practical experience with traders and trading, the theory is something of a hybrid: equal parts practice and principle. It was sculpted by finely tuned economic insight, and it either followed statistical study or grew alongside it. Mathematical rigor was one fiber in the thread of its development.

Historically, the empirical study of the efficient market hypothesis focused on whether prices fully reflect particular subsets of information, while the unpredictability of stock market returns played a lesser role. Weaker forms of the EMH were tested first. The first subset of information considered was that of past prices, and financial economists sought to determine the validity of the *weak form* efficient market hypothesis. The next subset of information considered concerned the speed at which asset prices adjusted to news and other information as well as past prices; these studies tested the *semi-strong form* of the hypothesis. Later work considered past prices, fundamental data, news, and insider information, in an attempt to study the *strong form* efficient market hypothesis.

Today, support for the EMH is not without caveat; its intemperate defense is akin to ideology rather than social science. Substantial empirical and theoretic research has demonstrated its weaknesses. But neither has it been removed nor replaced. Revisions have strengthened it and endowed it with nuance. Modern perspectives reiterate its usefulness as a central tool in the study of financial markets, while descriptions of its failings sharpen the edges of its impression. In fact, it holds up rather well to a variety of introductory tests, as we will soon see. (We will defer criticism of these tests until later in order to preserve the chronology of the literature.)

**1.1. Random Walks and Weak Form Tests**

While a number of tests of the efficient market hypothesis centered on the random walk of asset price returns, many of these results can, in fact, be interpreted as tests of *fair game* expected return models instead. Because it is more general, and as a result more encompassing, this interpretation supplies a better perspective on the breadth of the EMH.

Around the 1970s, much of the empirical literature focused on the serial covariance of asset returns, in particular, for random walk models. If, in fact, asset price changes obey a random walk, then by definition, their serial covariances must be zero: each asset price change is independent of the others, hence their covariance is zero. Fair game models behave similarly.

If $x_t$ and $x_{t+1}$ are random variables, their serial covariance is given by

$$\operatorname{cov}(x_{t+1}, x_t) = E\big[(x_{t+1} - E(x_{t+1}))\,x_t\big] = \int_{x_t} x_t\, E(x_{t+1} \mid x_t)\, f(x_t)\, dx_t,$$

where $f$ is a density function. Now, if $x_{t+1}$ is a fair game random variable, $E(x_{t+1} \mid x_t) = 0$, so its unconditional expectation is zero as well, and

$$\operatorname{cov}(x_{t+1}, x_t) = \int_{x_t} x_t \cdot 0 \cdot f(x_t)\, dx_t = 0;$$

in other words, the serial covariance is zero. It follows immediately that the observations of a fair game variable are linearly independent (that is, uncorrelated).
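A quick numerical illustration, with simulated data and parameter choices of our own, of the fact that a fair game sequence exhibits (approximately) zero sample serial covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# A fair game variable: i.i.d. mean-zero innovations.
x = rng.standard_normal(100_000)

# Sample serial covariance at lag 1: cov(x_t, x_{t+1}).
serial_cov = np.cov(x[:-1], x[1:])[0, 1]

# For a fair game the population serial covariance is exactly zero;
# the sample estimate should be very close to zero.
print(abs(serial_cov) < 0.02)  # True
```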

However, the fair game model does not imply that the serial covariances of one-period returns are zero. As before, let $z_{j,t+1}$ denote the deviation of the return on asset $j$ at time $t+1$ from its expected return. That is, let

$$z_{j,t+1} = r_{j,t+1} - E(\tilde r_{j,t+1} \mid \Phi_t).$$

Then the serial covariance of subsequent one-period returns is given by

$$\operatorname{cov}(r_{j,t+1}, r_{j,t}) = \int_{r_{j,t}} \big[E(\tilde r_{j,t+1} \mid r_{j,t}) - E(\tilde r_{j,t+1})\big]\big[r_{j,t} - E(\tilde r_{j,t})\big]\, f(r_{j,t})\, dr_{j,t}. \qquad (1)$$

But the fair game property $E(\tilde z_{j,t+1} \mid \Phi_t) = 0$ does **not** imply that $E(\tilde r_{j,t+1} \mid r_{j,t}) = E(\tilde r_{j,t+1})$ in (1). In other words, deviation from the conditional expectation at a specific time period may be a fair game, but the conditional expectation itself may depend on the previous period. In the stock market, this does not seem to pose a problem; empirical results demonstrate that the variation introduced by ignoring this fact is trivial compared to other sources of variation in returns. In the market for treasury bills, however, more care is required, as we will later discuss.

Empirical results confirm that the serial correlations between successive changes in the natural logarithm of price for each of the thirty stocks of the Dow Jones Industrial Average from about the end of 1957 to the end of 1962 are always close to zero.
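The serial-correlation computation itself is straightforward. The sketch below, with a simulated geometric random walk standing in for the historical price series, computes the lag-1 correlation of successive log price changes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated daily log prices for one stock: a random walk with drift
# (a stand-in for the Dow stocks studied in the 1957-1962 sample).
log_prices = np.cumsum(0.0005 + 0.01 * rng.standard_normal(1250))
log_returns = np.diff(log_prices)

# Lag-1 serial correlation of successive changes in log price.
r = np.corrcoef(log_returns[:-1], log_returns[1:])[0, 1]
print(round(r, 4))  # close to zero for a random walk
```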

But we soon see problems. Fama says “looking hard, one can probably find evidence of statistically ‘significant’ linear dependence [in these results].” The correlation coefficients obtained from these studies indicate that a linear relationship with lagged price changes may account for only a trivially small percentage of the variation in the current price change. One of course wonders what fraction of the variation a non-linear model might explain.

Other research analyzes the success of trading rules that would profit from serial correlation in price changes. Sidney Alexander’s paper, *Price Movements in Securities Markets*, exhibits the failure of a number of such trading systems, such as the filter technique. Although he finds evidence against the random walk hypothesis (the independence assumption appears unlikely), his results largely confirm the fair game efficient market hypothesis.
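A minimal sketch of a filter-style trading rule of the kind Alexander studied; the threshold, the simulated prices, and the bookkeeping are our own illustrative assumptions, not his exact procedure:

```python
import numpy as np

def filter_rule(prices, x=0.05):
    """x% filter: go long after a rise of x from the last trough,
    go flat after a fall of x from the last peak. Returns the gross
    return of the strategy and of buy-and-hold."""
    position = 0          # 0 = flat, 1 = long
    extreme = prices[0]   # running peak (if long) or trough (if flat)
    strat = 1.0
    for prev, price in zip(prices[:-1], prices[1:]):
        if position == 1:
            strat *= price / prev
            extreme = max(extreme, price)
            if price <= extreme * (1 - x):   # fell x% from the peak
                position, extreme = 0, price
        else:
            extreme = min(extreme, price)
            if price >= extreme * (1 + x):   # rose x% from the trough
                position, extreme = 1, price
    return strat, prices[-1] / prices[0]

# Simulated geometric-random-walk prices (illustrative parameters).
rng = np.random.default_rng(2)
prices = 100 * np.exp(np.cumsum(0.0002 + 0.01 * rng.standard_normal(2000)))
strat, hold = filter_rule(prices)
print(strat, hold)
```

On random-walk prices such a rule has no systematic edge, and transaction costs (ignored above) only worsen its performance, consistent with Alexander's findings.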

**Other Tests of Independence in the Random Walk Literature** The evidence demonstrates that it is preferable to consider the random walk hypothesis as a special case of the fair game model. If, within the fair game model, one-period returns are independent and identically distributed (i.i.d.), then the random walk hypothesis will also hold. As a special case, however, its conditions are more stringent and its assumptions more sensitive. We expect that empirical data will violate the random walk hypothesis often, yet both its robustness and its failures will serve to expose the underlying structure of the market. In a sense, it will help delineate the concrete structure within markets from the ethereal and vague.

One departure from the pure independence assumption of the random walk hypothesis concerns the expected magnitude of daily price changes. Large price changes tend to be followed by large price changes with some regularity. The direction of these changes, however, is seemingly random. This contradicts the random walk hypothesis, but not the fair game model. It is believed that, as information enters markets, participants may overreact (or fail to react). Since subsequent changes are random in direction, this reaction has little bearing on the ultimate trajectory of prices. Interestingly enough, this is sufficient for the expected return efficient market model.
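This pattern, unpredictable direction but persistent magnitude, is easy to exhibit numerically. In the sketch below (a toy volatility process of our own construction), the lag-1 correlation is near zero for returns but clearly positive for their absolute values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Returns with volatility clustering: a slowly alternating two-regime
# volatility. Direction is random (a fair game), magnitude persists.
n = 50_000
vol = np.where(np.sin(np.arange(n) / 500.0) > 0, 0.02, 0.005)
returns = vol * rng.standard_normal(n)

def lag1_corr(z):
    return np.corrcoef(z[:-1], z[1:])[0, 1]

print(round(lag1_corr(returns), 3))          # ~0: direction unpredictable
print(round(lag1_corr(np.abs(returns)), 3))  # positive: magnitude persists
```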

**Distributional Evidence** Previous results have established that whatever dependence seems to exist in series of historical returns, it appears virtually impossible to use this dependence to make reliable, profitable predictions about the future evolution of returns. There is little evidence that would overturn the random walk hypothesis, at least as an approximation.

At the time of the publication of Fama’s review paper, much of the current empirical research concerned the statistical distribution of price changes. Even now, understanding the distribution of returns remains important. The shape that returns take strongly influences the statistical tools available for research, the theoretic tools economists regularly use to study the structure of financial markets, and the interpretation of these results.

Louis Bachelier was the first to propose that price changes adhered to a normal distribution. He assumed that price changes from moment to moment were independent and identically distributed (i.i.d.) random variables with finite variance. If transactions are numerous and uniformly distributed in time, then an application of the Central Limit Theorem implies that price changes will be normally distributed. Subsequent studies were far less certain. Osborne, Moore, and Kendall found that, while prices appeared somewhat normally distributed, the tails of the distribution function were larger than expected. The distribution of price changes was, in fact, fat-tailed: extreme price changes occurred far more often than the normal distribution would indicate. This particular observation, while known to a number of financial economists in the 1970s, has gained notoriety in recent years. Fat-tailed distributions (and large deviation theory, to which they are related) underlie much of the popular and academic discussion of stock market crashes and large-scale political events. No bolder or more vociferous a writer has emerged than Nassim Nicholas Taleb. His books, *The Black Swan* and *Fooled by Randomness*, discuss these distributions with bravado.

Taleb’s oft-cited mathematician, Benoit Mandelbrot, became influential among financial statisticians in the 1970s precisely for his weaker description of price distributions. He proposed that price changes obeyed a more general category of distribution, the *stable class* of distributions, of which the normal distributions are a subset. Unfortunately, economists have been reluctant to accept his results due to the wealth of techniques available for studying series of normally distributed random variables; the infrastructure available in probability theory itself permits the application of a variety of limit theorems and laws of large numbers. When contrasted with the “relative paucity” of techniques for non-normally distributed random variables, the reason for this preference becomes clear. More salient, however, is the concern that the inability of probability theorists to tame non-normal random variables may hint at a similar inability of financial economists to thoroughly tame randomness within markets.
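The fat-tail phenomenon can be illustrated with a simple simulation. Here a Student-t variable (a common heavy-tailed stand-in of our choosing, rather than Mandelbrot's stable laws, whose non-normal members lack finite variance) produces extreme moves far more often than a normal variable of the same variance:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

normal = rng.standard_normal(n)
fat = rng.standard_t(df=3, size=n)  # heavy-tailed alternative
fat /= fat.std()                    # rescale to (roughly) unit variance

# Frequency of "extreme" moves beyond four standard deviations.
normal_tail = (np.abs(normal) > 4).mean()
fat_tail = (np.abs(fat) > 4).mean()
print(normal_tail)  # ~6e-5 under normality
print(fat_tail)     # orders of magnitude larger
```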

**Fair Game Models in the Treasury Bill Market** As we saw earlier, fair game properties of the general expected return models apply to the variable

$$z_{j,t+1} = r_{j,t+1} - E(\tilde r_{j,t+1} \mid \Phi_t).$$

For common stocks it is appropriate to assume $E(\tilde r_{j,t+1} \mid \Phi_t) \approx E(\tilde r_{j,t+1})$, since the variation in returns is so much greater than changes in the expected value of $\tilde r_{j,t+1}$ that the latter can be safely ignored. This does not hold, however, for treasury bills. Testing the T-Bill market for efficiency will require more nuance, and an understanding of the economic theory that describes the evolution of treasury returns over time.

Richard Roll uses a variety of models in order to test for efficiency. In these models, describes the rate observed from the term structure at time for one week loans to begin at ; can be thought of as the “futures” rate of these bills. This implies

however, the random variable is observed a period earlier, at the time . Furthermore, is the “Liquidity premium” in . That is,

the one-week “futures” rate at time is equal to the expected “spot” rate at time for plus a “liquidity premium.” In Roll’s theories of term structure, we also have

Empirical tests demonstrate that the liquidity premium is generally non-zero, and is slightly more likely to be positive, indicating that investors are paid a premium for bearing interest rate uncertainty. Moreover, under models in which the liquidity premium is non-zero, Roll’s testing indicates that the market for Treasury bills is nearly efficient; the deviations from expected returns are serially independent.

**Tests of a Multiple Security Expected Return Model** So far, our discussion has concerned so-called “single security” tests, in which we’ve sought to determine the accuracy of an individual asset’s price; we have considered whether profitable trading systems exist for single securities. Our answers have taken a variety of forms, and we have yet to explore empirical results of the other versions of the EMH. We are led immediately to another question: are portfolios of securities necessarily efficient? Are the returns of groups of securities appropriately priced in relation to each other? These questions motivated Sharpe and Lintner in their research; this work eventually became the foundation for modern portfolio theory.

Consider the expected return of security $j$ from time $t$ to $t+1$, and suppose $r_{f,t+1}$ and $\tilde r_{m,t+1}$ are the returns on a hypothetical risk-free asset and on a “market portfolio” respectively, where the latter is composed of every asset in our market, weighted in proportion to their total market value. Then,

$$E(\tilde r_{j,t+1} \mid \Phi_t) = r_{f,t+1} + \left[\frac{E(\tilde r_{m,t+1} \mid \Phi_t) - r_{f,t+1}}{\sigma(\tilde r_{m,t+1} \mid \Phi_t)}\right] \frac{\operatorname{cov}(\tilde r_{j,t+1}, \tilde r_{m,t+1} \mid \Phi_t)}{\sigma(\tilde r_{m,t+1} \mid \Phi_t)},$$

where $\sigma(\tilde r_{m,t+1} \mid \Phi_t)$ is the standard deviation of the return on the market portfolio, and the covariance is as written. As written, it is clear that the model is limited to returns with finite variance, but later research has extended it to series of returns with infinite variance and stable distributions; in addition, it has been extended to multi-period settings as well.

In this model, each investor holds some combination of the risk-free asset and the market portfolio; as a result, “the risk of an individual asset can be measured by its contribution to the standard deviation of the return on the market portfolio.” In other words, the risk of an individual asset is equal to

$$\frac{\operatorname{cov}(\tilde r_{j,t+1}, \tilde r_{m,t+1} \mid \Phi_t)}{\sigma(\tilde r_{m,t+1} \mid \Phi_t)}.$$

Indeed, since

$$\sigma(\tilde r_{m,t+1} \mid \Phi_t) = \sum_j x_j\, \frac{\operatorname{cov}(\tilde r_{j,t+1}, \tilde r_{m,t+1} \mid \Phi_t)}{\sigma(\tilde r_{m,t+1} \mid \Phi_t)},$$

where $x_j$ is the market weight of asset $j$, it follows immediately that the risk of an individual asset is equal to its “weight” in the sum above.
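The decomposition above can be checked numerically. In the sketch below (hypothetical returns and value weights of our own choosing), each asset's weighted covariance with the market portfolio sums exactly to the market portfolio's variance:

```python
import numpy as np

rng = np.random.default_rng(5)
n_assets, n_periods = 4, 60

# Hypothetical monthly returns and value weights (illustrative only).
returns = 0.01 + 0.04 * rng.standard_normal((n_periods, n_assets))
weights = np.array([0.4, 0.3, 0.2, 0.1])
market = returns @ weights

# Each asset's contribution to market variance: w_j * cov(r_j, r_m).
contributions = weights * np.array(
    [np.cov(returns[:, j], market)[0, 1] for j in range(n_assets)]
)

# The contributions sum to the variance of the market portfolio.
print(np.isclose(contributions.sum(), market.var(ddof=1)))  # True
```

The identity holds exactly by the bilinearity of covariance, not merely in large samples.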

The work of Fama, Fisher, Jensen, and Roll continued with this line of research, developing the so-called “market model” given by

$$r_{j,t} = \alpha_j + \beta_j r_{M,t} + u_{j,t},$$

where, much like before, $r_{j,t}$ is the rate of return on security $j$ at time $t$, $r_{M,t}$ is the corresponding rate of return on the market index $M$, $\alpha_j$ and $\beta_j$ are fixed coefficients for each security $j$, and $u_{j,t}$ is a random noise term with mean equal to zero.

Econometric analysis demonstrates that the equation above is well specified as a linear regression model: the parameters remain fairly constant over long periods of time, and the market returns and disturbances appear to be serially independent. These are the very criteria of the expected returns efficient market model stated initially. Studies find that the data agree with the market model specified in such a way that we can be fairly confident securities are efficiently priced in relation to one another. Furthermore, taking the expectation of both sides, we find

$$E(\tilde r_{j,t} \mid r_{M,t}) = \alpha_j + \beta_j r_{M,t},$$

a lucid description of the evolution of expected returns.
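A least-squares sketch of the market model on simulated data (the coefficients, volatilities, and sample size are all illustrative assumptions of our own):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 120

# Simulated market index returns and a security obeying the market model
# r_j = alpha + beta * r_M + u, with serially independent noise u.
r_m = 0.008 + 0.045 * rng.standard_normal(n)
alpha, beta = 0.002, 1.3
r_j = alpha + beta * r_m + 0.02 * rng.standard_normal(n)

# Least squares estimates of the market model coefficients.
X = np.column_stack([np.ones(n), r_m])
(alpha_hat, beta_hat), *_ = np.linalg.lstsq(X, r_j, rcond=None)
print(round(alpha_hat, 4), round(beta_hat, 2))  # near 0.002 and 1.3
```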

Simple arithmetic, then, is enough to relate these results to studies of the Sharpe-Lintner model. Rearranging , we find

since , that is, the risk-free rate is an element of the information set at time , we note that

and

A few strong assumptions are necessary to construct the relationship. If the variance and covariance above are constant for all , then ; in fact, in this case, is the least squares estimate of . Finally, if is constant over and can be approximated by some suitable index , with

the market model and the Sharpe-Lintner model are equivalent. Further econometric studies of the Sharpe-Lintner model would then reiterate that not only do these models describe portfolio evolution well, but also that securities are priced appropriately in relation to one another.

**Definition 2.1** *A function on an open set is meromorphic if there exists a discrete set of points such that is holomorphic on and has poles at each . Furthermore, is meromorphic in the extended complex plane if is either meromorphic or holomorphic at . In this case we say that has a pole or is holomorphic at infinity.*

By collecting results from the previous section, we are immediately led to the following proposition regarding the Laurent expansions of complex valued functions.

**Proposition 2.2** *Let be the discrete set of singularities of a complex function where is an open set in . For a fixed , suppose the Laurent expansion for in an annulus about is given by . Then, *

- The function has a removable singularity at if and only if for all .
- The function has a pole at if and only if there exists with such that for all ; that is, the Laurent expansion of about has only finitely many negative terms.
- The function has an essential singularity at if and only if the Laurent expansion of about has infinitely many negative terms.
- Furthermore, is meromorphic on the extended complex plane if and only if there exists such that for .

*Proof:* The first three results follow immediately from the previous section. For the final result, consider the function . Suppose has an essential singularity at infinity. Then has an essential singularity at the origin, and by item 3 its Laurent expansion must have infinitely many negative terms. Substituting for , it follows that the expansion of must have infinitely many positive terms. The other direction of this statement is similar, and our conclusion follows.

These results lead to an impressive description of functions meromorphic in the extended complex plane.

**Theorem 2.3** *Suppose is a meromorphic function in the extended complex plane. Then is a rational function.*

*Proof:* Let be the set of poles of the function . The function must be analytic in a deleted neighborhood of the origin, hence is analytic in a deleted neighborhood of ; the remaining region of the complex plane can contain only finitely many singularities, therefore must be finite. Let denote the principal part of the Laurent expansion of at the pole . In addition, let denote the principal part of the Laurent expansion of at the point at infinity. Our study of the singularities of complex functions and the Laurent expansion demonstrates that for all , the principal parts are polynomials of finite length in ; the same holds for . Furthermore, we have

is entire and bounded, hence constant. It follows that is a rational function, as desired.
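A worked instance of the theorem, with a function of our own choosing, may make the argument concrete:

```latex
% Take f(z) = \frac{1}{z-1} + \frac{2}{(z+1)^2} + z.
% It has a simple pole at z = 1 and a pole of order 2 at z = -1, and since
% f(1/w) has a simple pole at w = 0, f has a pole at infinity with
% principal part z. Subtracting all three principal parts leaves the
% bounded entire function 0, so f is the sum of its principal parts and
% is rational:
f(z) \;=\; \frac{z(z-1)(z+1)^2 + (z+1)^2 + 2(z-1)}{(z-1)(z+1)^2}.
```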

A critical result in the theory, enabled in part by the Laurent expansion, allows one to calculate complex line integrals with relative ease. First, we note that there is little in the development of Cauchy’s integral formula requiring that a holomorphic function be defined by its integral about a *circle* in particular. In addition, Goursat’s theorem can easily be extended to rectangles and more complex polygons as well. These results suggest that we can extend Cauchy’s theorem, and therefore much of our development of the theory, to arbitrary rectifiable closed curves. Doing so will enable us to study more challenging questions with a greater number of tools at our disposal; for example, the desired extension will allow us to analytically continue Riemann’s zeta function to the entire complex plane from the small set on which it is originally holomorphic. We begin with a description of the region enclosed by a rectifiable closed curve.

**Definition 2.4** *Given a rectifiable closed curve $\gamma$ and a point $z_0$ not lying on $\gamma$, the winding number of $\gamma$ about $z_0$ is given by the formula*

$$W_\gamma(z_0) = \frac{1}{2\pi i} \oint_\gamma \frac{dz}{z - z_0}.$$

**Theorem 2.5** *Under the assumptions above, the following statements regarding the winding number hold: *

- The map is constant inside the connected component of containing .
- as . In other words, is the unbounded, connected component of .

* *

Before we proceed with a proof of the theorem, we note that the definition of the winding number resembles the complex logarithm. Indeed, the properties of the complex logarithm motivate the definition. Without excising a branch of the complex plane, the complex logarithm is not a well defined function. Suppose and write for some and . Then,

Before we proceed with a proof of the theorem, we note that the definition of the winding number resembles the complex logarithm. Indeed, the properties of the complex logarithm motivate the definition. Without excising a branch of the complex plane, the complex logarithm is not a well defined function. Suppose and write for some and . Then,

- To simplify our argument, suppose is smooth. If it is not, we subdivide into disjoint intervals such that is left and right differentiable at the subdivision points. Our argument then proceeds with slight modification. Consider and let
Then, is continuous on and differentiable with . We hope to show that for some . To this end, we write

Therefore, such that . In other words, . Since is closed, , hence it follows that

Finally, we have which implies ; that concludes the proof of our theorem.

- We first show that is continuous inside the connected component of . This, paired with the fact that is integer-valued on the connected component, proves that it must be constant there. Consider both inside the connected component of . We have,
Since stays away from both , we find that the denominator of the integrand is bounded, hence

where and is the length of the curve . Our assertion follows immediately.

- Suppose is an element of the unbounded connected component of and that is large. Then
must tend to zero since becomes arbitrarily large. Again, the assertion is immediate.

We are now able to prove a general version of the celebrated *residue theorem*. Our derivation of the *residue formula* and proof of the theorem will be based on the equivalence of a collection of four statements. After exhibiting these equivalences, we will prove the general *Cauchy integral theorem* for simply connected sets, which will imply the first of these statements. As a result, the residue theorem will hold on simply connected sets, and we will use it to state and prove a variety of properties of analytic functions.

**Definition 2.6** *Suppose is a meromorphic function on the set with poles at . Let *

* The coefficient is called the residue of at and is denoted *

**Theorem 2.7** *Suppose has a simple pole, (i.e. a pole of order ) at . Then *

If instead has a pole of order at , then

* *

*Proof:* Both statements follow directly from the Laurent expansion of at .
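A short worked example (our own) applying both residue formulas:

```latex
% Let f(z) = \frac{e^z}{z(z-1)^2}, with a simple pole at 0 and a pole of
% order 2 at 1. By the simple-pole formula,
\operatorname{Res}_{z=0} f
  \;=\; \lim_{z \to 0} z \cdot \frac{e^z}{z(z-1)^2}
  \;=\; \frac{e^0}{(0-1)^2} \;=\; 1.
% By the higher-order formula (order 2),
\operatorname{Res}_{z=1} f
  \;=\; \lim_{z \to 1} \frac{d}{dz}\!\left[(z-1)^2 \frac{e^z}{z(z-1)^2}\right]
  \;=\; \lim_{z \to 1} \frac{e^z (z-1)}{z^2} \;=\; 0.
```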

**Theorem 2.8 The residue theorem** *Suppose is a rectifiable, closed curve in an open set , is holomorphic on and is meromorphic on . Then, the following statements are equivalent: *

* *

*Proof:* First, we prove that statement implies statement . Let

Then, we find that

Since is holomorphic and its derivative is bounded, must be holomorphic in , hence by statement , .

On the other hand, let . Clearly is holomorphic, and statement then implies

As , it follows that statement implies .

Next, suppose . Then the function is holomorphic on the set . By statement , we find that

hence ; that is, statement implies statement .

Now we prove that statement implies statement . Often this equivalence itself is known as the Residue theorem.

**1. Introduction**

Over a series of two posts, we will discuss the consumption capital asset pricing model (CCAPM) and some of its results concerning expected asset price returns. In particular, we derive a range within which asset returns lie. The model we will discuss describes the simplified behavior of a trader who chooses to consume a portion of his income and invest the rest in market assets over two time periods of any length (e.g., days or months).

In this post, we describe a simplified setting of the model and derive a formula for prices arising from the marginal cost and marginal benefit of the trader’s investment decisions. This equation will then show us how prices are affected by preferences. Furthermore, the resultant equation turns out to be robust beyond the initial simple setting of the model. Our exposition of the model borrows from John Cochrane’s book, *Asset Pricing*.

**2. Investor Consumption Model**

We begin our discussion of the **Consumption Capital Asset Pricing Model (CCAPM)** by first modeling our trader with a **utility function** defined over current and future values of consumption:

To simplify our discussion, we assume that there exists only one consumption good, denoted by , and that all other asset values are quoted in terms of it. The utility function represents the “fundamental pleasure” that the investor derives from consuming the good in question. The period utility function is assumed to be increasing, concave, and continuously differentiable (i.e. smooth). That the function is increasing describes, somewhat expectedly, the desire for greater consumption rather than lesser consumption, while concavity is a result of the declining marginal value of greater consumption. In other words, our investor will be happy to consume more rather than less, but enjoys each additional unit at a decreasing rate.

The second term in (1) captures the trader’s valuation of uncertain future consumption. Here, we assume that the trader discounts his expected utility of future consumption at the rate . This captures the tendency of individuals to prefer present consumption to future consumption; the expected utility term, , represents the pleasure that the trader derives from his uncertain future consumption.

In addition, our investor is subject to two constraints. First, his or her income, consisting of an endowment, , is to be spent on either consumption or investment in their choice of available assets:

or

where is the amount of the consumption good at time ; is the endowment in terms of the consumption good at time ; is the number of purchased shares of asset at time ; and is the price of asset at time .

Second, in the subsequent period, , he or she will consume the income received from these investments together with their endowment:

where is the payoff from their investment in asset at time . Here, we are allowing for assets to pay dividends, so that where denotes the dividend of asset paid out at time .

To summarize, our objective, or optimization problem is:

- is the trader’s **consumption** at date 
- is the trader’s **subjective discount factor**
- is the trader’s **endowment** at time 
- is the **price** of asset in terms of the consumption good at time 
- is the **payoff** of asset , where is a random variable
- is the amount of asset that the trader holds (his **shares**)

**3. First Order Conditions and CCAPM Pricing Equation**

Since our utility function is strictly increasing, concave, and continuously differentiable, this optimization problem satisfies the Kuhn-Tucker conditions and we are ensured a unique solution to our objective problem which satisfies the first order condition (FOC):

To elucidate further, we have the following **interpretation of (3)**: is the loss in utility if the investor buys another unit of the asset, and is the gain in discounted, expected utility he obtains from the extra payoff at , . The trader continues to buy or sell the asset until the marginal loss equals the marginal gain.

Rearranging (3), we get (4). Pricing equation (4) is of striking importance: *it supplies a relationship between prices and the preferences of individual traders.*

Now, we define the **Stochastic Discount Factor (SDF)**, or **Marginal Rate of Substitution (MRS)**, as

and rewrite our pricing equation (4) as

The SDF has an intuitive economic interpretation: is the rate at which our investor is willing to substitute consumption at time with consumption at time .
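A minimal numerical sketch of the pricing equation $p = E[mx]$ under power utility; every number below is an illustrative assumption of our own, not a calibrated value:

```python
import numpy as np

# Power utility u(c) = c^(1-gamma) / (1-gamma), so u'(c) = c^(-gamma).
beta, gamma = 0.95, 2.0
c_today = 1.0

# Two equally likely states tomorrow: boom and bust (illustrative).
probs = np.array([0.5, 0.5])
c_tomorrow = np.array([1.10, 0.95])  # consumption in each state
payoff = np.array([1.20, 0.80])      # asset payoff in each state

# Stochastic discount factor m = beta * u'(c_{t+1}) / u'(c_t).
m = beta * (c_tomorrow / c_today) ** (-gamma)

# Price today: expectation of the discounted payoff, p = E[m x].
price = np.sum(probs * m * payoff)
print(round(price, 4))  # 0.8921
```

Note that the bust-state payoff, though smaller, is weighted more heavily: the SDF is largest exactly when consumption is scarce.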

**4. Next Post: Asset Returns and Risk**

In our next post, we will discuss asset returns, and reformulate the results in this post in terms of them. We will discuss how risk affects returns, and discuss risk corrections.

The basic properties of complex numbers will be assumed, allowing us to begin with the definition of a holomorphic (or complex-differentiable) function, the central notion in our study of complex analysis.

**Definition 1.1** *Suppose $\Omega \subseteq \mathbb{C}$ is an open set and $f : \Omega \to \mathbb{C}$. We say $f$ is holomorphic (or complex-differentiable) at $z_0 \in \Omega$ if the limit*

$$f'(z_0) = \lim_{h \to 0} \frac{f(z_0 + h) - f(z_0)}{h}$$

*exists. We say $f$ is holomorphic on $\Omega$ if it has this property for all $z_0 \in \Omega$.*

We can rewrite this formula in terms of the real and imaginary parts of to surmise the relationship between complex differentiability and real analytic differentiability. Let with and , and write where . Then,

We first notice that this is stronger than the differentiability of the real map in . In the real, multivariable case, the derivative of this map is a linear operator, namely, the Jacobian, ; in our equation above, the matrix on the right hand side is . Clearly, it is endowed with a distinct structure summarized in the following proposition.

**Proposition 1.2: The Cauchy-Riemann equations** *The function $f = u + iv$ is holomorphic at $z_0 = x_0 + i y_0$, with derivative $f'(z_0)$, if and only if the functions $u$ and $v$ are differentiable at $(x_0, y_0)$ and*

$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$$

*both hold.*
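A quick finite-difference check (our own example) of the Cauchy-Riemann equations for the holomorphic function $f(z) = z^2$, whose real and imaginary parts are $u = x^2 - y^2$ and $v = 2xy$:

```python
import numpy as np

# Real and imaginary parts of f(z) = z^2.
def u(x, y): return x**2 - y**2
def v(x, y): return 2 * x * y

x0, y0, h = 0.7, -0.3, 1e-6

# Central-difference partial derivatives at (x0, y0).
u_x = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
u_y = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
v_x = (v(x0 + h, y0) - v(x0 - h, y0)) / (2 * h)
v_y = (v(x0, y0 + h) - v(x0, y0 - h)) / (2 * h)

# Cauchy-Riemann: u_x = v_y and u_y = -v_x.
print(np.isclose(u_x, v_y), np.isclose(u_y, -v_x))  # True True
```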

**Remark** There exists a well-known isomorphism between and given by in which it is clear that multiplication by a complex number corresponds to a dilation and rotation via the matrix . Then the above formula indicates that, infinitesimally, a holomorphic map acts like a dilation and a rotation. In particular, if is a holomorphic map such that , then preserves the angle of curves passing through . Such a map is called a *conformal* (angle-preserving) map. We will discuss conformal maps in further detail later in these notes.

**Definition 1.3** *A parametric curve is called rectifiable if *

**Theorem 1.4** If such that is continuous, is open and is rectifiable, then

- denoted by exists.
- If then .
- Given a continuous bijection we have , noting that reversing the direction of the curve changes the sign on the right hand side of the formula above.

*Proof:* These are immediate consequences of the theory of Riemann integration in one variable and will not be reproduced here.

**Theorem 1.5 Goursat’s theorem** *Suppose is a holomorphic function from the open set to the complex numbers and is a solid, closed triangle inside . Then .*

*Proof:* We begin by giving Cauchy’s proof of Goursat’s theorem, an immediate result of Green’s theorem when is assumed continuously differentiable. Suppose ; then, by Green’s theorem, we have that

An application of the Cauchy Riemann equations demonstrates that both integrals in the sum are equal to zero, hence as desired.

The classical argument by Goursat uses a stopping-time technique that is both elegant and powerful, albeit not as simple as Cauchy’s proof. On the other hand, it does not require that and is a more general result. (We will eventually learn, however, that a holomorphic function is, in fact, analytic.) To start, we note that it suffices to show that . Next, we subdivide our original triangle into four smaller triangles by bisecting each leg of the triangle and connecting the midpoints with straight lines. This yields four triangles similar to each other and to our original triangle . Denote these triangles and in any order. The line integrals over these triangles cancel over their shared legs, hence . We can then divide each new triangle into four smaller triangles by the process above and claim that . To prove our claim, suppose, on the contrary, that it does not hold. Then, ; moreover, . Since the are closed and bounded in , they are compact, hence . By hypothesis, is holomorphic at , so that . Then, for some ,

where we denote the perimeter of by , and where the integral on the right-hand side vanishes because both terms have primitives, or complex antiderivatives, in . In particular, we have for sufficiently large . This yields the desired contradiction and proves our claim.

Finally, let be the collection of equivalent triangles formed after subdividing our original triangle times. Clearly, they obey the inequality for all . It follows that

as desired.

**Remark** It is essential that the triangle be contained in . Consider the function integrated along the perimeter of a triangle containing the origin. Then is holomorphic in , and . This theorem can be extended with ease to the perimeter of any other polygon, or to a rectifiable closed curve, with one caveat: we must be able to distinguish the inside of such a curve from the outside. More precisely, the curve must be two-sided. We are then free to approximate such a curve by a polygon, which we can subsequently approximate with triangles.

Goursat’s theorem leads us to one of the most powerful results in complex analysis, the Cauchy integral formula. This result allows us to equate holomorphic functions on sets with integrals over the boundary of these sets. With this isomorphism, one can conclude certain regularity results, in particular, the notion that holomorphic functions are analytic. We begin by considering holomorphic functions defined on sets bounded by the simplest of closed curves: circles.

**Theorem 1.6 The Cauchy integral formula** *Fix $z_0$ and suppose $f$ is holomorphic in the set , then,*

$$f(z_0) = \frac{1}{2\pi i} \oint_C \frac{f(z)}{z - z_0}\, dz,$$

*where .*

*Proof:* Choose $\rho < r$ and consider the line integral of $g(\zeta) = f(\zeta)/(\zeta - z_0)$ over the smaller circle $C_\rho = \partial D_\rho(z_0)$. Since this function is continuous in open neighborhoods of this new circle and our original circle (that is, in the annulus centered at $z_0$ containing both circles), we can approximate the integral of this function over both circles arbitrarily closely by integrals over regular $n$-gons, with $n$ sufficiently large. Since the annulus is open, there exists a simple closed path that connects these $n$-gons. By Goursat’s theorem, the integral of $g$ over the closed paths connecting these $n$-gons must be zero. In particular, applying Goursat’s theorem and letting $n \to \infty$ implies

$$\oint_{C_r} \frac{f(\zeta)}{\zeta - z_0}\,d\zeta = \oint_{C_\rho} \frac{f(\zeta)}{\zeta - z_0}\,d\zeta.$$

Next, consider the parameterization $\gamma_\rho(\theta) = z_0 + \rho e^{i\theta}$, $\theta \in [0, 2\pi]$, a path over the smaller circle. Integrating over this path, we have

$$\frac{1}{2\pi i}\oint_{C_\rho} \frac{f(\zeta)}{\zeta - z_0}\,d\zeta = \frac{1}{2\pi}\int_0^{2\pi} f(z_0 + \rho e^{i\theta})\,d\theta.$$

This last statement is simply the average of $f$ over our smaller circle. Since $f$ is continuous at $z_0$, it follows that this average tends to $f(z_0)$ as $\rho \to 0$. Since $\rho$ was arbitrary, this concludes the proof of our theorem.
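The Cauchy integral formula can likewise be checked numerically. In this minimal sketch, the function $e^z$, the center, and the radius are arbitrary illustrative choices; the contour integral is discretized by uniform sampling of the circle, which converges very quickly for analytic integrands.

```python
import cmath

def cauchy_formula(f, z0, radius=1.0, n=4000):
    """Approximate (1/2πi) ∮ f(ζ)/(ζ - z0) dζ over the circle |ζ - z0| = radius."""
    total = 0j
    for k in range(n):
        zeta = z0 + radius * cmath.exp(2j * cmath.pi * k / n)
        dzeta = 2j * cmath.pi * (zeta - z0) / n   # dζ = i(ζ - z0) dθ with dθ = 2π/n
        total += f(zeta) / (zeta - z0) * dzeta
    return total / (2j * cmath.pi)

z0 = 0.3 + 0.2j
approx = cauchy_formula(cmath.exp, z0)
expected = cmath.exp(z0)
```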

Further insight may be gleaned from another proof of the Cauchy integral formula. One exploits the regularity of the holomorphic function by writing it in terms of its own difference quotient. In particular, one considers the integral

$$\oint_{C_r} \frac{f(\zeta) - f(z_0)}{\zeta - z_0}\,d\zeta.$$

As above, the integrand is holomorphic away from $z_0$ (that is, in an annulus centered at $z_0$). As above, one equates this integral with one about a small circle around the point $z_0$. Letting the radius of this small circle tend to zero implies that the above integral is equal to zero, since the integrand remains bounded near $z_0$. Finally, one computes the Cauchy integral formula directly. This method of proof alludes to a recurrent theme in complex analysis, namely the use of regularity and integral forms to conclude various results.

It may be helpful to note that we’ve used Goursat’s theorem to scale and translate contours in the complex plane without affecting the value of an integral along such a contour. We can derive more general results of this type. In fact, we will see that the statement above, with a similar proof, holds for general Jordan curves.

We now arrive at another fundamental result in the theory of complex variables: that holomorphic functions are analytic. This is a consequence of the Cauchy integral formula.

**Corollary 1.6** *Holomorphic functions are analytic. That is, if $f$ is holomorphic on the open set $\Omega$, we can express $f$ as a power series at each point $z_0 \in \Omega$. Moreover, given $z_0$ and $r > 0$ with $\overline{D_r(z_0)} \subset \Omega$, the radius of convergence of this power series at $z_0$ is at least $r$.*

*Proof:* Fix $z_0 \in \Omega$ and $r > 0$ with $\overline{D_r(z_0)} \subset \Omega$. By the Cauchy integral formula, we have

$$f(z) = \frac{1}{2\pi i}\oint_{C_r} \frac{f(\zeta)}{\zeta - z}\,d\zeta, \qquad z \in D_r(z_0).$$

Let us rewrite the integrand as follows:

$$\frac{1}{\zeta - z} = \frac{1}{(\zeta - z_0)(1 - w)} = \frac{1}{\zeta - z_0}\sum_{n=0}^{\infty} w^n,$$

where $w = \frac{z - z_0}{\zeta - z_0}$; this implies that, for each $\zeta \in C_r$, we have $|w| < 1$, and when $|z - z_0| \leq \rho < r$, the sum above converges uniformly in $\zeta$. Hence we can move it outside of the integral and arrive at

$$f(z) = \sum_{n=0}^{\infty} \left(\frac{1}{2\pi i}\oint_{C_r} \frac{f(\zeta)}{(\zeta - z_0)^{n+1}}\,d\zeta\right)(z - z_0)^n,$$

as desired. Finally, an application of *Hadamard’s formula* demonstrates that the radius of the disc of convergence is at least $r$.

**Remark** Differentiation of the series immediately implies that

$$f^{(n)}(z_0) = \frac{n!}{2\pi i}\oint_{C_r} \frac{f(\zeta)}{(\zeta - z_0)^{n+1}}\,d\zeta,$$

from which one can infer the *Cauchy estimates*:

$$\left|f^{(n)}(z_0)\right| \leq \frac{n!\,\sup_{\zeta \in C_r}|f(\zeta)|}{r^n}.$$
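The differentiated formula admits the same kind of numerical check as the integral formula itself. In the sketch below, the function, evaluation point, radius, and sample count are all illustrative choices.

```python
import cmath
import math

def nth_derivative(f, z0, n, radius=1.0, m=4096):
    """f^(n)(z0) = (n!/2πi) ∮ f(ζ)/(ζ - z0)^(n+1) dζ, discretized uniformly."""
    total = 0j
    for k in range(m):
        zeta = z0 + radius * cmath.exp(2j * cmath.pi * k / m)
        dzeta = 2j * cmath.pi * (zeta - z0) / m
        total += f(zeta) / (zeta - z0) ** (n + 1) * dzeta
    return math.factorial(n) * total / (2j * cmath.pi)

# The third derivative of sin is -cos.
d3 = nth_derivative(cmath.sin, 0.5, 3)
expected = -cmath.cos(0.5)
```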

**Corollary 1.7 Morera’s theorem** *If $f$ is continuous on an open set $\Omega$ and $\int_T f(z)\,dz = 0$ for each solid triangle $T$ contained in $\Omega$, then $f$ is analytic in $\Omega$.*

*Proof:* Continuity and the fact that $\int_T f(z)\,dz = 0$ for all triangles allow us to reconstruct the proof of the Cauchy integral theorem, the construction of a primitive, and arrive at its conclusion without requiring, *a priori*, that $f$ be holomorphic. In particular, the Cauchy integral formula and analyticity follow immediately from the theorems above.

**Corollary 1.8** *Suppose $\{f_n\}$ is a sequence of holomorphic functions on $\Omega$ that converges uniformly to $f$ on compact subsets of $\Omega$. Then $f$ is holomorphic as well. In addition, $f_n' \to f'$ uniformly on compact subsets of $\Omega$ as well.*

*Proof:* Suppose $E \subset \Omega$ is compact with a solid triangle $T$ contained in $E$. Then, since the $f_n$ are holomorphic, we have

$$\int_T f_n(z)\,dz = 0.$$

Since $f_n \to f$ uniformly on $E$, $f$ is continuous and we have

$$\int_T f_n(z)\,dz \to \int_T f(z)\,dz,$$

which immediately implies $\int_T f(z)\,dz = 0$. Since this holds for all compact subsets of $\Omega$, by the preceding theorem it follows that $f$ is then analytic on $\Omega$. To prove the final assertion we apply the Cauchy estimates. Suppose again that $E$ is a compact subset of $\Omega$. Then, there exists an $r > 0$ such that for each $z \in E$, the closed disc $\overline{D_r(z)}$ is contained in $\Omega$. It follows that $E_r = \bigcup_{z \in E} \overline{D_r(z)}$ is a compact subset of $\Omega$ as well. By the Cauchy estimates we have

$$\sup_{z \in E}\left|f_n'(z) - f'(z)\right| \leq \frac{1}{r}\sup_{\zeta \in E_r}\left|f_n(\zeta) - f(\zeta)\right|.$$

This assertion holds for all $n$, so our conclusion follows as $f_n \to f$ uniformly on $E_r$.

This result is in stark contrast to the situation in the case of real variables. If $f_n \to f$ uniformly with each $f_n$ continuously differentiable, it is in general not true that $f$ need be differentiable. That the Cauchy integral formula allows one to represent holomorphic functions as integrals confers upon holomorphic functions a powerful regularity that leads to many other results. More generally, the use of integral forms plays an important role in analysis because of their regularity. Consider, for example, the Picard existence theorem for ordinary differential equations: given $y' = F(t, y)$ with suitable continuity assumptions on $F$, one concludes the existence of a solution by applying Picard’s iteration to the equivalent integral form $y(t) = y_0 + \int_{t_0}^{t} F(s, y(s))\,ds$. We can apply the results above to yield further important results.

**Corollary 1.9 Liouville’s theorem** *Suppose $f$ is an entire holomorphic function, that is, a function holomorphic on all of $\mathbb{C}$, and*

$$|f(z)| \leq C\left(1 + |z|^k\right)$$

*for some $C > 0$ and non-negative integer $k$. Then $f$ is a polynomial with degree not exceeding $k$.*

*Proof:* Since $f$ is entire, it can be written as a power series $f(z) = \sum_{n \geq 0} a_n z^n$ about the origin that converges everywhere on $\mathbb{C}$, and it suffices to show that $a_n = 0$ for all $n > k$. Let $R > 0$; by the Cauchy estimates we have

$$|a_n| = \frac{|f^{(n)}(0)|}{n!} \leq \frac{\sup_{|z| = R}|f(z)|}{R^n} \leq \frac{C(1 + R^k)}{R^n}.$$

For large enough $R$, the bound above implies

$$|a_n| \leq \frac{C(1 + R^k)}{R^n} \to 0 \quad \text{as } R \to \infty.$$

The assertion follows as $a_n = 0$ for every $n > k$.

The analytic nature of holomorphic functions leads to one of the most profound results in the basic theory of complex analysis: the principle of analytic continuation. This theorem states that if two functions, holomorphic on an open and connected set $\Omega$, agree on an open subset, or even on a convergent sequence of points inside $\Omega$, then the two functions must be identical. This tells us that holomorphic functions are in some way fixed by their behavior on an arbitrarily small convergent sequence of points.

**Theorem 1.10 Analytic continuation** *Suppose $f$ is a holomorphic function on the open and connected set $\Omega$, and $\{w_n\} \subset \Omega$ is a sequence of distinct points with a limit point $w \in \Omega$ such that $f$ vanishes on $\{w_n\}$. Then $f \equiv 0$ on $\Omega$.*

*Proof:* We prove the theorem by contradiction. If $f$ does not vanish identically near the limit point $w$, there exists a smallest integer $m$ with $a_m \neq 0$ in the power series expansion $f(z) = \sum_{n \geq 0} a_n (z - w)^n$, valid in a disc of radius $r$ about $w$. Then

$$f(z) = a_m (z - w)^m\left(1 + g(z)\right), \qquad g(z) \to 0 \text{ as } z \to w,$$

by continuity, and by taking $z \to w$ along the sequence $\{w_n\}$ we find $0 = f(w_n) = a_m (w_n - w)^m (1 + g(w_n)) \neq 0$ for $n$ sufficiently large, a contradiction. Hence $f \equiv 0$ in a small disc of radius $r$ about $w$.

Next, let $U$ be the set of points of $\Omega$ possessing a neighborhood on which $f$ vanishes identically; $U$ is open by construction and non-empty by the argument above. This set must be relatively closed in $\Omega$, since, if it were not, we could find $z \in \overline{U} \cap \Omega$ such that $z \notin U$. But this is absurd since $f$ vanishes in a neighborhood of $z$ by the above proof, hence $z \in U$, and $U$ is relatively closed. Now, $U$ is both open and (relatively) closed; therefore, since $\Omega$ is also connected and $U$ is non-empty, by a standard result from point set topology, we find that $U = \Omega$, and that $f \equiv 0$ on all of $\Omega$, as we wished to prove.

**Remark** The limit point must lie in $\Omega$. If not, we have $f(z) = \sin(\pi/z)$ as a counterexample to the previous theorem. In particular, $f$ is holomorphic on $\Omega = \mathbb{C} \setminus \{0\}$ and vanishes at the points $w_n = 1/n$ with $w_n \to 0 \notin \Omega$, but is not identically zero.

A common application of this theorem is as follows. Suppose $f$ and $g$ are holomorphic and agree on a set with an accumulation point. Then $f = g$ on at least one connected component of the set on which both are holomorphic. Note, however, that we must ensure that the hypotheses of this theorem are met; not only do we run into trouble when the accumulation point lies outside the common domain, but also when our set in question is not connected in the desired sense. For example, suppose that $\log z$ denotes the principal branch of the complex logarithm, and consider some other branch cut that starts along the negative real axis near the origin, rises into the second quadrant, curves back down through the third quadrant, and deletes the negative imaginary axis below $-iy_0$ for some $y_0 > 0$. Define in addition $f$, off this cut, as the unique continuous function such that $e^{f(z)} = z$ and $f(z) = \log z$ when $z$ lies on the positive real axis, via the implicit function theorem. Then $f$ is holomorphic. Now, $\log z$ and $f$ agree on the right half plane but do not agree everywhere; our difficulty in applying the previous theorem lies in the fact that the region on which they are both holomorphic is disconnected.

The previous theorem allows us, in a very general manner, to infer a great deal about a holomorphic function on an open set by studying its zeros. Such insight leads us to consider the zeros and singularities of functions holomorphic on a set of the form $\Omega \setminus S$, where $S$ is a discrete set of singularities in $\Omega$. In fact, investigation of the singularities and zeros of holomorphic functions yields considerable insight into their nature; in particular, one concludes the residue theorem, the open mapping principle, and the maximum principle, among others. Our study of the singularities of meromorphic functions motivates the construction and use of the extended complex plane and its geometric interpretation as the Riemann sphere. We proceed by defining precisely the zeros and singularities of meromorphic functions and providing a classification of singularities. Then we continue by providing an interpretation of the extended complex plane and conclude with the Laurent expansion of a meromorphic function.

**Definition 1.11** For any function $f$, a *singularity* of $f$ is defined as a complex number $z_0$ such that $f$ is defined in a deleted neighborhood $\{z : 0 < |z - z_0| < r\}$ of $z_0$, for an appropriate $r > 0$, but not at $z_0$ itself. For example, suppose $f(z) = \frac{\sin z}{z}$ on the punctured complex plane (that is, the complex plane deleted at the origin). The origin is then a singularity of the function $f$. By setting $f(0) = 1$, we can extend $f$ to the origin. Our resulting function is entire as well as continuous; we call such singularities *removable singularities*. Alternatively, we can consider the function $g(z) = 1/z$. This function cannot be extended holomorphically at the origin since $|g(z)| \to \infty$ as $z \to 0$, and we say that $g$ has a *pole* at the origin in this case. Lastly, the function $h(z) = e^{1/z}$ on the punctured plane exhibits behavior unlike either of the previous examples. As $z$ goes to $0$ along the positive real line, $h(z)$ approaches infinity. On the other hand, when $z$ goes to $0$ along the imaginary axis, $h(z)$ is bounded; however, it oscillates rapidly. In this final case, the origin is called an *essential singularity*. These three cases exhaust all possibilities.
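The behavior near an essential singularity is easy to observe numerically for the standard example $e^{1/z}$; the particular sample points below are illustrative choices.

```python
import cmath

f = lambda z: cmath.exp(1 / z)

# Along the positive real axis, |e^{1/x}| blows up as x -> 0+.
real_vals = [abs(f(x)) for x in (0.5, 0.1, 0.02)]
grows = real_vals[0] < real_vals[1] < real_vals[2]

# Along the imaginary axis, |e^{1/(iy)}| = |e^{-i/y}| = 1: bounded, but the
# argument -1/y oscillates faster and faster as y -> 0.
imag_vals = [abs(f(1j * y)) for y in (0.5, 0.1, 0.02)]
bounded = all(abs(v - 1.0) < 1e-12 for v in imag_vals)
```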

**Definition 1.12** A complex number $z_0$ is a *zero* of the function $f$ if $f(z_0) = 0$. Analytic continuation demonstrates that if $Z$ is the set of zeros of a function holomorphic, and not identically zero, on a connected open set, then the elements of $Z$ must be isolated. In this case, we call $Z$ a discrete set.

**Theorem 1.13** *Suppose $f$ is holomorphic in the open, connected set $\Omega$, has a zero at $z_0 \in \Omega$, and does not vanish identically in $\Omega$. Then there exists a neighborhood $U \subset \Omega$ of $z_0$, a non-vanishing holomorphic function $g$ on $U$, and a unique positive integer $n$ such that*

$$f(z) = (z - z_0)^n\, g(z)$$

*for all $z \in U$.*

*Proof* The proof is immediate. Since $f$ is not identically zero in $\Omega$, it follows from the power series expansion $f(z) = \sum_{k \geq n} a_k (z - z_0)^k$, $a_n \neq 0$, of $f$ about $z_0$, that there exist such an $n$ and such a $g$ satisfying the conclusions of the theorem. Alternatively, if we suppose that $f$ has a pole at $z_0$ of order $n$, applying this theorem to the function $1/f$ proves that we can write $f(z) = (z - z_0)^{-n} h(z)$ with $h$ non-vanishing and holomorphic as above.

We can infer more about functions exhibiting singularities; first we note that we can decompose such functions into a holomorphic part and a *principal part*. In fact, these functions are holomorphic inside an annulus about their singularities. An application of Cauchy’s integral formula proves that there exists a generalization of this decomposition, the *Laurent expansion*. With this framework, properties of analytic functions, both entire functions and those with singularities, can be more easily studied. First, a simple theorem demonstrates that functions with removable or pole singularities admit a fairly straightforward description, a consequence of Riemann’s removable singularity theorem. Our prior example of a function with an essential singularity exhibited erratic behavior near it; we find that this is true in general, as the Casorati-Weierstrass theorem demonstrates.

**Theorem 1.14** *Suppose $\Omega$ is an open neighborhood of the origin (without loss of generality), and suppose $f$ is holomorphic on $\Omega \setminus \{0\}$, obeying*

$$|f(z)| \leq C|z|^{-n}$$

*on $\Omega \setminus \{0\}$, with $C > 0$ and $n$ a non-negative integer. Then there exists a holomorphic function $g$ on $\Omega$ and coefficients $a_1, \ldots, a_n \in \mathbb{C}$ such that*

$$f(z) = g(z) + \sum_{k=1}^{n} \frac{a_k}{z^k}.$$

*The sum above is called the principal part of $f$ at the origin. When $n = 0$, this result is called the Riemann removable singularity theorem.*

*Proof:* First consider the case $n = 0$. We find that $f$ is bounded on $\Omega \setminus \{0\}$. Consider the function $h(z) = z^2 f(z)$ with $h(0) = 0$. As a result of our boundedness hypothesis, $h'(0) = \lim_{z \to 0} z f(z) = 0$ exists, and it follows immediately that $h$ is holomorphic on $\Omega$. In addition, $h$ vanishes to at least second order at zero. An application of the previous theorem implies $h(z) = z^2 g(z)$ with $g$ holomorphic on all of $\Omega$, from which our assertion $f = g$ follows. Clearly, the singularity at the origin must be removable. Now, if $n > 0$, note that $|z^n f(z)| \leq C$; an application of our prior result to $z^n f(z)$ implies

$$z^n f(z) = G(z)$$

for some function $G$ holomorphic on $\Omega$. Expanding $G$ in its power series about the origin and dividing by $z^n$ on both sides concludes the proof of our theorem.

One should note that this result is essentially equivalent to the conclusion of the theorem preceding it. In particular, if $f$ is a meromorphic function with a pole at $z_0$ of order $n$, then $f$ can be decomposed into the sum of its principal part at $z_0$ and a holomorphic function. Equivalently, $f(z) = (z - z_0)^{-n} h(z)$ with $h$ holomorphic and non-vanishing near $z_0$.

**Theorem 1.15 Casorati-Weierstrass** *Suppose $f$ is holomorphic in the punctured disc $D_r(z_0) \setminus \{z_0\}$ with an essential singularity at $z_0$. Then $f\left(D_r(z_0) \setminus \{z_0\}\right)$ is dense in $\mathbb{C}$.*

*Proof:* Suppose, on the contrary, that $f(D_r(z_0) \setminus \{z_0\})$ is not dense in $\mathbb{C}$. Then there exist $w \in \mathbb{C}$ and $\delta > 0$ such that $|f(z) - w| > \delta$ on the punctured disc. Therefore, if we let

$$g(z) = \frac{1}{f(z) - w},$$

we find that $|g(z)| < 1/\delta$ on $D_r(z_0) \setminus \{z_0\}$. By the Riemann removable singularity theorem, $g$ must extend holomorphically to $D_r(z_0)$; that is, writing $f = w + 1/g$, $f$ must be holomorphic at $z_0$ (if $g(z_0) \neq 0$) or exhibit a pole at $z_0$ (if $g(z_0) = 0$), both of which contradict the nature of the singularity at $z_0$. Hence, the image must be dense in $\mathbb{C}$.
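For the concrete function $f(z) = e^{1/z}$, the Casorati-Weierstrass phenomenon can be made explicit: the equation $e^{1/z} = w$ is solved by $z = 1/(\log w + 2\pi i k)$, and these solutions march into the singularity as $k$ grows. A short numerical sketch, where the target $w$ is an arbitrary choice:

```python
import cmath

# Solutions of e^{1/z} = w accumulate at the essential singularity z = 0.
w = 2 - 3j
ks = (1, 10, 100)
points = [1 / (cmath.log(w) + 2j * cmath.pi * k) for k in ks]

approach = all(abs(z) < 1.0 / k for z, k in zip(points, ks))   # z -> 0 as k grows
attain = all(abs(cmath.exp(1 / z) - w) < 1e-9 for z in points)  # each z hits w
```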

Theorem 1.14 allows one to decompose a function exhibiting a pole singularity into an analytic part and a principal part. This is true in general, as the Laurent expansion allows one to expand any function analytic on an annulus into a convergent analytic part and principal part, the latter with a possibly infinite number of terms. First, we require a lemma. It suffices during the following discussion to consider, without loss of generality, functions analytic on an annulus about the origin.

**Lemma 1.16** *Let $A = \{z : s < |z| < r\}$ with $0 \leq s < r$ be an annulus and suppose $g$ is analytic on $A$. Furthermore, suppose $s'$ and $r'$ are such that $s < s' \leq r' < r$. Then,*

$$\oint_{|\zeta| = s'} g(\zeta)\,d\zeta = \oint_{|\zeta| = r'} g(\zeta)\,d\zeta.$$

*Proof:* The lemma is an immediate result of the argument used in the derivation of the Cauchy integral formula, where this equality of integrals over concentric circles was established.

**Proposition 1.17 Laurent expansion** *Any complex function $f$ that is analytic on an annulus $A = \{z : s < |z| < r\}$ admits a decomposition*

$$f(z) = f_1(z) + f_2(1/z),$$

*with $f_1 : D_r \to \mathbb{C}$ and $f_2 : D_{1/s} \to \mathbb{C}$ analytic, where $D_r$ and $D_{1/s}$ are open discs of radius $r$ and $1/s$, centered at the origin. If $f_2$ vanishes at the origin, then the expansion is unique. In the latter case, $f_2$ is called the principal part of the Laurent expansion of $f$.*

*Proof:* We first address the notion of uniqueness. If $f$ has two different Laurent decompositions, their difference is a decomposition of the zero function, hence it suffices to prove uniqueness for the expansion of the zero function. Suppose $0 = f_1(z) + f_2(1/z)$ on the annulus. Then $f_1(z) = -f_2(1/z)$ there, so the two sides patch together to define a function analytic on all of $\mathbb{C}$; this function is clearly bounded, hence, by Liouville’s theorem, it is constant, and since $f_2$ vanishes at the origin, the constant is zero. This implies that $f_1$ and $f_2$ are both identically zero, thus guaranteeing that the Laurent expansion of the zero function is unique.

As we discussed earlier, the exploitation of integral forms plays a significant role in analysis. Our construction mimics the derivation of the power series expansion for analytic functions. In particular, we exploit the regularity conferred on $f$ by its analyticity and write $f$ in terms of an integral of a function much like its own difference quotient, in other words, an integral form. Provided the necessary assumptions are met, one can then study $f$ by manipulating its integral form.

Fix $z$ such that $s < s' < |z| < r' < r$ and let $g$ be defined by

$$g(\zeta) = \frac{f(\zeta) - f(z)}{\zeta - z}.$$

This function is analytic on the annulus $A$ except at $\zeta = z$, where it has a removable singularity by the Riemann removable singularity theorem. From this and our prior lemma we find that

$$\oint_{|\zeta| = r'} \frac{f(\zeta) - f(z)}{\zeta - z}\,d\zeta = \oint_{|\zeta| = s'} \frac{f(\zeta) - f(z)}{\zeta - z}\,d\zeta.$$

In other words, we have

$$\oint_{|\zeta| = r'} \frac{f(\zeta)}{\zeta - z}\,d\zeta - f(z)\oint_{|\zeta| = r'} \frac{d\zeta}{\zeta - z} = \oint_{|\zeta| = s'} \frac{f(\zeta)}{\zeta - z}\,d\zeta - f(z)\oint_{|\zeta| = s'} \frac{d\zeta}{\zeta - z}.$$

Because $z$ lies outside the small circle, the function $\zeta \mapsto \frac{1}{\zeta - z}$ is analytic on open neighborhoods containing this circle. Therefore, we have

$$\oint_{|\zeta| = s'} \frac{d\zeta}{\zeta - z} = 0.$$

On the other hand, the function is not analytic on open neighborhoods containing the large circle, since the point $\zeta = z$ lies inside it; hence,

$$\oint_{|\zeta| = r'} \frac{d\zeta}{\zeta - z} = 2\pi i,$$

from which it follows that

$$f(z) = \frac{1}{2\pi i}\oint_{|\zeta| = r'} \frac{f(\zeta)}{\zeta - z}\,d\zeta - \frac{1}{2\pi i}\oint_{|\zeta| = s'} \frac{f(\zeta)}{\zeta - z}\,d\zeta.$$

This is our desired decomposition. We find that

$$f_1(z) = \frac{1}{2\pi i}\oint_{|\zeta| = r'} \frac{f(\zeta)}{\zeta - z}\,d\zeta,$$

where $f_1$ is analytic for $|z| < r'$, and,

$$f_2(1/z) = -\frac{1}{2\pi i}\oint_{|\zeta| = s'} \frac{f(\zeta)}{\zeta - z}\,d\zeta,$$

where the right-hand side is analytic for $|z| > s'$ and vanishes as $|z| \to \infty$, so that $f_2$ is analytic on the disc of radius $1/s'$ and vanishes at the origin. These functions are both analytic, hence we can write them as power series. Therefore,

$$f_1(z) = \sum_{n=0}^{\infty} a_n z^n$$

and

$$f_2(1/z) = \sum_{n=1}^{\infty} b_n z^{-n}.$$

Finally, if we denote $a_{-n} = b_n$ for $n \geq 1$, we conclude

$$f(z) = \sum_{n=-\infty}^{\infty} a_n z^n.$$
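The Laurent coefficients can also be recovered numerically from the contour-integral formula $a_n = \frac{1}{2\pi i}\oint_{|z| = \rho} f(z)\,z^{-(n+1)}\,dz$. The sketch below is illustrative: the test function, radius, and sample count are arbitrary choices. Partial fractions give $\frac{1}{z(z-2)} = -\frac{1}{2z} - \frac{1}{4} - \frac{z}{8} - \cdots$ on $0 < |z| < 2$, which the computed coefficients should match.

```python
import cmath

def laurent_coeff(f, n, radius=1.0, m=4096):
    """a_n = (1/2πi) ∮_{|z|=radius} f(z) / z^(n+1) dz, via uniform sampling."""
    total = 0j
    for k in range(m):
        z = radius * cmath.exp(2j * cmath.pi * k / m)
        dz = 2j * cmath.pi * z / m          # dz = iz dθ with dθ = 2π/m
        total += f(z) / z ** (n + 1) * dz
    return total / (2j * cmath.pi)

# f is analytic on the annulus 0 < |z| < 2.
f = lambda z: 1 / (z * (z - 2))
coeffs = {n: laurent_coeff(f, n) for n in (-1, 0, 1)}
```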

In order to continue with a thorough treatment of complex functions and their singularities, we introduce the *extended complex plane*, denoted henceforth by $\widehat{\mathbb{C}}$ and defined by appending the point at infinity to $\mathbb{C}$. More precisely, for large values of $|z|$, we consider the function $F(z) = f(1/z)$ and identify the behavior of $f$ at infinity with the corresponding behavior of $F$ at the origin. For example, if $f$ is holomorphic for large $|z|$, then $F$ is holomorphic in a deleted neighborhood of the origin. Similarly, the function $f$ has a pole at infinity if $F$ has a pole at the origin; other behavior of $f$ at infinity is defined analogously.

The extended complex plane admits a powerful geometric description which we consider next.

**Definition 1.18 The extended complex plane, the Riemann sphere, and stereographic projections**

Let $(x_1, x_2, x_3)$ be coordinates on $\mathbb{R}^3$ and identify the $(x_1, x_2)$-plane with $\mathbb{C}$. Furthermore, let $S^2$ be the unit $2$-sphere centered at the origin. Denote by $N$ its topmost point, that is, $N = (0, 0, 1)$. Then, for any point $P \in S^2 \setminus \{N\}$, we can draw a straight line connecting $N$ and $P$ intersecting the $(x_1, x_2)$-plane at a point $(x, y)$, which we write, in the notation of complex analysis, as $z = x + iy$. Alternatively, given any point on the plane, say $z = x + iy$, there exists a unique point $P \in S^2 \setminus \{N\}$ along the line through $N$ and $(x, y, 0)$. Intuitively, we can see that such a correspondence must be bijective. In particular, it is clear that there is a bijective correspondence between $S^2 \setminus \{N\}$ and $\mathbb{C}$. Now suppose $|z| \to \infty$, that is, let the point go to infinity on the plane. The point $P$ then moves arbitrarily close to $N$, and we find that there exists a correspondence between the point at infinity in $\widehat{\mathbb{C}}$ and the north pole of the unit sphere, $N$. Hence we can identify $\widehat{\mathbb{C}}$ with the $2$-sphere, otherwise known as the *Riemann sphere*. The addition of the point at infinity to $\mathbb{C}$ is called the *one-point compactification* of $\mathbb{C}$, since it takes $\mathbb{C}$ to the compact set $\widehat{\mathbb{C}}$ by adding one point. In particular, the stereographic projection is given by the map

$$(x_1, x_2, x_3) \mapsto z = \frac{x_1 + i x_2}{1 - x_3},$$

with inverse

$$z \mapsto \left(\frac{2\,\mathrm{Re}(z)}{|z|^2 + 1},\; \frac{2\,\mathrm{Im}(z)}{|z|^2 + 1},\; \frac{|z|^2 - 1}{|z|^2 + 1}\right).$$

A few fairly simple algebraic (or geometric) manipulations demonstrate that these maps are conformal, assuring us of the validity of our prior discussion. With this construction in mind, we turn to the study of meromorphic functions in our subsequent notes.
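The projection and its inverse are simple enough to verify directly. The following sketch, with arbitrary sample points, confirms that the inverse map lands on the unit sphere and that the round trip recovers the original complex number.

```python
def to_sphere(z):
    """Inverse stereographic projection: C -> unit sphere minus the north pole."""
    x, y = z.real, z.imag
    d = x * x + y * y + 1
    return (2 * x / d, 2 * y / d, (x * x + y * y - 1) / d)

def to_plane(p):
    """Stereographic projection from the north pole N = (0, 0, 1)."""
    x1, x2, x3 = p
    return complex(x1, x2) / (1 - x3)

samples = (0j, 1 + 2j, -3.5 + 0.25j)
on_sphere = all(abs(sum(c * c for c in to_sphere(z)) - 1) < 1e-12 for z in samples)
round_trip = all(abs(to_plane(to_sphere(z)) - z) < 1e-12 for z in samples)
```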


“A study of economics usually reveals that the best time to buy anything is last year.”

— Marty Allen

**1. Introduction**

A thorough study of asset price bubbles and market crashes necessitates the study of the efficient market hypothesis, one of the most controversial theories in modern finance and economics. In the wake of the 2008-2009 financial crisis, the efficient market hypothesis (EMH) has both faced opprobrium and found defense. In various editorials and journal articles, economists have criticized the financial community for what they argue was an unreasonable and nearsighted adherence to the pronouncements of the hypothesis (Thaler, 2008). Others, however, hold that government interference, specifically its attempts to manage inefficiencies, has created the conditions necessary to promote price bubbles (Thompson, 2006).

In an ideal capital market, prices incorporate all available information necessary for the proper allocation of resources. However, the existence of asset bubbles and crashes may suggest that the allocation of resources is often improper. Indeed, much of the discussion about the current financial crisis implies that efficient markets preclude asset price bubbles. However, definitions of the EMH vary and are implicitly based on general equilibrium principles that may be consistent with asset price bubbles. Therefore one must determine the extent to which the forms of the efficient market hypothesis preclude various assertions about bubbles. Of related importance are the relationships between financial efficiency, loosely defined as the absence of arbitrage opportunities, and other standards of efficiency (e.g. informational and welfare efficiency). The latter belong to normative economics and to our discussion of how resources *should be* allocated.

The literature of efficient markets is replete with theory and empirical study. Empirical work constituted much of the early development of the field. As early as 1863, Jules Regnault was aware that stock price deviations were proportional to the square root of time. Eventually a theoretic treatment developed and the EMH was defined rigorously.

In this survey, we review Eugene Fama’s seminal 1970 paper, its discussions of influential papers, and recent developments in the efficient market literature related to the study of asset price bubbles and crashes. We begin by reviewing the various definitions of the efficient market hypothesis and by briefly discussing the *joint hypothesis problem*, a challenge in the empirical study of market efficiency. Then, we discuss various tests of these and more complex models. Finally, we discuss the empirical results of these tests, paying particular attention to papers relevant to the study of asset price bubbles and crashes.

**2. Definitions and Models**

The efficient market hypothesis has often been defined in literary terms. The first discussion of efficient markets arose in the Frenchman Louis Bachelier’s work in 1900, in which he stated that “the mathematical expectation of the speculator is zero.” John Maynard Keynes elaborated on this concept, a consequence of the EMH, in 1923, arguing that investors in financial markets are compensated not because of superior knowledge, but because of a willingness to bear greater risk than other investors. Later, Alfred Cowles conducted empirical research demonstrating that, on average, financial professionals do not perform better than the market. Further econometric evidence seemed to support the notion that assets in the American stock market were properly priced. Through the implementation of time series analysis, Maurice Kendall concluded that changes in stock prices could be modeled by a symmetric random walk. Preceding the work of Samuelson and Fama, Paul Cootner provided an explanation for this behavior by arguing that investors’ beliefs about the state of asset prices would affect those same prices to the extent that the expectation of “tomorrow’s price, given today’s price, is today’s price” (Cootner 1964).

This piecemeal construction of a financial theme culminated in the work of Eugene Fama and Paul Samuelson in 1965 and 1970. Samuelson, by applying probability theory and other tools of modern economics, proved that properly anticipated prices fluctuate randomly. This was a critical element in the development of the modern theory. At around the same time, Fama defined the EMH in his seminal 1970 survey by stating: “A market in which prices ‘fully reflect’ available information is called ‘efficient'” (Fama, 1970). In other words, Fama defined an efficient market as one whose agents operate under rational expectation and incorporate all available information to price assets properly. He also mentions three types of efficient markets based on the information sets incorporated into asset prices.

**Definition 1** Suppose $\Phi$ is an information set. Then a market is called

- *Weak form* efficient if $\Phi$ is equal to the set of past prices of an asset,
- *Semi-strong form* efficient if $\Phi$ is equal to the set of all publicly available information, and
- *Strong form* efficient if $\Phi$ includes all public and private information.

This statement became the starting point for the efficient market literature, yet, over time, many other definitions were posited. One of the most important variations of the efficient market hypothesis was stated by Michael Jensen in his 1978 paper, *“Some Anomalous Evidence Regarding Market Efficiency.”* Jensen defines an efficient market as one in which “Prices reflect information to the point where the marginal benefits of acting on information do not exceed the marginal costs” (Jensen, 1978). We will primarily use this version of the EMH because Jensen chooses to include the manner in which information is incorporated into prices as well as the extent to which that incorporation occurs. In general, the field of behavioral economics strives to understand this issue as well and its conclusions will also be critical in our study.

Before continuing, we note a structural problem in the study of efficiency. Market efficiency *per se* is, as suggested earlier, a vague notion that requires greater precision to permit empirical study. Any attempt to examine its nature requires that we prescribe some type of additional structure. It follows that any empirical test of market efficiency also tests this structure; this is known as the *joint hypothesis problem*. This also implies that the efficient market hypothesis itself is not a well defined hypothesis. One can test only the specific models chosen, not the general description of efficiency. If we are to determine the degree of efficiency in real markets, it becomes necessary to define what it means for a real market to “properly” reflect all “relevant” information. We must specify an equilibrium model of efficiency, investor behavior and information structure.

Suppose $H$ is the statement “market prices properly reflect all relevant information,” the literary definition of efficiency. Let $M_m$ be the statement “the words ‘properly’ and ‘relevant’ are described by model $m$.” The *joint hypothesis problem* states that it is difficult, if not impossible, to test $H$ without also testing some hypothesis $M_m$ dependent on a model $m$. Clearly, results in which the efficient market hypothesis is implicit are dependent on some model $m$; any discussion of asset bubbles will therefore be dependent on the choice of model $m$.

**Expected Return Models**

Early studies were based on the assumption that market equilibrium could be described through the use of expected return models. These models suppose that the equilibrium expected return, conditional upon the information set, is some function of the risk inherent in a specific asset or transaction. For a general expected return model, one states

$$E[\tilde{p}_{j,t+1} \mid \Phi_t] = \left[1 + E[\tilde{r}_{j,t+1} \mid \Phi_t]\right] p_{j,t},$$

where $E$ is the expected value operator, $p_{j,t}$ is the price of asset $j$ at time $t$, $\Phi_t$ is the information set at time $t$, and $r_{j,t+1}$ is the return percentage at time $t+1$, i.e. $r_{j,t+1} = (p_{j,t+1} - p_{j,t})/p_{j,t}$, or, in other words, $p_{j,t+1} = (1 + r_{j,t+1})\,p_{j,t}$. Finally, the tildes above indicate that these elements are random variables. It is important to note that the expected value is simply one of many descriptions of the distribution of returns, hence any empirical studies that implement this type of expected return model assume that expected value is the most valid description. The notion that empirical study assumes a surprisingly deep validity of a specific model is a theme we will see repeated.
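The accounting in the identity $E[\tilde{p}_{t+1} \mid \Phi_t] = (1 + E[\tilde{r}_{t+1} \mid \Phi_t])\,p_t$ is elementary but worth making concrete. The toy sketch below uses an invented three-point conditional distribution, purely for illustration.

```python
# Toy check: conditional expected price equals (1 + conditional expected return) * p_t.
p_t = 100.0
scenarios = [(0.5, 110.0), (0.3, 100.0), (0.2, 95.0)]   # (probability, next price)

exp_price = sum(prob * price for prob, price in scenarios)
exp_return = sum(prob * (price - p_t) / p_t for prob, price in scenarios)
```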

**Definition 2** Under the assumption that market equilibrium can be described by expected returns and that expected returns are based on the information set $\Phi_t$, trading systems that operate based on $\Phi_t$ must have expected returns equal to equilibrium expected returns. It follows that if $\{x_{j,t}\}$ is the sequence of excess returns, that is,

$$x_{j,t+1} = p_{j,t+1} - E[\tilde{p}_{j,t+1} \mid \Phi_t],$$

we must have

$$E[\tilde{x}_{j,t+1} \mid \Phi_t] = 0.$$

In this case, we say the sequence $\{x_{j,t}\}$ is a *fair game* with respect to the sequence of information sets $\{\Phi_t\}$. We can equivalently replace the prices above with percentage returns, setting $z_{j,t+1} = r_{j,t+1} - E[\tilde{r}_{j,t+1} \mid \Phi_t]$.

We can formalize even further. Let $\alpha(\Phi_t) = [\alpha_1(\Phi_t), \ldots, \alpha_n(\Phi_t)]$ be a sequence of suggested transactions by a trading system, where $\alpha_j(\Phi_t)$ denotes the amount and type of transaction of asset $j$. In addition, let $V_{t+1}$ be the excess value produced by the trading system at time $t+1$. We find that

$$V_{t+1} = \sum_{j=1}^{n} \alpha_j(\Phi_t)\, z_{j,t+1}.$$

An application of the fair game hypothesis demonstrates that the expectation of this value is

$$E[\tilde{V}_{t+1} \mid \Phi_t] = \sum_{j=1}^{n} \alpha_j(\Phi_t)\, E[\tilde{z}_{j,t+1} \mid \Phi_t] = 0.$$

**The Submartingale Model**

**Definition 3** Suppose that

$$E[\tilde{p}_{j,t+1} \mid \Phi_t] \geq p_{j,t},$$

or equivalently $E[\tilde{r}_{j,t+1} \mid \Phi_t] \geq 0$, for all $t$. Then the sequence of prices $\{p_{j,t}\}$ follows a *submartingale* with respect to the information sequence $\{\Phi_t\}$. If the above relationship holds with equality, we say the sequence of prices follows a *martingale*.

The submartingale model leads to an important empirical result. Consider a market with one asset, $j$. If the submartingale model holds, then there exists no trading strategy based on $\Phi_t$ with greater expected return than that of buying asset $j$ and holding it. Although a market with only one asset is unreasonably simplistic, the model provides us with an important starting point. Empirical studies will often use this property as a test: evidence of a strategy that reliably outperforms buy-and-hold counts against efficiency under this model. Asset price bubbles may function conversely.
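A small Monte Carlo sketch illustrates the buy-and-hold claim. The return distribution, horizon, and the rival “half-time” strategy are all illustrative assumptions, not taken from the literature: when one-period expected returns are positive, a strategy that sits out periods forfeits expected growth.

```python
import random

random.seed(0)
MU, SIGMA, T = 0.01, 0.05, 50   # E[r | Φ_t] = 1% > 0, so prices form a submartingale

def terminal_wealth(in_market):
    """Grow wealth over T periods, investing only when in_market(t) is True."""
    wealth = 1.0
    for t in range(T):
        r = random.gauss(MU, SIGMA)   # the return is drawn every period regardless
        if in_market(t):
            wealth *= 1 + r
    return wealth

trials = 10000
buy_hold = sum(terminal_wealth(lambda t: True) for _ in range(trials)) / trials
half_time = sum(terminal_wealth(lambda t: t % 2 == 0) for _ in range(trials)) / trials
```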

**The Random Walk Model**

The random walk model comprised most of the early literature of the efficient market hypothesis. Empirical research suggested that successive one-period returns were random and independent of the information set at time $t$; this model is more restrictive than the fair game model proposed above.

**Definition 4** Suppose $f$ is a probability density function on a probability space. The sequence of price changes, or returns, follows a *random walk (with drift)* if

$$f(r_{j,t+1} \mid \Phi_t) = f(r_{j,t+1}),$$

that is, if successive price changes are independent of the information set at time $t$ and are identically distributed, and if $\Phi_t$ contains the history of all previous returns. Note that we say “with drift” because *prices* may not follow a random walk; successive percentage changes are independent of each other and identically distributed, but the distribution of price changes may depend on the price level.
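The “with drift” caveat can be illustrated with a short simulation, whose parameters are invented for illustration: percentage returns are drawn i.i.d., yet the size of raw price changes grows with the price level, so the distribution of price changes depends on where the price is.

```python
import random

random.seed(1)
prices = [100.0]
for _ in range(10000):
    r = random.gauss(0.001, 0.02)        # i.i.d. one-period returns
    prices.append(prices[-1] * (1 + r))

changes = [b - a for a, b in zip(prices, prices[1:])]
# With a positive drift the price level rises, so later price changes are
# larger in absolute terms even though returns are identically distributed.
early = sum(abs(c) for c in changes[:5000]) / 5000
late = sum(abs(c) for c in changes[5000:]) / 5000
```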

Generally speaking, empirical evidence supporting such a model is somewhat weak. Instead, we use a more general model,

$$E[\tilde{r}_{j,t+1} \mid \Phi_t] = E[\tilde{r}_{j,t+1}],$$

which says that the expectation of returns is independent of the information set $\Phi_t$.

The fair game hypothesis proposed earlier assumes that market equilibrium can be described by the expected returns of an asset, without describing the stochastic process through which this occurs. The random walk model, on the other hand, makes a much stronger assumption: it proposes a distribution function and assumes that the entire distribution of returns is independent of the information set.

With these models in hand, we have developed the basic framework necessary to discuss empirical studies of the efficient market hypothesis. While one may easily determine sufficient conditions for the hypothesis, necessary conditions are usually more difficult to determine and are often debated. If all the information in the set is freely available, the market is free of transaction costs, and all participants come to the same conclusions with the information in , it seems evident that fair game market efficiency will hold. Fama’s results, as well as those of many others, seem to indicate that even in the presence of transaction costs, moderately costly information, and a variety of opinions based on , as long as there exist sufficient numbers of investors with access to information, markets may be efficient. In particular, efficiency is possible as long as there are no individuals that consistently “make better evaluations of available information than are implicit in prices.” (Fama, 1970)

Investor inconsistency, transaction costs, and unavailable information may all be sources of market inefficiency; studying their impact, as well as the influence of other conditions, on the development of prices is the primary goal in the empirical literature. Reassessment of the original models has, of course, accompanied the progression of the field. In some cases, the definition of an efficient market was modified to incorporate a potential source of inefficiency. Other results demonstrate that efficiency is impossible under certain conditions. Hence, it will be necessary to determine the conditions under which asset price bubbles may form, and deduce which, if any, are inconsistent with an appropriate model of market efficiency.

“I can calculate the motions of the heavenly bodies, but not the madness of people.”

— Sir Isaac Newton after losing £20,000 in the South Sea Bubble

Bubbles and their subsequent crashes have confounded historians, economists, financiers, and the general populace throughout history. Examples often categorized as bubbles include Tulip Mania, the South Sea bubble, the Dot Com bubble, and the recent housing bubble. The importance of bubbles and crashes should not be overlooked, and the housing bubble is a prime example: as a consequence of the crash, global GDP (the cumulative GDP of every country) was severely affected. Much of the literature in macroeconomics ignored the consequences of bubbles by ignoring financial intermediation and its associated frictions. In light of the recent crises, the literature is now shifting toward an approach that brings together financial economics, monetary economics, and standard macroeconomic techniques.

An asset bubble forms when an asset’s price differs significantly from its fundamental value, also known as its intrinsic value. In practice, the fundamental value is sometimes calculated as the discounted sum of expected future income; however, this may not be a good estimate of the true fundamental value. To calculate the true fundamental value, we must first construct a general equilibrium model such as Milgrom and Stokey (1982) or Tirole (1982). This construction will be discussed in more detail in future posts. Unfortunately, the theoretical definition of a bubble is not easily applied, since the fundamental value is difficult or impossible to observe. For this reason, several alternative operational definitions have been proposed. Jeremy J. Siegel proposed an operational definition of a bubble as “any time the realized asset return over a given future period is more than two standard deviations from its expected return” (Siegel 2003). This definition implicitly assumes that asset returns are normally distributed, or Gaussian, so that virtually all returns should lie within two standard deviations of the mean. Notable authors such as Benoît Mandelbrot and Nassim Taleb have argued against modeling asset returns with normal distributions, since fat tails are observed in many assets. Along these lines, Didier Sornette offers another operational definition: bubbles are “caused by the slow build-up of long-range correlations leading to a global cooperative behavior of the market and eventually ending in a collapse in a short critical time interval” (Sornette 2001). Note that this definition concerns asset market bubbles, which occur, in theory, when an underlying process inflates all of the assets in a particular market above their fundamental values. We will discuss this definition, whether it coincides with the theoretical definition, and Sornette’s work further in future posts.
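To make the two operational ideas above concrete, here is a minimal sketch (the function names and all numbers are our own hypothetical illustrations, not part of the cited papers): one helper estimates fundamental value as the discounted sum of expected future income, and another applies Siegel’s two-standard-deviation rule to a realized return.

```python
def discounted_fundamental_value(expected_cash_flows, discount_rate):
    """Estimate fundamental value as the discounted sum of expected future income.

    expected_cash_flows: expected income in periods 1, 2, ..., T.
    discount_rate: constant per-period discount rate (a simplifying assumption).
    """
    return sum(cf / (1.0 + discount_rate) ** t
               for t, cf in enumerate(expected_cash_flows, start=1))


def siegel_bubble_flag(realized_return, expected_return, return_std, k=2.0):
    """Siegel-style flag: True when the realized return deviates from the
    expected return by more than k standard deviations (k = 2 in Siegel 2003)."""
    return abs(realized_return - expected_return) > k * return_std


# Hypothetical asset: expected income of 5 per period for 30 periods, 7% discount rate.
fundamental = discounted_fundamental_value([5.0] * 30, 0.07)
print(fundamental)

# A realized annual return of 35% against an expected 8% with 12% volatility
# lies more than two standard deviations away, so the rule flags it.
print(siegel_bubble_flag(0.35, 0.08, 0.12))
```

The constant discount rate and known expected cash flows are exactly the kind of simplifications that make this a practical estimate rather than the true fundamental value; as noted above, the latter requires a full equilibrium model. Note also that the two-sigma rule inherits the normality assumption criticized by Mandelbrot and Taleb.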

In order to address the existence of asset bubbles, as theoretically defined, we must first examine the fundamental properties of prices. More specifically, we must discuss the validity of the Efficient Markets Hypothesis. As Fama puts it (Fama 1991): “I take the market efficiency hypothesis to be the simple statement that security prices fully reflect all available information. A precondition for this strong version of the hypothesis is that information and trading costs, the costs of getting prices to reflect information, are always 0 (Grossman and Stiglitz (1980)). A weaker and more economically sensible version of the efficiency hypothesis says that prices reflect information to the point where the marginal benefits of acting on information (i.e. the potential profits to be made) do not exceed marginal costs (Jensen (1978)).” We will adopt Jensen’s more sensible definition and additionally consider the three different forms of the efficient markets hypothesis: weak, semi-strong, and strong. Each form is designated by the specification of the information that is reflected by prices. A rigorous definition of the EMH and its forms will be given in the next installment of this series. Moreover, we will discuss whether any of the forms of the EMH permit the existence of bubbles as theoretically defined.

Once we have discussed the efficient markets hypothesis, we will tackle the problem of defining fundamental values and bubbles within the theoretical frameworks of general equilibrium models such as Milgrom and Stokey (1982), Postlewaite et al., and Tirole (1982). We will also look at the formation of prices in divisible-good markets through various continuous auction formats, with the eventual goal of analyzing bubbles in those formats.

In our future posts, we will review and analyze the following papers:

- “Efficient Capital Markets: A Review of Theory and Empirical Work” by E. F. Fama – The Journal of Finance, 1970.
- “Efficient Capital Markets: II” by E. F. Fama – The Journal of Finance, 1991.
- “Efficient Capital Markets and Martingales” by S. F. LeRoy – Journal of Economic Literature, 1989.
- “Information, Trade and Common Knowledge” by P. Milgrom and N. Stokey – Journal of Economic Theory, 1982.
- “On the Possibility of Speculation under Rational Expectations” by J. Tirole – Econometrica, 1982.
- “Finite Bubbles with Short Sale Constraints and Asymmetric Information” by F. Allen, S. Morris, and A. Postlewaite – Journal of Economic Theory, 1993.