diff --git a/lectures/aiyagari.md b/lectures/aiyagari.md
index 362e922..66e3363 100644
--- a/lectures/aiyagari.md
+++ b/lectures/aiyagari.md
@@ -196,7 +196,7 @@ In reading the code, the following information will be helpful
 * `R` needs to be a matrix where `R[s, a]` is the reward at state `s` under action `a`.
 * `Q` needs to be a three-dimensional array where `Q[s, a, s']` is the probability of transitioning to state `s'` when the current state is `s` and the current action is `a`.
 
-(A more detailed discussion of `DiscreteDP` is available in the [Discrete State Dynamic Programming](https://python-advanced.quantecon.org/discrete_dp.html) lecture in the [Advanced
+(A more detailed discussion of `DiscreteDP` is available in the {doc}`Discrete State Dynamic Programming ` lecture in the [Advanced
 Quantitative Economics with Python](https://python-advanced.quantecon.org) lecture series.)
 
 Here we take the state to be $s_t := (a_t, z_t)$, where $a_t$ is assets and $z_t$ is the shock.
diff --git a/lectures/cass_koopmans_1.md b/lectures/cass_koopmans_1.md
index 90a5ab3..8386cc0 100644
--- a/lectures/cass_koopmans_1.md
+++ b/lectures/cass_koopmans_1.md
@@ -30,7 +30,7 @@ This lecture and {doc}`Cass-Koopmans Competitive Equilibrium `
 and David Cass {cite}`Cass` used to analyze optimal growth.
 
 The model can be viewed as an extension of the model of Robert Solow
-described in [an earlier lecture](https://python-programming.quantecon.org/python_oop.html)
+described in {doc}`an earlier lecture `
 but adapted to make the saving rate be a choice.
 
 (Solow assumed a constant saving rate determined outside the model.)
diff --git a/lectures/cass_koopmans_2.md b/lectures/cass_koopmans_2.md
index 5c35f5a..9f6e3cb 100644
--- a/lectures/cass_koopmans_2.md
+++ b/lectures/cass_koopmans_2.md
@@ -49,8 +49,8 @@ The present lecture uses additional ideas including
   problem and the Hicks-Arrow prices.
 
 - A **Big** $K$ **, little** $k$ trick widely used in macroeconomic dynamics.
-  * We shall encounter this trick in [this lecture](https://python.quantecon.org/rational_expectations.html)
-    and also in [this lecture](https://python-advanced.quantecon.org/dyn_stack.html).
+  * We shall encounter this trick in {doc}`this lecture `
+    and also in {doc}`this lecture `.
 
 - A non-stochastic version of a theory of the **term structure of interest rates**.
 - An intimate connection between two
@@ -424,8 +424,8 @@ price system.
 
 ```{note}
 This allocation will constitute the **Big** $K$ to be in the present instance of the **Big** $K$ **, little** $k$ trick
-that we'll apply to a competitive equilibrium in the spirit of [this lecture](https://python.quantecon.org/rational_expectations.html)
-and [this lecture](https://python-advanced.quantecon.org/dyn_stack.html).
+that we'll apply to a competitive equilibrium in the spirit of {doc}`this lecture `
+and {doc}`this lecture `.
 ```
 
 In particular, we shall use the following procedure:
diff --git a/lectures/matsuyama.md b/lectures/matsuyama.md
index 73af4b0..681e424 100644
--- a/lectures/matsuyama.md
+++ b/lectures/matsuyama.md
@@ -336,7 +336,7 @@ conditions is not trivial.
 
 In order to make our code fast, we will use just in time compiled functions that will get called and handled by our class.
 
-These are the `@jit` statements that you see below (review [this lecture](https://python-programming.quantecon.org/numba.html) if you don't recall how to use JIT compilation).
+These are the `@jit` statements that you see below (review {doc}`this lecture ` if you don't recall how to use JIT compilation).
 
 Here's the main body of code
 
diff --git a/lectures/rational_expectations.md b/lectures/rational_expectations.md
index 4e118a6..f04c7f9 100644
--- a/lectures/rational_expectations.md
+++ b/lectures/rational_expectations.md
@@ -591,8 +591,8 @@ If there were a unit measure of identical competitive firms all behaving accord
 :class: dropdown
 ```
 
-To map a problem into a [discounted optimal linear control
-problem](https://python.quantecon.org/lqcontrol.html), we need to define
+To map a problem into a {doc}`discounted optimal linear control
+problem `, we need to define
 
 - state vector $x_t$ and control vector $u_t$
 - matrices $A, B, Q, R$ that define preferences and the law of
diff --git a/lectures/troubleshooting.md b/lectures/troubleshooting.md
index e68f030..60b999a 100644
--- a/lectures/troubleshooting.md
+++ b/lectures/troubleshooting.md
@@ -33,7 +33,7 @@ The basic assumption of the lectures is that code in a lecture should execute wh
 1. it is executed in a Jupyter notebook and
 1. the notebook is running on a machine with the latest version of Anaconda Python.
 
-You have installed Anaconda, haven't you, following the instructions in [this lecture](https://python-programming.quantecon.org/getting_started.html)?
+You have installed Anaconda, haven't you, following the instructions in {doc}`this lecture `?
 
 Assuming that you have, the most common source of problems for our readers is that their Anaconda distribution is not up to date.
 
diff --git a/lectures/two_auctions.md b/lectures/two_auctions.md
index e06c7bb..6e5ab82 100644
--- a/lectures/two_auctions.md
+++ b/lectures/two_auctions.md
@@ -13,7 +13,7 @@ kernelspec:
 
 # First-Price and Second-Price Auctions
 
-This lecture is designed to set the stage for a subsequent lecture about [Multiple Good Allocation Mechanisms](https://python.quantecon.org/house_auction.html)
+This lecture is designed to set the stage for a subsequent lecture about {doc}`Multiple Good Allocation Mechanisms `
 
 In that lecture, a planner or auctioneer simultaneously allocates several goods to set of people.
 
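
For context, each `+` line above replaces a hard-coded URL with Sphinx's `{doc}` cross-reference role, which MyST Markdown writes as link text followed by a document target in angle brackets. A minimal before/after sketch, using `rational_expectations` (one of the documents in this repository) purely as an illustrative target:

```md
<!-- plain Markdown link: the URL is hard-coded and can go stale -->
See [this lecture](https://python.quantecon.org/rational_expectations.html) for details.

<!-- MyST {doc} role: Sphinx resolves the target to a URL at build time -->
See {doc}`this lecture <rational_expectations>` for details.
```

Both forms render as the same hyperlink; the `{doc}` form lets Sphinx warn about broken targets and keeps the generated URL in sync with the built site.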