On the discrete time infinite horizon optimal growth problem

Dynamic economic theory has been developed via the use of optimal control problems, especially in the context of optimal growth models with infinite planning horizons. On the one hand, from the purely economic perspective, optimal growth models serve as one of the best tools for explaining capital accumulation. On the other hand, from the mathematical viewpoint, the optimal growth problem itself can be identified as an interesting dynamic optimization problem. Therefore, while the assumptions of the model describe and shape the economic framework, they also determine the layers of mathematical difficulty of the problem. Several approaches have been developed in the literature to solve the optimal growth problem. In some of these approaches, one needs to make several strong assumptions in order to address seemingly difficult technical problems. In other approaches, in order to understand the economic implications if a certain assumption fails to hold, one looks for a new mathematical framework. This paper provides a comprehensive review of four distinct approaches to solving the discrete time infinite horizon optimal growth problem: (i) the passing-to-the-limit approach; (ii) dynamic programming; (iii) the Lagrange multiplier method for infinite horizon; and (iv) Pontryagin's approach. It is important to note that these distinct approaches involve different mathematical arguments. For each approach covered in this paper, we describe the difficulties in obtaining the solution and outline possible ways to avoid them. We also provide a comparative discussion of the assumptions of the optimal growth model. Furthermore, we illustrate the different techniques through some relevant examples.


Introduction
Dynamic economic theory has been developed via the use of optimal control problems, especially in the context of optimal growth models with infinite planning horizons. On the one hand, from the purely economic perspective, optimal growth models serve as one of the best tools for explaining capital accumulation. On the other hand, from the mathematical viewpoint, the optimal growth problem itself can be identified as an interesting dynamic optimization problem. Therefore, while the assumptions of the model describe and shape the economic framework, they also determine the layers of mathematical difficulty of the problem. Several approaches have been developed in the literature to solve the optimal growth problem. In some of these approaches, one needs to make several strong assumptions in order to address seemingly difficult technical problems. In other approaches, in order to understand the economic implications if a certain assumption fails to hold, one looks for a new mathematical framework.
This paper provides a comprehensive review of four distinct approaches to solving the discrete time infinite horizon optimal growth problem: (i) the passing-to-the-limit approach; (ii) dynamic programming; (iii) the Lagrange multiplier method for infinite horizon; and (iv) Pontryagin's approach. It is important to note that these distinct approaches involve different mathematical arguments. For each approach covered in this paper, we describe the difficulties in obtaining the solution and outline possible ways to avoid them. We also provide a comparative discussion of the assumptions of the optimal growth model. Furthermore, we illustrate the different techniques through some relevant examples.
In this paper, we consider an economy that faces a resource allocation problem. The main elements of the given economic model are the initial endowment, the production function and the preferences. In this economy, we suppose that there are infinitely many periods and there exists a single household (or consumer) who consumes a single good at each period. A simple production function is assumed where the good is produced from one input, that is, capital. The output is either consumed or saved as capital for the next period. The consumption-saving decision with respect to the budget constraint is the only allocation decision that the economy must make. The output is consumed with respect to the preferences of the consumer, which are represented by a utility function. The intertemporal utility is defined as the discounted sum of the single period utilities, where the discount factor is between 0 and 1; this reflects the property of additive separability. We then suppose that the discrete time infinite horizon additively separable optimal growth model involves a benevolent social planner who maximizes the intertemporal utility subject to the constraints of production possibilities and consumption-saving activity.
Based upon the earlier literature, the approach of passing to the limit was the first one utilized in solving the above mentioned problem. It is natural to start with the finite horizon, leading to a finite dimensional constrained optimization problem. Here, we should first address the following question: is the limit of the finite horizon problem the unique solution to the infinite horizon problem? To this end, one should note that in such a case we typically face the difficulty of establishing the legitimacy of interchanging the maximum operator and the limit operator. Therefore, for most of the relevant cases, the answer to the above question is negative.
Dynamic programming has been another important approach that is widely used in solving this type of economic optimization problem. It reformulates the actual problem by breaking it into sub-decision problems. In doing this, optimal decisions are derived sequentially, which leads to a sequence of value functions. This well known method was first studied by R. Bellman in 1957, in [1]. Later, this technique was applied to dynamic models in economics, with a principal reference being Stokey, Lucas and Prescott (1989) ([2]). Dating from Lucas and Stokey (1984) ([3]) and [2], important contributions have been made in the literature applying dynamic programming techniques to analyze infinite horizon optimal growth problems in different models, generating more general results. Le Van and Morhaim (2002) ([4]) provides a unified approach covering bounded and unbounded returns, and Kamihigashi (2014) ([5]) is a resource for a summary of the results in the literature for dealing with unbounded cases, intended as a generalization of [2] without making topological assumptions in the additively separable case. For a generalization to models with non-additive and recursive preferences via an aggregating function (aggregator) (these include additively separable models), one can refer to [2]; for dynamics, to [3]; and for recent general settings and results dealing with bounded and unbounded returns, to Bich et al. (2017) ([6]).
Although dynamic programming is a very efficient way to solve the infinite horizon optimal growth problem, there has been a tendency in the literature to return to the Lagrange multiplier method. However, in this case, the Lagrange multipliers belong to an infinite dimensional space. Thus, the question becomes whether it is possible to derive sufficient conditions for a Karush-Kuhn-Tucker type theorem to hold in the infinite dimensional case. This question has been studied in the literature since Bewley (1972) ([7]) for the general case. Dechert (1982) ([8]) provides a detailed explanation of the structure of the problem for Banach spaces in general. To this end, he uses functional analysis not only to tackle this problem, but also to demonstrate the sources of the difficulty in switching from a finite to an infinite dimensional problem. The multiplier sequence has a nice representation if the space is reflexive, and the generalization can be done without facing any problem. The question becomes: what if the space is non-reflexive, such as ℓ¹? In fact, the multipliers lie in ℓ¹ in the optimal growth problem.¹ [8] shows that the existence of these multipliers is guaranteed only by the Axiom of Choice; there may be no constructive way to calculate them. Le Van and Saglam (2004) ([9]) extends the work of [8] to a set-up where the objective and constraint functions need not be real-valued, in order to cover cases where Inada type conditions are assumed. [9] also discusses some other interesting applications of this method in economics.
In the classical optimal growth model, when written as an equivalent minimization program, the objective and the constraint functions are scalar valued and supposed to be convex. There have been some works in the existing literature relaxing the assumption of convexity in order to obtain results in non-convex cases. As an example, Rustichini (1998) ([10]) studied the general optimization problem using non-convex models. The questions of whether the separating vectors exist and whether they can be represented by a sequence of real numbers in infinite dimensional spaces have been addressed in [10]. Moreover, vector optimization problems on Banach spaces without convexity assumptions have also been considered in Dutta and Tammer (2006) ([11]). Here, it is important to note that an additional assumption stating that the objective function is locally Lipschitz was necessary in [11]. However, in this context, using the approach of Pontryagin's principle, Blot and Chebbi (2000) ([12]), Blot and Hayek (2008) ([13]), Blot and Hayek (2014) ([14]) and Blot et al. (2015) ([?]) give useful results without restrictive assumptions. [13], [14] and [?] consider dynamic systems governed by difference equations and difference inequalities, respectively. In all of these cited works, a vector valued problem is considered, that is, the states and the controls are vector valued. Moreover, these works use weaker convexity assumptions than the usual ones to obtain strong Pontryagin principles, and they provide weak Pontryagin principles without convexity conditions. The solution approach in [12] is based on reductions to finite horizon problems. However, in [13], [14] and [?], the authors consider the problem in the space of bounded sequences, which allows them to use a functional analytic approach based on abstract results of optimization theory and optimal control problems in ordered Banach spaces. In the spirit of the Karush-Kuhn-Tucker theorem, they establish necessary and sufficient conditions in the form of weak Pontryagin principles. These results can be used for various kinds of optimal control problems found in economics, optimal management of renewable resources, sustainable development theory and game theory.
¹ Here, we denote by ℓ¹ the space of real sequences a = (a_t)_t such that ∑_{t=0}^∞ |a_t| is convergent in R. Note that, endowed with the norm ||a||₁ = ∑_{t=0}^∞ |a_t|, ℓ¹ is a Banach space but is not reflexive. We denote by ℓ∞ the space of bounded sequences a = (a_t) such that sup_t |a_t| < ∞.
In this paper, besides studying the two classical approaches (passing to the limit and dynamic programming), including very recent extensions and developments, we aim to apply the two most recent functional analytic approaches to solving the optimal growth problem: the Lagrange multiplier method for infinite horizon and the approach of the weak Pontryagin principle. The Lagrange multiplier method for infinite horizon is due to [8] and is based on extending the Lagrange multiplier method to an infinite dimensional space. In some sense, this approach can be seen as an extension of the passing-to-the-limit approach, and it also serves as an alternative to dynamic programming. We give sufficient conditions on the objective and constraint functions under which the Lagrange multiplier can be represented by an ℓ¹ sequence. We assume Inada conditions as in [9]. In economics, the Lagrange multiplier method has been the key tool for solving optimization problems, and Lagrange multipliers provide meaningful insights in economic models. Therefore, the method is useful not only for providing a solution to the problem but also for analyzing the nature of the solution. The idea of the approach of the weak Pontryagin principle is to transform the optimal control problem into a dynamical system. A solution to the discrete time optimal growth problem is given as a special case of the results obtained in [14]. The result is useful as the assumptions are easy to check. To compare these two functional analytic approaches, we note that in the Lagrange multiplier method we need concavity assumptions on the one period utility and production functions, while one can avoid convexity conditions to obtain a weak Pontryagin principle. Furthermore, the approach of Pontryagin's principle uses vector states and vector controls, and hence, in this sense, it also encompasses the Lagrange multiplier method. The rest of the paper is organized as follows. In Section 2, we describe the set-up of the optimal
growth problem. Section 3 gives the mathematical background of the classical approaches for the optimal growth problem together with the recent developments. Then, in Section 4, the functional analytic approach is studied. Section 5 concludes.

One-sector optimal growth model: set-up
This section presents the set-up of the deterministic discrete time infinite horizon one-sector optimal growth model. We consider an economy as a problem of resource allocation. The primitives of the model are the initial endowment, the production function and the preferences.
We consider an economy E of infinite periods from time t = 0 to t = ∞. We suppose that each period has one unit of time. There is a single household who consumes a single good at each period. This good (output) is produced from one input, capital. At time t = 0, the amount of capital is supposed to be k_0 units. The output is produced from capital by a production function f : R_+ → R_+.
In each period t, we suppose the single good (output) is produced in quantity y_t ∈ R_+ from one input, k_t, by a production function f, where y_t = f(k_t). The output is either consumed as c_t ≥ 0 or saved as capital for the next period as k_{t+1}, satisfying the following process, repeated until infinity:

c_t + k_{t+1} = f(k_t), t = 0, 1, 2, . . .

The consumption level is determined according to the unique consumer's preferences, which are defined by a one period non-decreasing utility (reward) function u : R_+ → R. The intertemporal utility is then defined as follows:

∑_{t=0}^∞ β^t u(c_t),

where 0 < β < 1 is the discount factor.

Social planner's problem
We first give some definitions in order to describe the problem.
If a sequence of capital stocks k = (k_t)_{t≥0} satisfies 0 ≤ k_{t+1} ≤ f(k_t) for all t, we say that it is a feasible accumulation path from k_0.
Definition 3. The set of feasible allocations from k_0 is denoted by:

The objective of a benevolent social planner is to maximize the utility of the household by choosing a feasible allocation (k, c) subject to the feasibility constraints, with a given positive initial capital. The problem can be written as follows:

(P):  max ∑_{t=0}^∞ β^t u(c_t)  subject to  c_t + k_{t+1} ≤ f(k_t), c_t ≥ 0, k_{t+1} ≥ 0 for all t ≥ 0, with k_0 > 0 given.

The objective function states that the social planner must only decide the consumption level at each period in order to maximize utility. The constraints reflect that the non-consumed, i.e. saved, amount of output is added to the capital of the next period and hence determines future production levels. Furthermore, since the one period utility function u is non-decreasing, at the optimum output will not be wasted, so that consumption at t equals the difference between output and the quantity saved, that is, c_t = f(k_t) − k_{t+1}. Eliminating c_t from problem (P) gives us a new formulation (P̂):

(P̂):  max ∑_{t=0}^∞ β^t u(f(k_t) − k_{t+1})  subject to  0 ≤ k_{t+1} ≤ f(k_t) for all t ≥ 0, with k_0 > 0 given.
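To make the set-up concrete, the primitives and the feasibility constraint above can be sketched in a few lines of code. This is a minimal sketch assuming Cobb-Douglas technology f(k) = Ak^α and logarithmic utility with illustrative parameter values; these functional forms are assumptions for the example, not requirements of the model.

```python
import math

# Illustrative primitives: Cobb-Douglas production and log utility.
A, alpha, beta = 1.0, 0.3, 0.95

def f(k):
    """Production function f(k) = A * k^alpha."""
    return A * k ** alpha

def u(c):
    """One period utility u(c) = ln c."""
    return math.log(c)

def is_feasible(k_path, c_path, k0):
    """Check the resource constraint c_t + k_{t+1} = f(k_t), with c_t >= 0 and
    k_t >= 0, along a finite truncation of an allocation."""
    ks = [k0] + list(k_path)
    return all(
        c >= 0 and ks[t] >= 0 and abs(c + ks[t + 1] - f(ks[t])) < 1e-12
        for t, c in enumerate(c_path)
    )

def intertemporal_utility(c_path):
    """Truncated discounted sum  sum_t beta^t * u(c_t)."""
    return sum(beta ** t * u(c) for t, c in enumerate(c_path))
```

For instance, with k_0 = 1, saving half of output in period 0 and consuming the rest gives a feasible one-period truncation.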

Classical approaches of solution
In this section, we discuss the two classical approaches of solution to the optimal growth problem, namely passing to the limit and dynamic programming. We give their mathematical arguments with respect to the assumptions of the model and provide some examples.

Assumptions
In the following, we give a list of the assumptions of the model. In Section 3.2, the entire list will prove to be useful; in Section 3.3, one may assume weaker versions of the assumptions in this list.
(1) Assumption (EA) is quite standard. We assume that, at the beginning, we have some positive capital.
(2) By the assumptions Prod(1-3), we suppose that the production function is strictly concave, continuously differentiable on R_+ and strictly increasing. These assumptions can be weakened to the degree that one can overcome the resulting mathematical difficulty. In assumption Prod(4), we assume that the production function satisfies the asymptotic conditions, also called Inada conditions, which guarantee the existence of interior solutions for the optimization problem. Since lim_{k→0} f′(k) = +∞ and f′(∞) < 1, there is a maximum feasible level of capital, which we can call k_max. This condition is satisfied when, for example, the production function is of the so-called Cobb-Douglas type, i.e. f(k) = Ak^α with A a positive constant and 0 < α < 1. (3) By assumption Pref(1), we assume that the one period utility function is bounded. Pref(2-4) are the analogous versions of Prod(2-4). According to Pref(5), the Inada conditions, the marginal utility of consumption for a starving agent is arbitrarily high and the marginal utility for a satiated consumer is arbitrarily low. We assume by Pref(6) that the preferences over intertemporal consumption sequences take the additively separable form.

Passing to the limit
We are interested in the infinite horizon case. Nevertheless, it is logical to start with the finite horizon, and the approach of passing to the limit has naturally been the first one used for solving this problem. The finite horizon problem is a finite dimensional constrained optimization problem. In economics, the method of Lagrange has been widely applied to solving finite dimensional constrained optimization problems. That is, under the assumptions of the model cited in Section 3.1, namely (EA), Prod(1-4) and Pref(1-6), there exist Lagrange multipliers so that the solution to the constrained maximization problem is also an extreme value of the objective function of the social planner without constraints. The set of sequences {k_{t+1}}_{t=0}^T satisfying the constraints of the problem is a closed, bounded and convex subset of R^{T+1}, and the objective function is continuous (as a sum of continuous functions) and strictly concave by the assumptions Pref(2) and Pref(3). Hence, there is exactly one solution, which is characterized by the Karush-Kuhn-Tucker conditions. By the assumptions f(0) = 0 in Prod(4) and u′(0) = ∞ in Pref(5), it is clear that the constraints do not bind except for k_{T+1} = 0. Thus, the solution satisfies the first order and the boundary conditions for all t = 1, . . ., T:

u′(f(k_{t−1}) − k_t) = βf′(k_t)u′(f(k_t) − k_{t+1}),  with k_0 > 0 given and k_{T+1} = 0.

These conditions give us a second order difference equation in k_t, which has a two-parameter family of solutions, but the one satisfying the boundary conditions is the unique solution.
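The two-point boundary value structure just described can be illustrated numerically. The following is a shooting sketch of our own, assuming log utility and Cobb-Douglas production f(k) = k^α with illustrative parameters: guess k_1, iterate the Euler equation forward via c_{t+1} = βf′(k_{t+1})c_t, and bisect on k_1 until the boundary condition k_{T+1} = 0 is met.

```python
# Shooting on the second order difference equation given by the Euler equation
# u'(c_t) = beta * f'(k_{t+1}) * u'(c_{t+1}) with u(c) = ln c, f(k) = k^alpha.
alpha, beta, k0, T = 0.3, 0.95, 1.0, 5

def f(k):
    return k ** alpha

def fp(k):
    return alpha * k ** (alpha - 1.0)

def shoot(k1):
    """Iterate the Euler equation forward from (k0, k1); return k_{T+1}.
    A positive return value means k1 was too high, negative means too low."""
    ks = [k0, k1]
    for _ in range(T):
        k_prev, k = ks[-2], ks[-1]
        if k <= 0:
            return k                         # capital went negative: k1 too low
        c = f(k_prev) - k
        if c <= 0:
            return 1.0                       # consumption went negative: k1 too high
        ks.append(f(k) - beta * fp(k) * c)   # k_{t+2} = f(k_{t+1}) - c_{t+1}
    return ks[-1]

lo, hi = 1e-9, f(k0) - 1e-9
for _ in range(200):                         # bisection on the initial saving k1
    mid = 0.5 * (lo + hi)
    if shoot(mid) > 0:
        hi = mid
    else:
        lo = mid
k1_star = 0.5 * (lo + hi)
print(k1_star)
```

The unique path is then recovered by iterating the Euler equation forward from (k_0, k1_star).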
Here, the question turns out to be whether the limit of the finite horizon problem is the unique solution to the infinite horizon problem. The answer is positive for some parametric examples in economics, for instance, as in the following example. However, this method in general involves one difficulty: establishing the legitimacy of interchanging the operator max with lim_{T→∞}, that is, guaranteeing that max lim_{T→∞} = lim_{T→∞} max. This difficulty is overcome if uniform convergence of the solution path is satisfied; however, this brings restrictive assumptions on the model. Instead, different approaches have been developed by which the problem is not only solved, but solved under weaker assumptions on the model.
Example 1. Consider a logarithmic utility function u(c_t) = ln c_t and a Cobb-Douglas production function f(k_t) = (k_t)^α with 0 < α < 1. The finite horizon optimal growth problem is then:

max ∑_{t=0}^T β^t ln c_t  subject to  c_t + k_{t+1} ≤ (k_t)^α, c_t ≥ 0, k_{t+1} ≥ 0, with k_0 > 0 given and k_{T+1} = 0.

With the help of the first order and boundary conditions above, one can check that the unique solution to the corresponding problem (P̂) is:

k_{t+1} = [αβ(1 − (αβ)^{T−t}) / (1 − (αβ)^{T−t+1})] (k_t)^α, t = 0, . . ., T.

Passing to the limit as T → ∞, we find that k_{t+1} = αβ(k_t)^α is the unique solution for the infinite horizon problem.
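As a quick sanity check (our own illustration, with illustrative parameter values), one can verify numerically that the limit policy k_{t+1} = αβ(k_t)^α satisfies the infinite horizon Euler equation u′(c_t) = βu′(c_{t+1})f′(k_{t+1}) at arbitrary capital stocks.

```python
# Euler equation residual along the candidate policy k_{t+1} = alpha*beta*k_t^alpha
# for u(c) = ln c and f(k) = k^alpha.
alpha, beta = 0.3, 0.95

def euler_gap(k):
    """u'(c_t) - beta * u'(c_{t+1}) * f'(k_{t+1}) along the candidate path from k."""
    k_next = alpha * beta * k ** alpha
    c = k ** alpha - k_next                        # c_t = f(k_t) - k_{t+1}
    c_next = (1 - alpha * beta) * k_next ** alpha  # c_{t+1} under the same policy
    lhs = 1.0 / c                                  # u'(c_t)
    rhs = beta * (1.0 / c_next) * alpha * k_next ** (alpha - 1.0)
    return lhs - rhs

print(max(abs(euler_gap(k)) for k in (0.1, 0.5, 1.0, 2.0)))  # essentially zero
```

The residual vanishes identically in exact arithmetic, which is consistent with the closed form being the solution of the infinite horizon problem.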
Remark 2. Note that the assumption of boundedness of the utility function is not satisfied in the previous example. Boundedness is needed in order to guarantee the existence of an optimal solution, though a solution can exist without it, as in the previous example.

Dynamic programming
Dynamic programming has been another useful approach for solving the optimal growth problem. [2] is the principal reference for the use of this method in the optimal growth problem. In this section, after giving the idea of the approach and the Principle of Optimality, we give an overview of the results in the literature according to the assumptions of the model. The first three subsections deal with the additively separable optimal growth problem; Section 3.3.4 discusses the non-additive model.
The idea of dynamic programming is to divide the problem up into separate sub-problems. The first step is to define and solve the problem of the initial period and then to proceed forward.
The problem that the social planner faces at the initial period is to choose the current period's consumption c_0 and the capital k_1 with which to begin the next period. If we knew the preferences of the planner over (k_1, c_0), we could simply maximize the appropriate function of (k_1, c_0) over the opportunity set defined by the constraint:

c_0 + k_1 ≤ f(k_0), c_0 ≥ 0, k_1 ≥ 0.

Suppose that the above problem is solved for all possible values of k_0. Then, we could define a function v : R_+ → R by taking v(k_0) to be the value of the maximized objective function, for each k_0 ≥ 0. With v so defined, v(k_1) would give the utility from period 1 onward that could be obtained with k_1, and βv(k_1) would then be the value of this utility discounted back to period 0.
In terms of this value function v, the planner's problem in period t = 0 would be the following:

max u(c_0) + βv(k_1)  subject to  c_0 + k_1 ≤ f(k_0), c_0 ≥ 0, k_1 ≥ 0,

where v is the value of the optimal growth program

v(k_0) = max {∑_{t=0}^∞ β^t u(f(k_t) − k_{t+1}) : 0 ≤ k_{t+1} ≤ f(k_t) for all t ≥ 0}. (3)

But v is unknown at this point. Thus, solving the above program also provides v. Irrespective of the date, we can rewrite the planner's problem with the current capital stock denoted by z and the next period capital stock denoted by y ∈ R_+ as a functional equation (an equation in the unknown function v):

v(z) = max_{0≤y≤f(z)} {u(f(z) − y) + βv(y)}. (4)

The study of dynamic optimization problems through the analysis of such functional equations is called dynamic programming.
We can view the above equation (4) (also called the Bellman equation) through a functional operator (the Bellman operator)

(Tw)(z) = max_{0≤y≤f(z)} {u(f(z) − y) + βw(y)},

solutions of (4) being fixed points of T.
The idea is then to study the link between the value function of the optimal growth program and the solutions of the Bellman equation, that is, the link between the value function of the optimal growth program and the fixed points of the Bellman operator. Thus, one has to verify the following issues: (i) (Existence) The existence of a fixed point of the Bellman operator is obtained, as the value function of the optimal growth program is a fixed point of T: v(z) (the unknown of the Bellman equation) satisfies (Tv)(z) = v(z). Existence is guaranteed by some sufficient conditions via a Banach-type Fixed Point Theorem and Berge's Maximum Theorem.
(ii) (Uniqueness) Studying a fixed point of T allows us to reach the value function of the optimal growth program; if uniqueness of such a fixed point is obtained, then the (unique) fixed point is the value function.
(iii) (Reachability) The Bellman operator gives an algorithm to reach (under appropriate conditions) the value function of the optimal growth program.
In some problems, the suitable starting points from which to reach the value function must be restricted. Iterating on the Bellman operator then provides convergence to the value function from any suitable feasible initial guess.
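The reachability property is easy to see in computation. Below is a value iteration sketch on a finite capital grid, assuming log utility and f(k) = k^α; the functional forms, grid and iteration count are illustrative choices of ours, not prescriptions from the references.

```python
import math

# Iterate the Bellman operator (Tw)(z) = max_{0<=y<=f(z)} { u(f(z)-y) + beta*w(y) }
# on a capital grid, with next period capital restricted to grid points.
alpha, beta = 0.3, 0.95
grid = [0.05 + 0.01 * i for i in range(100)]   # stays away from k = 0

def bellman_update(w):
    """One application of the (grid-restricted) Bellman operator."""
    new_w = {}
    for z in grid:
        fz = z ** alpha
        new_w[z] = max(
            math.log(fz - y) + beta * w[y]
            for y in grid if y < fz
        )
    return new_w

v = {z: 0.0 for z in grid}                     # bounded continuous initial guess
for _ in range(300):                           # contraction with modulus beta
    v = bellman_update(v)
```

Successive iterates contract at rate β, so after n steps the sup-norm error is of order β^n, and the approximate value function inherits monotonicity from the problem.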
The following theorem gives the details of Bellman's Principle of Optimality, whose idea is given above, and shows that the dynamic programming technique allows one to recover the value function of the optimal growth problem.
Theorem 1. (Principle of Optimality) The solution v to the Bellman equation (4), evaluated at z = k_0, gives the maximum in the optimal growth program (3) when the initial state is k_0. Moreover, a sequence {k_{t+1}}_{t=0}^∞ attains the maximum in (3) if and only if it satisfies, for all t ≥ 0:

v(k_t) = u(f(k_t) − k_{t+1}) + βv(k_{t+1}).

The Principle of Optimality is verified under a series of topological assumptions for the bounded case, as well as for two important particular cases: bounded returns and unbounded returns (see Chapter 4 of [2]). The following sections give the versions of these results for our setting.

Optimal growth with bounded utility
In this section, we consider the optimal growth problem under the assumptions of the model given in Section 3.1.
Thus, v*(k_0) is the maximum in (3). It is natural to seek solutions to (4) among bounded continuous functions. Any bounded continuous solution to (4) satisfies lim_{t→∞} β^t v(k_t) = 0 along feasible paths, and hence coincides with v*. Moreover, given a solution to (4), for any k_0, a sequence {k*_t} attains the maximum in (3) if and only if it is generated by the following mechanism, where k*_0 = k_0:

k*_{t+1} ∈ argmax_{0≤y≤f(k*_t)} {u(f(k*_t) − y) + βv(y)}, t = 0, 1, . . .

Instead of (4), we can write v = Tv. As the feasibility condition is given as a closed interval [0, f(z)], together with the convexity of R_+ and the boundedness and continuity assumptions given in Prod(1) and Pref(1-3), T has a unique fixed point in the space of bounded continuous functions. This fixed point is the value function v*.
(3) By the assumptions Prod(1-3) and Pref(2-4), v is strictly increasing, strictly concave and continuously differentiable. If {v_n} is a sequence of approximations defined by v_n = T^n v_0 with an appropriate choice of a bounded continuous starting function v_0, then this value iteration converges uniformly to the value function v*.
Remark 3. We have to mention especially that u is not supposed to be bounded. However, note that under the assumptions Pref(1-3) and Prod(1-3), the function G defined as G(z, y) = u(f(z) − y) is bounded. Thus, this case is called optimal growth with bounded returns.
Theorem 3. Under the assumptions (EA), Prod(1-4) and Pref(1-6), (1) solutions to the functional equation (4) and to the sequence problem (3) coincide exactly, (2) the Bellman operator has a unique fixed point in the space of bounded continuous functions and this fixed point is the value function v, (3) value iteration converges uniformly to the value function starting from any bounded continuous function.

Proof.
(1) By Remark 3, G is bounded. Thus, if B is a bound for G(z, y), then the maximum in (3) is bounded by B/(1 − β), and v*(k_0) is the maximum in (3). Any bounded continuous solution to (4) satisfies lim_{t→∞} β^t v(k_t) = 0 along feasible paths, and hence coincides with v*. Moreover, given a solution to (4), for any k_0, a sequence {k*_t} attains the maximum in (3) if and only if it is generated by the mechanism of Theorem 2, with k*_0 = k_0. (2) and (3) here are essentially analogous to (2) and (3) in Theorem 2. It suffices to remark that v is strictly increasing, strictly concave and continuously differentiable, as G(·, y) is so by the assumptions Prod(2-4) and Pref(2-4).

Optimal growth with unbounded returns
In economics, the utility function is often unbounded from above and/or below. In [2], this case is partly considered and called optimal growth with unbounded returns. That is, it is the case where the maximum function v* satisfies the Bellman equation (4) but the boundedness assumption on the return function is not satisfied. In this case, the problem is that the functional equation (4) may admit many solutions. The sufficient conditions for a solution to equation (4) to be the maximum function v* are given in Theorem 4.14 in [2]. The idea is to guess a solution to equation (4), start with an appropriate function v that is an upper bound for v*, and then iterate down to the fixed point of T. We discuss these sufficient conditions in the following two examples, which are used quite often in economics. These examples will prove to be useful for our comparative study. Nevertheless, there has been extensive research in the literature aimed at giving a general setting for dealing with the unbounded case. One can refer to Le Van and Morhaim (2002) ([4]), which provides a unified approach covering bounded and unbounded utilities. The recent reference Kamihigashi (2014) ([5]) is intended to be a resource for a summary of the results in the literature for dealing with such unbounded cases, and to be a generalization of [2] without making topological assumptions. Unlike the former works, in [5], instead of a Banach-type Fixed Point Theorem, the Knaster-Tarski Fixed Point Theorem is used to show the existence of a fixed point of the Bellman operator.
Example 2. We consider the same problem as in Example 1 and solve it by dynamic programming. One can overcome the difficulty due to the unboundedness of the utility by choosing a specific functional form as an upper bound. The problem corresponding to (4) is then:

v(z) = max_{0≤y≤z^α} {ln(z^α − y) + βv(y)}.

The sufficient condition for having a unique solution is to find a bound function v̄(z) for the maximum function v*, so that v*(z) ≤ v̄(z) for all z > 0. We may take v̄(z) = α ln z / (1 − αβ). With T defined as follows:

(Tw)(z) = max_{0≤y≤z^α} {ln[f(z) − y] + βw(y)},

some calculations show that the fixed point of T is

v(z) = (α/(1 − αβ)) ln z + (1/(1 − β)) [ln(1 − αβ) + (αβ/(1 − αβ)) ln(αβ)],

and that the optimal sequence is generated as k_{t+1} = αβ(k_t)^α for all t = 0, 1, . . .

Example 3. (Cake Eating Problem) In this example, suppose that a consumer has a cake of a given initial size k_0. In each period, the consumer eats some part of the cake according to her preferences and saves the remainder, satisfying k_{t+1} = k_t − c_t for all t = 0, 1, . . . Suppose that the consumer's preferences are represented by the utility function u(c_t) = ln c_t. Hence, finding the optimal path of consumption of the cake can be interpreted as solving the following optimal growth problem with linear production function f(z) = z for all z ∈ R_+:

max ∑_{t=0}^∞ β^t ln c_t  subject to  c_t + k_{t+1} ≤ k_t, c_t ≥ 0, k_{t+1} ≥ 0, with k_0 > 0 given.

We can solve this problem by dynamic programming, choosing here again a specific functional form as an upper bound. We proceed as follows. Since ln k_t ≤ ln k_0 for all t = 1, 2, . . ., we have v*(z) ≤ (1/(1 − β)) ln k_0. With T defined by

(Tw)(z) = max_{0≤y≤z} {ln(z − y) + βw(y)},

the first order condition on the right hand side gives y = βz. By iteration, defining v(z) = lim_n (T^n v̄)(z) and taking the limit gives the fixed point of T:

v(z) = (1/(1 − β)) ln z + (1/(1 − β)) ln(1 − β) + (β/(1 − β)^2) ln β.

The first order condition on the right hand side of this equation then gives the following optimal sequence: k_{t+1} = βk_t for all t = 0, 1, . . .
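The cake eating fixed point can be checked numerically (our own illustration, with an arbitrary β and test point): evaluate the Bellman operator at the claimed v by brute-force search over the savings level and compare.

```python
import math

# Check that v(z) = A + ln(z)/(1-beta), with
#   A = (ln(1-beta) + beta*ln(beta)/(1-beta)) / (1-beta),
# is a fixed point of (Tw)(z) = max_{0<y<z} { ln(z-y) + beta*w(y) },
# and that the maximizer is y = beta*z.
beta = 0.9
A = (math.log(1 - beta) + beta * math.log(beta) / (1 - beta)) / (1 - beta)

def v(z):
    return A + math.log(z) / (1 - beta)

def T_v(z, n=20000):
    """Evaluate (Tv)(z) by brute force over a fine grid of savings y in (0, z)."""
    best, best_y = -float("inf"), None
    for i in range(1, n):
        y = z * i / n
        val = math.log(z - y) + beta * v(y)
        if val > best:
            best, best_y = val, y
    return best, best_y

val, y_star = T_v(2.0)
print(val - v(2.0), y_star)   # residual ~ 0; maximizer ~ beta*z = 1.8
```

The residual is zero in exact arithmetic, and the maximizer matches the policy k_{t+1} = βk_t.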

Non-additive optimal growth problem
In this paper, we have so far considered an additively separable model, which is captured by assumption Pref(6). In this section, we consider the non-additive model via recursive preferences and aggregating functions, which are due to Lucas and Stokey (1984) ([3]).

Definition 4. The utility function u is recursive if u(c) = u(c_0, c_1, . . ., c_n, . . .) is a function A(c_0, u(c_1, . . ., c_n, . . .)) of today's consumption c_0 and the intertemporal utility from tomorrow onward. The function A aggregates today's consumption c_0 and future utility into the current utility and is called an aggregating function (aggregator).
Definition 5. The aggregating function A : R_+ × R_+ → R has the properties (AI)-(AIV). The class of utility functions under consideration is then defined through A. The following theorem describes the source and the properties of this class according to the aggregating function. In such a model, the dynamic programming approach can be applied with recursive preferences that have a contraction property.
Theorem 4. Let S be the vector space of all bounded and continuous functions u : ℓ∞_+ → R (with the norm ||u||_∞ = sup_{c∈ℓ∞_+} |u(c)|). Let A satisfy (AI), (AII), (AIII) and (AIV), and let T_A be the operator defined as

(T_A u)(c) = A(c_0, u(c_1, c_2, . . .)).

Then T_A has a unique fixed point u_A in S. Moreover, if A is increasing and concave, then u_A is increasing and concave.
Proof. By the definition of T_A and by property (AIV), T_A is a contraction. Hence, the existence of a unique fixed point follows from the Banach Fixed Point Theorem, as S is complete. Moreover, if A is increasing, then T_A takes increasing functions to increasing functions, so the unique fixed point u_A is increasing. Similarly, if A is concave, then T_A u is concave whenever u ∈ S is concave; thus the unique fixed point u_A is concave.
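The contraction argument can be mimicked on finite truncations of consumption streams. Below is a sketch with an illustrative aggregator A(x, w) = (1 − β)√x + βw, which satisfies the contraction property with modulus β; the aggregator, the parameters and the truncation scheme are our own assumptions for the example, not taken from [3].

```python
import math

beta = 0.9

def A(x, w):
    """Illustrative aggregator combining current consumption x and future utility w."""
    return (1 - beta) * math.sqrt(x) + beta * w

def T_A(u):
    """(T_A u)(c) = A(c_0, u(shifted c)), on finite tuples standing in for sequences."""
    def new_u(c):
        tail = c[1:] if len(c) > 1 else c   # repeat the last value past the horizon
        return A(c[0], u(tail))
    return new_u

u = lambda c: 0.0                            # initial guess u_0 = 0
for _ in range(200):                         # error contracts by a factor beta each step
    u = T_A(u)

# For a constant stream (x, x, ...), the fixed point solves w = A(x, w), i.e. w = sqrt(x).
print(u((4.0,) * 5))                         # close to 2.0
```

The iterates converge geometrically to the recursive utility u_A, exactly as the Banach Fixed Point argument in the proof predicts.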

Corollary 1. (Additive Recursive Preferences)
Let T_A : S → S be the operator associated with the additive aggregator A(c_0, z) = u(c_0) + βz, where u is bounded and continuous and 0 < β < 1. Then the unique fixed point of T_A is the additively separable utility u_A(c) = Σ_{t=0}^{∞} β^t u(c_t).

Functional analytic approach
In this section, we give two different functional analytic approaches to solve the particular problem defined in Section 2. In Section 4.1, we apply the main result of [8], tracking the lines of [9], which is indeed the Lagrange multiplier method for optimal growth. In Section 4.3 we apply the approach of the weak Pontryagin principle due to [14] to our problem. We then compare these results with respect to the assumptions of the model.

Lagrange multiplier method for infinite dimensional space
The aim of this section is to set the optimal growth problem (P) given in Section 2 as a minimization problem (P̃) and to show that all the conditions of the Main Theorem in [8] are fulfilled for the optimal growth problem (P̃).
With these settings the minimization problem (P̃) is obtained; remark that (P̃) is equivalent to the optimal growth problem (P).
(1) We suppose bounded sequences of allocations by (EA)(2).
(2) One can mention that the above list is indeed weaker than the list in Section 3.1: the boundedness of the utility function is dropped, and neither the utility nor the production function is supposed to be strictly concave; instead, concavity and differentiability are adequate. However, we make an asymptotic assumption on the production function, namely f′(0) > 1. This assumption is essential for this technique of the Lagrange multiplier method (see Example 5 below) while it is not essential in the approach of dynamic programming (see also Example 2).
(3) By Pref(4) we assume additively separable utility; however, we refer to [9] for the extension to recursive preferences.
Proposition 1. Under the assumptions (EA), the following conditions hold.
Proof. Under the assumptions, since u and f are concave, F and Φ are convex. Since f′(0) > 1, for any k_0 > 0 there exist ǫ > 0 and a feasible sequence k^0 such that c^0 = (ǫ, ǫ, ǫ, ...) is feasible from k_0; set x^0 = (k^0, c^0). Note that sup_t Φ_t(x^0) < 0. Thus Slater's condition is satisfied. Under the assumptions made above, in order to be able to apply the result argued in [8] to the space ℓ^1, one needs the following key identification (Rudin (1973), [16]): (ℓ^∞)′ = ℓ^1 ⊕ ℓ^s. For each λ ∈ (ℓ^∞)′_+ we adopt the notation λ = λ_1 + λ_s, where λ_1 ∈ ℓ^1_+ and λ_s ∈ ℓ^s_+. The sufficient conditions ensuring that λ_s = 0 are given by two additional assumptions in [8]. These assumptions are verified with the above setting under the assumptions (EA), Prod and Pref for our problem (see [9]). Hence, the conditions of the Main Theorem in [8] are fulfilled for the optimal growth problem. There thus exists λ ∈ ℓ^1_+ satisfying the conclusion of the Main Theorem in [8]. This leads us, with the above settings of Φ and F, to the final result, which establishes the extension of the Lagrange multiplier method with Karush-Kuhn-Tucker conditions.
Corollary 2. The Lagrange multiplier sequence associated to this optimal growth problem is the sequence {β^t u′(c*_t)} and satisfies the so-called Euler equation: u′(c*_t) = βf′(k*_{t+1})u′(c*_{t+1}) for all t = 0, 1, ...

Corollary 3. Let the assumptions of Proposition 1 be satisfied for an optimal growth problem. Moreover, suppose that u is strictly concave and continuously differentiable with u′(0) = +∞. If x* = (c*, k*) is an optimal solution, then the sequence {β^t u′(c*_t)} is in ℓ^1_+ \ {0}.

Let us consider the optimal growth problem with logarithmic utility and Cobb-Douglas production solved in Example 1 and in Example 2 by two different methods. The following example gives a third way of obtaining the solution and directly generates the Lagrange multipliers.

Example 4. The assumptions of Corollary 3 are all satisfied: u(c_t) = ln c_t is strictly concave, continuously differentiable and u′(0) = +∞, and we obtain the multiplier sequence {β^t u′(c*_t)}. Since c*_t > 0 and k*_t > 0, by the equations (8) and (9) we have λ^2_t = λ^3_t = 0 for every t. Let us define c_t = c*_t for every t ≠ T, k_t = k*_t for every t, and c_T = c*_T + ǫ such that c*_T + ǫ > 0. By means of equation (1), for all ǫ sufficiently small, we thus obtain the stated multipliers.

Remark 6. An alternative proof that the sequence {β^t u′(c*_t)} lies in ℓ^1_+ \ {0} is due to Dana and Le Van (2003) ([17]). Under the assumptions (EA), Prod and Pref, it is shown in [17] that there exists a unique optimal sequence x* = (c*, k*) verifying c* > 0 and k* > 0.
Moreover, the sequence k* is monotonic and x* = (c*, k*) satisfies the Euler equation, which is used to prove that the sequence {β^t u′(c*_t)} lies in ℓ^1_+ \ {0}. This sequence is interpreted as the prices p* of the corresponding competitive equilibrium. The assumption f′(0) > 1 (called the Interiority Assumption in [17]) is essential to have multipliers in ℓ^1_+. Without this assumption, that is, if f′(0) ≤ 1, the multipliers are not necessarily in ℓ^1_+, as in the following example.
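The claim of Corollary 3 can be checked numerically on the log/Cobb-Douglas problem: along the closed-form path k_{t+1} = αβ(k_t)^α, c_t = (1−αβ)(k_t)^α, the candidate multipliers β^t u′(c*_t) satisfy the Euler equation at every date and form a summable sequence. A minimal sketch, with illustrative parameter values of our own choosing:

```python
# Check (our sketch): for log utility and Cobb-Douglas production the optimal
# path is k_{t+1} = alpha*beta*k_t^alpha with c_t = (1 - alpha*beta)*k_t^alpha;
# the candidate multipliers are lambda_t = beta^t * u'(c_t) = beta^t / c_t.
alpha, beta, k = 0.3, 0.95, 0.5
lambdas = []
for t in range(200):
    c = (1 - alpha * beta) * k ** alpha        # optimal consumption
    k_next = alpha * beta * k ** alpha         # optimal capital
    c_next = (1 - alpha * beta) * k_next ** alpha
    # Euler equation: u'(c_t) = beta * f'(k_{t+1}) * u'(c_{t+1})
    lhs = 1 / c
    rhs = beta * alpha * k_next ** (alpha - 1) * (1 / c_next)
    assert abs(lhs - rhs) < 1e-9 * lhs
    lambdas.append(beta ** t / c)
    k = k_next
print(sum(lambdas))   # partial sums converge: the multipliers lie in l^1_+
```

The multipliers decay geometrically (c*_t converges to a positive steady-state value, so β^t/c*_t is dominated by a geometric sequence), which is exactly the summability asserted by Corollary 3.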
Example 5. Let us reconsider the Cake Eating Problem. Remember that we consider a linear production function f(z) = z for all z ∈ R_+. Hence f′(0) = 1 and the condition f′(0) > 1 is not satisfied. Suppose that we have multipliers in ℓ^1_+. With the help of the Inada conditions and the Euler equation we will have u′(c*_t) = βu′(c*_{t+1}) for all t = 0, 1, ..., equivalently β^t u′(c*_t) = u′(c*_0) > 0 for all t: the multiplier sequence is constant and cannot belong to ℓ^1_+. Hence, a solution cannot be given to the Cake Eating Problem by means of this approach.
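The failure in Example 5 is easy to see numerically: along the path k_{t+1} = βk_t, c_t = (1−β)k_t that solves the cake eating problem (Example 3), the candidate multipliers β^t u′(c*_t) are constant, so they cannot form an ℓ^1 sequence. A small sketch (β and k_0 are illustrative values of our own choosing):

```python
# Sketch (our illustration): in the cake eating problem f(z) = z, the optimal
# log-utility path has k_{t+1} = beta*k_t and c_t = (1 - beta)*k_t, so
# beta^t * u'(c_t) = beta^t / c_t = 1/((1 - beta)*k0) for every t: the
# candidate multiplier sequence is constant, hence not in l^1_+.
beta, k0 = 0.9, 1.0
k = k0
mults = []
for t in range(50):
    c = (1 - beta) * k           # eat a fixed share of the remaining cake
    mults.append(beta ** t / c)  # candidate multiplier beta^t * u'(c_t)
    k = beta * k                 # k_{t+1} = beta * k_t
print(mults[0], mults[-1])       # identical: the sequence does not vanish
```

The first and last printed multipliers agree, so the sequence has a positive constant value and its partial sums diverge.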

Approach of Pontryagin's principle
In this section, we apply Theorem 3.1 and Theorem 5.1 of [15] (these are also Theorem 3.3 and Theorem 3.8 of [14]) to our optimal growth problem defined by scalar state and control variables. These theorems establish weak Pontryagin principles as necessary and sufficient conditions of optimality. The idea of this approach is to transform the optimal growth problem into a dynamical system with the help of weak Pontryagin principles. This approach is also functional analytic and based on the use of abstract results of optimization theory in the space ℓ^∞, in the spirit of the Karush-Kuhn-Tucker theorem. The aim of this section is to set the optimal growth problem (P) as an optimal control problem (P̃) and to show that the necessary conditions given by Theorem 3.1 of [15] and the sufficient conditions given by Theorem 5.1 of [15] are fulfilled for (P̃). Set x = (k, c) ∈ ℓ^∞ × ℓ^∞ and g(k_t, c_t) := f(k_t) − c_t for all t = 0, 1, ..., where k_t ∈ R_+ is the scalar state variable and c_t ∈ R_+ is the scalar control variable. The dynamic system is governed by the following difference inequation (DI): k_{t+1} ≤ g(k_t, c_t) for all t = 0, 1, ... Remark that, with the above settings, the two problems (P) and (P̃) are equivalent. Note that the Pontryagin Hamiltonian function associated to (P̃) and the multipliers 1 and λ is defined by H_t(k_t, c_t, 1, λ) = β^t u(c_t) + λ_{t+1} g(k_t, c_t).

Proposition 2. Let the assumptions Prod and Pref be satisfied. If x* = (k*, c*) ∈ int ℓ^∞_+ × int ℓ^∞_+ is an optimal solution of (P̃), then it is a solution of the following system: (10) u′(c*_t) = βf′(k*_{t+1})u′(c*_{t+1}) and (11) f(k*_t) = c*_t + k*_{t+1} for all t = 0, 1, 2, ...
Conversely, under Prod and Pref, let the above (10) and (11) be fulfilled for a feasible allocation x* = (k*, c*) ∈ int ℓ^∞_+ × int ℓ^∞_+ such that the Pontryagin Hamiltonian function H_t(k_t, c_t, 1, λ), associated to (P̃) and the multipliers 1 and λ, is concave with respect to (k_t, c_t) for all t = 0, 1, ... Then x* = (k*, c*) is an optimal solution of (P̃).
Proof. Since u is independent of k_t and supposed to be continuously differentiable, and since f is continuously differentiable, so is g : R × R → R. Under the assumptions Prod and Pref, the assumptions of Theorem 3.1 in [15] are verified (essentially Assumption (H1) in [15]; note that Assumption (H4) is always satisfied in our case since ∂g/∂c(k_t, c_t) = −1 ≠ 0 for all t = 0, 1, ...), therefore we can directly use its conclusion. There exists then a sequence of multipliers λ* ∈ ℓ^1_+ such that the so-called Adjoint Equation (AE), Weak Maximum Principle (WMP) and Complementary Slackness (CS) conditions hold. These give us the following system: λ*_t = λ*_{t+1} f′(k*_t) for all t = 1, 2, ... (15) and λ*_{t+1}(−1) + β^t u′(c*_t) = 0 for all t = 0, 1, ... (16). From the equations (15) and (16), together with (CS), the system reduces to (10) and (11). Conversely, since H_t(k_t, c_t, 1, λ) is concave with respect to (k_t, c_t), optimality holds.

Remark 7.
(1) The interiority assumption and the Inada conditions are fulfilled by the statement of Proposition 2, as the sequence x* = (k*, c*) is supposed to be a feasible allocation sequence in int ℓ^∞_+ × int ℓ^∞_+.
(2) This result is useful as the assumptions are easy to check and one may avoid the concavity assumptions on u and f. However, the concavity of the Hamiltonian is needed for the sufficient conditions of optimality.
Example 6. A solution to the problem in Example 1 can be given by the approach of Pontryagin's principle. The functions u(c_t) = ln c_t and f(k_t) = (k_t)^α with 0 < α < 1 are continuously differentiable on the interior of R_+.
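The necessary conditions of Proposition 2 can be verified numerically on this example: along the closed-form path of Example 1, setting λ*_{t+1} = β^t u′(c*_t) (which makes the weak maximum principle (16) hold by construction), the adjoint equation (15), λ*_t = λ*_{t+1} f′(k*_t), is satisfied at every date. A sketch with illustrative parameter values of our own choosing:

```python
# Check (our sketch): on the closed-form path of Example 1
# (k_{t+1} = alpha*beta*k_t^alpha, c_t = (1 - alpha*beta)*k_t^alpha) the
# Pontryagin conditions hold with adjoint lambda_{t+1} = beta^t * u'(c_t):
#   (WMP) -lambda_{t+1} + beta^t * u'(c_t) = 0   by construction
#   (AE)  lambda_t = lambda_{t+1} * f'(k_t)
alpha, beta, k = 0.3, 0.95, 0.5
lam = None        # lambda_t, available from t >= 1
checks = 0
for t in range(100):
    c = (1 - alpha * beta) * k ** alpha
    lam_next = beta ** t / c                   # lambda_{t+1} = beta^t * u'(c_t)
    if lam is not None:                        # adjoint equation at date t
        assert abs(lam - lam_next * alpha * k ** (alpha - 1)) < 1e-9 * lam
        checks += 1
    lam = lam_next
    k = alpha * beta * k ** alpha
print("adjoint equation verified at", checks, "dates")
```

The assertions pass at every date, confirming that the closed-form path solves the dynamical system produced by the weak Pontryagin principle.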

Conclusion
The optimal growth problem and its solution require advanced dynamic optimization techniques.
In this paper, we analyze four of them in a discrete time infinite horizon framework. Besides the two classical approaches, namely the passing to the limit approach and dynamic programming, we study two functional analytic approaches. The first of them serves as the extension of the Lagrangian method to infinite dimensional spaces, emphasizing the works [8] and [9]. The second one transforms the optimal growth problem into a dynamical system with the help of weak Pontryagin principles. While studying each of these approaches, we discuss the potential difficulties in obtaining the solution and point out possible ways to avoid them. Under each case, we provide a discussion of the assumptions of the model and review the techniques through some relevant examples.

The optimal growth model typically involves several assumptions on both the production and consumption sides (mainly on preferences). In general, the analysis of the specific assumptions of a model in economic theory is crucial in order to encompass the most interesting cases in the applications of the theoretical models. Some of these assumptions are needed purely for mathematical reasons, that is, in order to be able to solve the optimization problem. Specifically, we always need some restrictive assumptions on the objective and constraint functions such as concavity, differentiability, monotonicity, boundedness and asymptotic assumptions. Once these assumptions are made and the mathematical framework is established, the solution can be given. Then, from the economic viewpoint, additional efforts are put forward in weakening some of the restrictive assumptions.

This paper provides a comparative analysis of different mathematical approaches based on a specific list of assumptions within the given economic model. First, for the passing to the limit approach to work in the optimal growth model, we point out that it is necessary to be able to interchange the limit and maximum operators. This is satisfied only if the solution path sequence is uniformly convergent; therefore, its economic applicability is limited. Then, we study the dynamic programming technique in the same context and find that it leads to a solution that enables us to consider a larger set of economic examples. To make this point clearer, note that utility functions are often assumed to be unbounded in economics, and thus the boundedness assumption needed in the passing to the limit approach is too restrictive, while this assumption can be avoided in dynamic programming. We overview important contributions in the literature that apply dynamic programming techniques to infinite horizon optimal growth problems with unbounded returns and with non-additive, recursive preferences via aggregating functions.

We finally show that a solution to the optimal growth problem can be obtained under weaker assumptions on production and preferences by the two functional analytic approaches relative to the previous two techniques. To be more specific, in the Lagrange multiplier method, unlike the classical approaches, neither the utility nor the production function is supposed to be strictly concave and continuously differentiable; instead, concavity and differentiability are adequate. Here, we should emphasize that an additional assumption is made on the asymptotic behavior of the production function, namely f′(0) > 1. We show that this assumption is essential here while it is not essential in the approach of dynamic programming. The approach of the weak Pontryagin principle is useful as the assumptions are fewer and easy to check. Comparing the two functional analytic approaches, the Lagrange multiplier method needs the concavity assumptions on u and f while the weak Pontryagin approach does not; however, the concavity of the Hamiltonian is needed for the sufficient conditions of optimality.

This paper, by its comparative set-up, can be seen as a source for researchers who intend to use these approaches in similar types of accumulation and growth problems.

Theorem 2. Under the assumptions (EA), Prod(1-4) and Pref(1-6):
(1) solutions to the functional equation (3) and to the sequence problem (4) coincide exactly;
(2) the Bellman operator has a unique fixed point in the space of bounded continuous functions, and this fixed point is the value function v;
(3) value iteration converges uniformly to the value function starting from any bounded continuous function.
Proof. (3) As u is supposed to be bounded by the assumption Pref(1) and 0 < β < 1 by Pref(6), Π(k_0) ≠ ∅ and lim_{n→∞} Σ_{t=0}^{n} β^t u[f(k_t) − k_{t+1}] exists for all k_0 ∈ R_+. The maximum function v* is then bounded and satisfies:
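Theorem 2(3) rests on the Bellman operator being a β-contraction in the sup norm, which forces the sup-norm gaps between successive iterates to shrink at rate at most β. The sketch below illustrates this with a bounded utility u(c) = c/(1+c) and production f(z) = z^0.3; both functions and all parameter values are our own illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch (our illustration): with bounded u(c) = c/(1+c) and f(z) = z^0.3,
# successive Bellman iterates contract in the sup norm at rate beta, as
# Theorem 2(3) asserts (uniform convergence of value iteration).
beta = 0.8
grid = np.linspace(0.0, 1.0, 300)
u = lambda c: c / (1 + c)
v = np.zeros_like(grid)
gaps = []                                   # sup-norm distances ||T^{n+1}v0 - T^n v0||
for _ in range(30):
    c = grid[:, None] ** 0.3 - grid[None, :]
    value = np.where(c >= 0, u(np.maximum(c, 0.0)) + beta * v[None, :], -np.inf)
    v_new = value.max(axis=1)
    gaps.append(np.max(np.abs(v_new - v)))
    v = v_new
# the gaps shrink at least geometrically with ratio beta
ratios = [gaps[i + 1] / gaps[i] for i in range(1, len(gaps) - 1) if gaps[i] > 0]
print(max(ratios) <= beta + 1e-9)
```

The discretized operator is itself a β-contraction on functions defined on the grid, so every successive ratio is bounded by β, matching the geometric convergence used in the proof.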