Dynamic Programming of Economic Decisions, Softcover reprint of the original 1st ed. 1968
Ökonometrie und Unternehmensforschung / Econometrics and Operations Research, Vol. 9

Language: English

Approximate price: 84.35 € (subject to availability at the publisher)

Also available in print: 52.74 € (delivery period: 15 days)

144 p. · 15.5 x 23.5 cm · Paperback
Dynamic Programming is the analysis of multistage decisions in the sequential mode. It is now widely recognized as a tool of great versatility and power, and is applied to an increasing extent in all phases of economic analysis, operations research, technology, and also in mathematical theory itself. In economics and operations research its impact may someday rival that of linear programming. The importance of this field is made apparent through a growing number of publications. Foremost among these is the pioneering work of Bellman. It was he who originated the basic ideas, formulated the principle of optimality, recognized its power, coined the terminology, and developed many of the present applications. Since then mathematicians, statisticians, operations researchers, and economists have come in, laying more rigorous foundations [KARLIN, BLACKWELL], and developing in depth such applications as the control of stochastic processes [HOWARD, JEWELL]. The field of inventory control has almost split off as an independent branch of Dynamic Programming on which a great deal of effort has been expended [ARROW, KARLIN, SCARF], [WHITIN], [WAGNER]. Dynamic Programming is also playing an increasing role in modern mathematical control theory [BELLMAN, Adaptive Control Processes (1961)]. Some of the most exciting work is going on in adaptive programming, which is closely related to sequential statistical analysis, particularly in its Bayesian form. In this monograph the reader is introduced to the basic ideas of Dynamic Programming.
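As a loose illustration of the principle of optimality and the value-iteration idea referred to above (and in Part One of the contents below), here is a minimal Python sketch for a small discounted decision problem with finite alternatives. The two states, two actions, rewards, transition probabilities, and discount factor are invented for illustration and are not taken from the book.

```python
# A minimal value-iteration sketch (not from the book): a two-state,
# two-action discounted Markov decision problem. The Bellman recursion
#   V(s) = max_a [ r(s, a) + beta * sum_{s'} p(s' | s, a) * V(s') ]
# expresses the principle of optimality. All numbers are invented.

states = ["good", "bad"]
actions = ["keep", "replace"]
beta = 0.9  # discount factor (assumed for illustration)

rewards = {  # r(s, a): immediate payoff of action a in state s
    ("good", "keep"): 6.0, ("good", "replace"): 4.0,
    ("bad", "keep"): 1.0, ("bad", "replace"): 3.0,
}
transitions = {  # p(s' | s, a): next-state probabilities
    ("good", "keep"): {"good": 0.8, "bad": 0.2},
    ("good", "replace"): {"good": 0.9, "bad": 0.1},
    ("bad", "keep"): {"good": 0.1, "bad": 0.9},
    ("bad", "replace"): {"good": 0.9, "bad": 0.1},
}


def q(s, a, V):
    """Expected discounted return of taking action a in state s, then following V."""
    return rewards[s, a] + beta * sum(p * V[s2] for s2, p in transitions[s, a].items())


V = {s: 0.0 for s in states}  # start from the zero value function
for _ in range(200):  # value iteration: apply the Bellman update repeatedly
    V = {s: max(q(s, a, V) for a in actions) for s in states}

policy = {s: max(actions, key=lambda a: q(s, a, V)) for s in states}
print("values:", V)
print("policy:", policy)
```

Running the sketch prints the (approximately) converged value function and the greedy policy derived from it; policy iteration, treated in §§ 5 and 13 of the book, would reach the same policy by alternating evaluation and improvement steps.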
Contents:

One. Finite Alternatives. § 1. Introduction. § 2. Geometric Interpretation. § 3. Principle of Optimality. § 4. Value Functions for Infinite Horizons: Value Iteration. § 5. Policy Iteration. § 6. Stability Properties. § 7. Problems without Discount and with Infinite Horizon. § 8. Automobile Replacement. § 9. Linear Programming and Dynamic Programming. References and Selected Reading to Part One.

Two. Risk. § 10. Basic Concepts. § 11. The Value Function. § 12. The Principle of Optimality. § 13. Policy Iteration. § 14. Stability Properties. § 15. Solution by Linear Programming. § 16. Machine Care. § 17. Inventory Control. § 18. Uncertainty: Adaptive Programming. § 19. Exponential Weighting. References and Selected Reading to Part Two.

Three. Continuous Decision Variable. § 20. An Allocation Problem. § 21. General Theory. § 22. Linear Inhomogeneous Problems. § 23. A Turnpike Theorem. § 24. Sequential Programming. § 25. Risk. § 26. Quadratic Criterion Function. References and Selected Reading to Part Three.

Four. Decision Processes in Continuous Time. § 27. Discrete Action. § 28. Variable Level. § 29. Risk of Termination. § 30. Discontinuous Processes – Repetitive Decisions. § 31. Continuous Time Inventory Control. § 32. Continuous Action – Steady State Problems. § 33. The Principle of Optimality in Differential Equation Form. § 34. Dynamic Programming and the Calculus of Variations. § 35. Variation under Constraints: The Maximum Principle. References and Selected Reading to Part Four.

Author Index.