Article ID: iaor1988552
Country: United States
Volume: 37
Issue: 2
Start Page Number: 240
End Page Number: 254
Publication Date: Mar 1989
Journal: Operations Research
Authors: Lane, Daniel E.
Keywords: Markov processes, programming: dynamic
This paper presents an application of a partially observable Markov decision process to the intraseasonal decisions of fishing vessel operators. Throughout each fishing season, independent vessel operators must decide in which zone, or fishing ground, of the fishery to fish during each period in order to catch the most fish with the highest return to fishing effort. Fishermen's decisions are assumed to maximize net operating income. The decision model incorporates the potential fish catch, the cost of fishing effort, and the unit price of fish. Catch potential is modeled from the abundance of the fish stock and the catchability of the fishing technique. Abundance dynamics, which are not observed directly, are modeled as a Markov chain with a parsimonious state-space representation that renders the problem computationally practicable. Dynamic decision policies are computed by optimal control of the process over a finite horizon. The resulting policies are used to simulate distributions of fishermen's net operating income, fishing effort dynamics, and catch statistics. The model may serve as a decision aid in the regulation of the common-property fisheries resource.
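To illustrate the kind of computation the abstract describes, the sketch below performs exact finite-horizon dynamic programming on the belief state of a small, generic partially observable Markov decision process: hidden abundance levels evolve as a Markov chain, the action is the choice of fishing zone each period, the observation is the realized catch class, and the reward is net operating income. This is a minimal, hedged sketch under assumed numbers; the state sizes, transition, observation, and reward matrices are hypothetical placeholders and do not reproduce the paper's model or its estimation procedure.

```python
# Minimal sketch of finite-horizon belief-state dynamic programming for a
# zone-choice POMDP. All numerical inputs below are hypothetical placeholders,
# not values from the paper.
import numpy as np

N_STATES = 3   # hidden abundance levels, e.g. low / medium / high (assumed)
N_ZONES = 2    # actions: which zone to fish this period (assumed)
N_OBS = 2      # observed catch class: poor / good (assumed)

rng = np.random.default_rng(0)

# T[a, s, s']: abundance transition probabilities (a Markov chain per action)
T = rng.dirichlet(np.ones(N_STATES), size=(N_ZONES, N_STATES))
# O[a, s', o]: probability of each catch observation given the next state
O = rng.dirichlet(np.ones(N_OBS), size=(N_ZONES, N_STATES))
# R[a, s]: net operating income = price * expected catch - effort cost (made up)
R = np.array([[1.0, 3.0, 5.0],
              [0.5, 2.5, 6.0]])

def belief_update(b, a, o):
    """Bayes update of the abundance belief after fishing zone a and seeing o."""
    b_next = (b @ T[a]) * O[a, :, o]
    p_o = b_next.sum()                 # probability of observation o given (b, a)
    return b_next / p_o if p_o > 0 else b, p_o

def value(b, t, horizon):
    """Optimal expected income-to-go from belief b with horizon - t periods left."""
    if t == horizon:
        return 0.0, None
    best_v, best_a = -np.inf, None
    for a in range(N_ZONES):
        v = float(b @ R[a])            # immediate expected net income
        for o in range(N_OBS):
            b_next, p_o = belief_update(b, a, o)
            if p_o > 0:
                v += p_o * value(b_next, t + 1, horizon)[0]
        if v > best_v:
            best_v, best_a = v, a
    return best_v, best_a

b0 = np.ones(N_STATES) / N_STATES      # uniform prior over abundance levels
v, a = value(b0, 0, horizon=4)
print(f"first-period zone choice: {a}, expected seasonal net income: {v:.2f}")
```

The recursion enumerates every action-observation branch, which is tractable only because the abundance state space is kept small; this mirrors, in spirit, the abstract's point that a parsimonious state-space representation is what makes the intraseasonal decision problem practicable.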