Implement a passive learning agent in a simple environment, such as the $4\times 3$ world. For the case of an initially unknown environment model, compare the learning performance of the direct utility estimation, TD, and ADP algorithms. Do the comparison for the optimal policy and for several random policies. For which do the utility estimates converge faster? What happens when the size of the environment is increased? (Try environments with and without obstacles.)
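As a starting point, here is a minimal sketch of one of the three algorithms, TD(0) passive utility estimation, in the $4\times 3$ world. The layout (wall at (2,2), terminals +1 at (4,3) and -1 at (4,2), step reward -0.04, 0.8/0.1/0.1 transition noise) follows the standard AIMA grid; the hand-coded policy and the decaying step size $\alpha(n) = 60/(59+n)$ are assumptions chosen to match that setup, and the direct-estimation and ADP variants would slot into the same trial loop.

```python
import random

# Assumed 4x3 world: columns 1..4, rows 1..3, wall at (2,2),
# terminals at (4,3)=+1 and (4,2)=-1, per-step reward -0.04.
GAMMA = 1.0
STEP_REWARD = -0.04
TERMINALS = {(4, 3): 1.0, (4, 2): -1.0}
WALL = (2, 2)
STATES = [(x, y) for x in range(1, 5) for y in range(1, 4) if (x, y) != WALL]

# Hand-coded fixed policy for this reward setting (an assumption; any
# policy dictionary of the same shape can be substituted).
POLICY = {
    (1, 1): 'U', (1, 2): 'U', (1, 3): 'R',
    (2, 1): 'L', (2, 3): 'R',
    (3, 1): 'L', (3, 2): 'U', (3, 3): 'R',
    (4, 1): 'L',
}
MOVES = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}
PERP = {'U': 'LR', 'D': 'LR', 'L': 'UD', 'R': 'UD'}

def step(state, action, rng):
    """Stochastic motion: 0.8 intended, 0.1 each perpendicular; bumps stay put."""
    r = rng.random()
    a = action if r < 0.8 else PERP[action][0] if r < 0.9 else PERP[action][1]
    dx, dy = MOVES[a]
    nxt = (state[0] + dx, state[1] + dy)
    if nxt == WALL or not (1 <= nxt[0] <= 4 and 1 <= nxt[1] <= 3):
        nxt = state
    return nxt

def td_passive(trials=5000, seed=0):
    """Estimate utilities of the fixed POLICY with the TD(0) update."""
    rng = random.Random(seed)
    U = {s: 0.0 for s in STATES}
    counts = {s: 0 for s in STATES}
    for _ in range(trials):
        s = (1, 1)
        while True:
            counts[s] += 1
            alpha = 60.0 / (59.0 + counts[s])  # decaying step size (assumed schedule)
            if s in TERMINALS:
                U[s] += alpha * (TERMINALS[s] - U[s])
                break
            s2 = step(s, POLICY[s], rng)
            # TD(0): move U(s) toward the one-step sample r + gamma * U(s')
            U[s] += alpha * (STEP_REWARD + GAMMA * U[s2] - U[s])
            s = s2
    return U

if __name__ == '__main__':
    U = td_passive()
    for y in (3, 2, 1):
        print('  '.join('  ####' if (x, y) == WALL else f'{U[(x, y)]:+.3f}'
                        for x in range(1, 5)))
```

For the comparison the exercise asks for, one would record the utility estimates after each trial (for this TD agent, a direct-estimation agent that averages observed returns, and an ADP agent that solves the learned model), then plot RMS error against trial number for the optimal policy, several random policies, and larger grids with and without obstacles.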