function LRTA*-AGENT(s') returns an action
  inputs: s', a percept that identifies the current state
  persistent: result, a table, indexed by state and action, initially empty
              H, a table of cost estimates, indexed by state, initially empty
              s, a, the previous state and action, initially null

  if GOAL-TEST(s') then return stop
  if s' is a new state (not in H) then H[s'] ← h(s')
  if s is not null then
      result[s, a] ← s'
      H[s] ← min_{b ∈ ACTIONS(s)} LRTA*-COST(s, b, result[s, b], H)
  a ← an action b in ACTIONS(s') that minimizes LRTA*-COST(s', b, result[s', b], H)
  s ← s'
  return a

function LRTA*-COST(s, a, s', H) returns a cost estimate
  if s' is undefined then return h(s)
  else return c(s, a, s') + H[s']
Figure ?? LRTA*-AGENT selects an action according to the values of neighboring states, which are updated as the agent moves about the state space.
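
For readers who want to run the figure's pseudocode, here is a minimal Python sketch, not a definitive implementation. It assumes a problem object exposing the figure's GOAL-TEST, ACTIONS, h, and c as methods goal_test(s), actions(s), h(s), and c(s, a, s2); those method names, and the use of None both for the stop action and for undefined result entries, are illustrative assumptions rather than part of the original figure.

class LRTAStarAgent:
    """Online agent following the LRTA*-AGENT pseudocode above."""

    def __init__(self, problem):
        self.problem = problem   # assumed interface: goal_test, actions, h, c
        self.result = {}         # result[(s, a)] -> observed successor state
        self.H = {}              # H[s] -> current cost-to-goal estimate
        self.s = None            # previous state (null initially)
        self.a = None            # previous action (null initially)

    def lrta_cost(self, s, a, s2):
        # LRTA*-COST: optimistic h(s) while the outcome of a is unknown,
        # otherwise one-step cost plus the stored estimate of the successor.
        if s2 is None:
            return self.problem.h(s)
        return self.problem.c(s, a, s2) + self.H[s2]

    def __call__(self, s_prime):
        # One percept-to-action step; None plays the role of 'stop'.
        if self.problem.goal_test(s_prime):
            return None
        if s_prime not in self.H:
            self.H[s_prime] = self.problem.h(s_prime)
        if self.s is not None:
            # Record the observed transition, then back up the estimate for s.
            self.result[(self.s, self.a)] = s_prime
            self.H[self.s] = min(
                self.lrta_cost(self.s, b, self.result.get((self.s, b)))
                for b in self.problem.actions(self.s)
            )
        # Choose the action that currently looks best from the new state.
        self.a = min(
            self.problem.actions(s_prime),
            key=lambda b: self.lrta_cost(s_prime, b, self.result.get((s_prime, b))),
        )
        self.s = s_prime
        return self.a

Note that each call performs exactly one backup, updating H only for the previous state s; this constant amount of work per step is what makes the method suitable for real-time, online search.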