Update: Missing observation space while linking Model with environment #2

Open
benjamin-arfa opened this issue Jul 4, 2021 · 2 comments

Comments

@benjamin-arfa

Code:

import energym
from energym.examples.Controller import LabController
from energym.wrappers.rl_wrapper import RLWrapper
import gym
from stable_baselines3 import PPO

# Build the Energym environment and wrap it for RL.
envName = "Apartments2Thermal-v0"
env = energym.make(envName, weather="ESP_CT_Barcelona", simulation_days=300)
# Reward: keep zone Z01 temperature close to 22 °C.
reward = lambda output: 1 / (abs(output['Z01_T'] - 22) + 0.1)
env_RL = RLWrapper(env, reward)
inputs = env_RL.get_inputs_names()

# Hand the wrapped environment to Stable-Baselines3.
model = PPO('MlpPolicy', env_RL, verbose=1)
model.learn(total_timesteps=4800)

Here is the error traceback:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-142-2f18c2afcb80> in <module>
----> 1 model = PPO('MlpPolicy', env_RL, verbose=1)
      2 model.learn(total_timesteps=4800)

~/.local/lib/python3.8/site-packages/stable_baselines3/ppo/ppo.py in __init__(self, policy, env, learning_rate, n_steps, batch_size, n_epochs, gamma, gae_lambda, clip_range, clip_range_vf, ent_coef, vf_coef, max_grad_norm, use_sde, sde_sample_freq, target_kl, tensorboard_log, create_eval_env, policy_kwargs, verbose, seed, device, _init_setup_model)
     90     ):
     91 
---> 92         super(PPO, self).__init__(
     93             policy,
     94             env,

~/.local/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py in __init__(self, policy, env, learning_rate, n_steps, gamma, gae_lambda, ent_coef, vf_coef, max_grad_norm, use_sde, sde_sample_freq, tensorboard_log, create_eval_env, monitor_wrapper, policy_kwargs, verbose, seed, device, _init_setup_model)
     72     ):
     73 
---> 74         super(OnPolicyAlgorithm, self).__init__(
     75             policy=policy,
     76             env=env,

~/.local/lib/python3.8/site-packages/stable_baselines3/common/base_class.py in __init__(self, policy, env, policy_base, learning_rate, policy_kwargs, tensorboard_log, verbose, device, support_multi_env, create_eval_env, monitor_wrapper, seed, use_sde, sde_sample_freq)
    155 
    156             env = maybe_make_env(env, monitor_wrapper, self.verbose)
--> 157             env = self._wrap_env(env, self.verbose)
    158 
    159             self.observation_space = env.observation_space

~/.local/lib/python3.8/site-packages/stable_baselines3/common/base_class.py in _wrap_env(env, verbose)
    175             if verbose >= 1:
    176                 print("Wrapping the env in a DummyVecEnv.")
--> 177             env = DummyVecEnv([lambda: env])
    178 
    179         if is_image_space(env.observation_space) and not is_wrapped(env, VecTransposeImage):

~/.local/lib/python3.8/site-packages/stable_baselines3/common/vec_env/dummy_vec_env.py in __init__(self, env_fns)
     25         self.envs = [fn() for fn in env_fns]
     26         env = self.envs[0]
---> 27         VecEnv.__init__(self, len(env_fns), env.observation_space, env.action_space)
     28         obs_space = env.observation_space
     29         self.keys, shapes, dtypes = obs_space_info(obs_space)

/usr/local/lib/python3.8/dist-packages/energym-0.1-py3.8.egg/energym/envs/env.py in __getattr__(self, name)
    164                 "attempted to get missing private attribute '{}'".format(name)
    165             )
--> 166         return getattr(self.env, name)
    167 
    168     @classmethod

AttributeError: 'Apartments2' object has no attribute 'observation_space'
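
The immediate cause: as soon as PPO wraps the environment in a DummyVecEnv, it reads env.observation_space and env.action_space. The current RLWrapper does not define either attribute, so energym's __getattr__ forwards the lookup to the underlying Apartments2 simulator, which has no Gym spaces either. Below is a minimal sketch of the kind of Gym-compatible adapter that would satisfy Stable-Baselines3; the observation keys, the action bounds, and the reset-by-stepping shortcut are assumptions for illustration only, not the actual RLWrapper fix.

import gym
import numpy as np

class EnergymGymEnv(gym.Env):
    """Illustrative adapter sketch, not the official RLWrapper fix."""

    def __init__(self, energym_env, obs_keys, act_keys, reward_fn):
        super().__init__()
        self.energym_env = energym_env
        self.obs_keys = obs_keys    # e.g. ["Z01_T"]; assumed to exist in the output dict
        self.act_keys = act_keys    # e.g. the names returned by get_inputs_names()
        self.reward_fn = reward_fn
        # These two attributes are what DummyVecEnv tries to read and fails on.
        self.observation_space = gym.spaces.Box(
            low=-np.inf, high=np.inf, shape=(len(obs_keys),), dtype=np.float32)
        # Bounds are placeholders; real control inputs have their own ranges.
        self.action_space = gym.spaces.Box(
            low=0.0, high=1.0, shape=(len(act_keys),), dtype=np.float32)

    def _obs(self, outputs):
        return np.array([outputs[k] for k in self.obs_keys], dtype=np.float32)

    def step(self, action):
        # energym expects a dict of input lists and returns a dict of outputs.
        controls = {k: [float(a)] for k, a in zip(self.act_keys, action)}
        outputs = self.energym_env.step(controls)
        return self._obs(outputs), self.reward_fn(outputs), False, {}

    def reset(self):
        # Simplification: step once with a mid-range action to get an initial
        # observation; a real wrapper would handle episode boundaries properly.
        outputs = self.energym_env.step({k: [0.5] for k in self.act_keys})
        return self._obs(outputs)
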
@pscharnho
Collaborator

It would indeed be nice to use Stable Baselines directly with Energym; I'll have a look into modifying the RLWrapper to make it work.

@pscharnho
Collaborator

I created a new branch with the modified RLWrapper (2-modify-rlwrapper). You can test the version from there, and if everything works fine, we will close this issue.
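
A quick way to check the branch once installed (assuming the modified RLWrapper now exposes the standard gym.Env attributes) is to rebuild the wrapper from the snippet above and confirm the spaces are defined before training:

env_RL = RLWrapper(env, reward)
print(env_RL.observation_space, env_RL.action_space)  # should no longer raise AttributeError
model = PPO('MlpPolicy', env_RL, verbose=1)
model.learn(total_timesteps=4800)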

@psh987 mentioned this issue Jan 3, 2023