```python
from simulations.maze_env import MazeEnv
from simulations.point import PointEnv
from simulations.maze_task import CustomGoalReward4Rooms

env = MazeEnv(PointEnv, CustomGoalReward4Rooms)
```
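For a quick sanity check of a new environment configuration, the instance can be driven with a standard Gym-style loop. The snippet below is a minimal sketch and assumes `MazeEnv` exposes the usual `reset`, `step` and `action_space` members; the observation layout and episode termination depend on the chosen task.

```python
# Minimal smoke test, assuming a Gym-style reset/step interface on MazeEnv.
from simulations.maze_env import MazeEnv
from simulations.point import PointEnv
from simulations.maze_task import CustomGoalReward4Rooms

env = MazeEnv(PointEnv, CustomGoalReward4Rooms)
obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()          # random action
    obs, reward, done, info = env.step(action)  # advance one step
    if done:
        obs = env.reset()
```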
- Check and update `def step(**kwargs)` and `def _set_action_space(**kwargs)` in simulations/point.py (see sketch below)
- Check and update `def _get_obs(**kwargs)` and `def _set_observation_space(**kwargs)` in simulations/maze_env.py and simulations/point.py
- Update Reward and Info Computation Pipeline
- Check and update `class Actor` and `class Critic` in utils/rtd3_utils.py
- Check and update `class Autoencoder`, `class VisualCortexV4`, `class EncoderBody` and `class FeatureExtractionBackbone` in bg/models.py
- Check and update `def update_policy(**kwargs)` and `def _sample_action(**kwargs)` in utils/rtd3_utils.py
- Check and update `def predict(**kwargs)` in utils/rtd3_utils.py
- Ensure `squash_output` is False in `class Actor` if using the inverted gradients method (see sketch below)
- Check and update `seed`, `batch_size`, `learning_starts`, `imitation_steps` and `debug` in `params` from constant.py (see sketch below)
- Check and update other hyperparameters for tuning
- Set `LOGDIR`, `ENV_TYPE`, `TIMESTEPS`, `MAX_EPISODE_SIZE`, `HISTORY_STEPS` and `TASK_VERSION` in train.sh
- Check and update `net_arch` and `n_critics` in learning/explore.py (see sketch below)
- Run `sh train.sh` in a terminal for debugging. Run `nohup sh train.sh >> assets/out/models/train.log &` in a terminal for GPU execution.
- Commit and push to GitHub. Update the local `experiments_log`.
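The first checklist item concerns the action-space and stepping logic in simulations/point.py. The snippet below is only an illustrative sketch of that pattern, not the repository's implementation: the class name `PointEnvSketch`, the 2-D action bounds and the placeholder dynamics are assumptions.

```python
import numpy as np
from gym import spaces

# Illustrative sketch only; the real simulations/point.py simulates a point
# agent whose action bounds and observation layout will differ.
class PointEnvSketch:
    def __init__(self):
        self._pos = np.zeros(2, dtype=np.float32)
        self._set_action_space()

    def _set_action_space(self, **kwargs):
        # 2-D continuous action, e.g. (forward velocity, turning rate)
        low = np.array([-1.0, -0.25], dtype=np.float32)
        self.action_space = spaces.Box(low=low, high=-low, dtype=np.float32)
        return self.action_space

    def _get_obs(self, **kwargs):
        # Observation is just the agent position in this sketch.
        return self._pos.copy()

    def step(self, action, **kwargs):
        # Clip to the declared bounds, apply placeholder dynamics, and return
        # the usual (obs, reward, done, info) tuple.
        action = np.clip(action, self.action_space.low, self.action_space.high)
        self._pos = self._pos + action
        return self._get_obs(), 0.0, False, {}
```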
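On the `squash_output` item: with the inverted gradients method, action bounds are enforced through the gradient update itself, so the actor should emit unbounded outputs instead of tanh-squashed ones. Below is a minimal sketch of that switch with an assumed MLP architecture; the real `class Actor` in utils/rtd3_utils.py differs.

```python
import torch
import torch.nn as nn

class ActorSketch(nn.Module):
    """Toy actor illustrating the squash_output switch."""

    def __init__(self, obs_dim, action_dim, squash_output=False):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )
        self.squash_output = squash_output

    def forward(self, obs):
        out = self.net(obs)
        # With inverted gradients, bounds are handled by the update rule, so
        # the final tanh squashing must be skipped (squash_output=False).
        return torch.tanh(out) if self.squash_output else out
```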
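On the `params` item: only the key names below come from the checklist; the values are placeholders showing the expected structure, not the repository's defaults.

```python
# Hypothetical shape of `params` in constant.py; values are placeholders.
params = {
    'seed': 0,                 # RNG seed for reproducibility
    'batch_size': 256,         # minibatch size for actor/critic updates
    'learning_starts': 10000,  # environment steps before updates begin
    'imitation_steps': 0,      # steps of imitation / warm-up data
    'debug': False,            # verbose logging for debugging runs
}
```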
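On the `net_arch` and `n_critics` item: if learning/explore.py follows the common Stable-Baselines3-style convention, these are passed through a policy configuration dictionary. This layout is an assumption, and the values are placeholders.

```python
# Hypothetical policy configuration; the actual usage in learning/explore.py
# may differ. Values are placeholders.
policy_kwargs = dict(
    net_arch=[400, 300],  # hidden layer sizes of the actor/critic MLPs
    n_critics=2,          # number of critics (twin critics as in TD3)
)
```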
To run the tools in a Docker container:

- Run `docker build -t neuroengineering-tools ./docker/cpu/` in the root folder of the repository.
- Run `docker run -dit --name sample neuroengineering-tools` to spawn a new container.
- Run the desired command with the tools inside the container (e.g. via `docker exec`).
- Run `docker stop sample` to stop the running container.