Welcome to blobrl’s documentation!¶
Installation¶
Installation of pytorch¶
To install PyTorch, follow the Quick Start Locally guide for your configuration.
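For a CPU-only setup, a plain pip install is usually enough (the exact command for CUDA builds comes from the Quick Start Locally selector):
pip install torch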
Installation of blobrl¶
Download files:
git clone https://github.com/french-ai/reinforcement.git
Move to reinforcement directory:
cd reinforcement
Install blobrl:
- to use it:
pip install .
- to help development:
pip install ".[dev]" .
Getting started¶
Install BlobRL¶
Follow the Installation instructions above.
Initializing an environment¶
import gym
env = gym.make("CartPole-v1")
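If you want to try the environment on its own first, the standard Gym loop looks like this (optional; the exact reset/step signatures depend on your Gym version):
observation = env.reset()
for _ in range(10):
    action = env.action_space.sample()  # random action, just to exercise the environment
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()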
Initializing an agent¶
from blobrl.agents import AgentRandom
action_space = env.action_space
observation_space = env.observation_space
agent = AgentRandom(observation_space=observation_space, action_space=action_space)
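For context, AgentRandom simply picks random valid actions; that is conceptually the same as sampling from the action space with Gym's own API (this line does not go through blobrl):
action = action_space.sample()  # a random valid action for this environment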
Training¶
Create a Trainer:
from blobrl import Trainer
trainer = Trainer(environment=env, agent=agent)
Start training:
trainer.train(render=True)
Visualize training metrics:
tensorboard --logdir runs
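Putting the steps above together, the whole getting-started flow fits in one short script (it only uses the calls already shown above):
import gym
from blobrl import Trainer
from blobrl.agents import AgentRandom

env = gym.make("CartPole-v1")
agent = AgentRandom(observation_space=env.observation_space, action_space=env.action_space)
trainer = Trainer(environment=env, agent=agent)
trainer.train(render=True)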
Evaluation¶
Not implemented yet
Trainer – train.py¶
You can start training by using train.py.
Parameters¶
--agent:
String. Default: agent_random. Name of the agent, one of [agent_random, dqn, double_dqn, categorical_dqn].
--env:
String. Default: CartPole-v1. Name of a Gym environment listed at gym.openai.com.
--max_episode:
Integer. Default: 100. Number of episodes to train.
--render:
Boolean. Default: False. Whether to render the environment on each step.
Examples¶
Train DQN on CartPole-v1 for 1000 episodes and render the environment:
python train.py --agent dqn --env CartPole-v1 --render 1 --max_episode 1000
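Another example, using only the flags documented above (the agent choice and episode count here are purely illustrative):
python train.py --agent double_dqn --env CartPole-v1 --max_episode 200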
Agent interface¶
Agent_random¶
DQN¶
Double_dqn¶
Categorical_dqn¶
Environments package¶
We use Gym environments to begin.
See gym.openai.com for more information.
We will add more environments later.
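Any other registered Gym environment can be swapped in the same way; MountainCar-v0 below is only an illustration:
import gym

env = gym.make("MountainCar-v0")  # any registered Gym environment
# then build the agent and Trainer exactly as in Getting started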