robcaulk | 340d2191ff | deactivate tensorboard by default | 2023-05-14 14:39:23 +00:00
robcaulk | 55a1a3afd6 | add config option for activating and deactivating tensorboard logger, ensure the various flavors are never activated simultaneously | 2023-05-14 14:08:00 +00:00
robcaulk | 31d15da49e | add disclaimers everywhere about how example strategies are meant as examples | 2023-05-12 08:16:48 +00:00
Matthias | 8cf0e4a316 | Fix mypy typing errors | 2023-04-26 19:43:42 +02:00
initrv | cfc0410388 | use rl_config get instead of freqai_info | 2023-04-02 04:08:07 +03:00
initrv | cab82e8e60 | Add sb3 learn progress bar | 2023-04-02 02:59:02 +03:00
initrv | 82cb107520 | add tensorboard category | 2023-03-12 01:32:55 +03:00
robcaulk | 0f6b98b69a | merge develop into tensorboard cleanup | 2022-12-11 15:38:32 +01:00
robcaulk | 0fd8e214e4 | add documentation for tensorboard_log, change how users interact with tensorboard_log | 2022-12-11 15:31:29 +01:00
initrv | cb8fc3c8c7 | add custom info to tensorboard_metrics | 2022-12-11 15:37:45 +03:00
Emre | 272c3302e3 | Merge remote-tracking branch 'origin/develop' into update-freqai-tf-handling | 2022-12-11 13:12:45 +03:00
initrv | 58604c747e | cleanup tensorboard callback | 2022-12-07 14:37:55 +03:00
Emre | e734b39929 | Make model_training_parameters optional | 2022-12-05 14:54:42 +03:00
robcaulk | 24766928ba | reorganize/generalize tensorboard callback | 2022-12-04 13:54:30 +01:00
smarmau | d6f45a12ae | add multiproc, fix flake8 | 2022-12-03 22:30:04 +11:00
smarmau | 075c8c23c8 | add state/action info to callbacks | 2022-12-03 21:16:04 +11:00
robcaulk | 81fd2e588f | ensure typing, remove unused code | 2022-11-26 12:11:59 +01:00
robcaulk | 8dbfd2cacf | improve docstring clarity about how to inherit from ReinforcementLearner, demonstrate inheritance with ReinforcementLearner_multiproc | 2022-11-26 11:51:08 +01:00
robcaulk | 6394ef4558 | fix docstrings | 2022-11-13 17:43:52 +01:00
robcaulk | 8d7adfabe9 | clean RL tests to avoid dir pollution and increase speed | 2022-10-08 12:10:38 +02:00
robcaulk | 936ca24482 | separate RL install from general FAI install, update docs | 2022-10-05 15:58:54 +02:00
robcaulk | 77c360b264 | improve typing, improve docstrings, ensure global tests pass | 2022-09-23 19:17:27 +02:00
robcaulk | 8aac644009 | add tests, add guardrails | 2022-09-15 00:46:35 +02:00
robcaulk | 240b529533 | fix tensorboard path so that users can track all historical models | 2022-08-31 16:50:39 +02:00
robcaulk | 7766350c15 | refactor environment inheritance tree to accommodate flexible action types/counts; fix bug in train profit handling | 2022-08-28 19:21:57 +02:00
robcaulk | 3199eb453b | reduce code for base use-case, ensure multiproc inherits custom env, add ability to limit ram use | 2022-08-25 19:05:51 +02:00
robcaulk | 94cfc8e63f | fix multiproc callback, add continual learning to multiproc, fix totalprofit bug in env, set eval_freq automatically, improve default reward | 2022-08-25 11:46:18 +02:00
robcaulk | d1bee29b1e | improve default reward, fix bugs in environment | 2022-08-24 18:32:40 +02:00
robcaulk | bd870e2331 | fix monitor bug, set default values in case user doesn't set params | 2022-08-24 16:32:14 +02:00
robcaulk | c0cee5df07 | add continual retraining feature, handle mypy typing reqs, improve docstrings | 2022-08-24 13:00:55 +02:00
robcaulk | b26ed7dea4 | fix generic reward, add time duration to reward | 2022-08-24 13:00:55 +02:00
robcaulk | 29f0e01c4a | expose environment reward parameters to the user config | 2022-08-24 13:00:55 +02:00
robcaulk | 3eb897c2f8 | reuse callback, allow user to access all stable_baselines3 agents via config | 2022-08-24 13:00:55 +02:00