3fb5cd3df6  2023-04-17 20:27:18 +02:00  Matthias: Improve formatting
66c326b789  2023-03-20 15:54:58 +01:00  Richard Jozsa: Add proper handling of multiple environments
d03fe1f8ee  2023-03-16 00:53:37 +01:00  Richard Jozsa: add latest experimental version of gymnasium
d10ee0979a  2023-03-08 19:37:11 +01:00  robcaulk: ensure training_features_list is updated properly
85e345fc48  2023-03-08 19:29:39 +01:00  Robert Caulk: Update BaseReinforcementLearningModel.py
29d337fa02  2023-03-08 11:26:28 +01:00  robcaulk: ensure ohlc is dropped from both train and predict
d9dc831772  2023-03-07 11:33:54 +01:00  robcaulk: allow user to drop ohlc from features in RL
8873a565ee  2023-02-10 15:48:18 +01:00  robcaulk: expose raw features to the environment for use in calculate_reward
154b6711b3  2023-02-10 15:26:17 +01:00  robcaulk: use function level noqa ignore
4fc0edb8b7  2023-02-10 14:45:50 +01:00  robcaulk: add pair to environment for access inside calculate_reward
b2bab68fba  2022-12-30 13:02:39 +01:00  robcaulk: move price assignment to feature_engineering_standard() to reduce un-requested feature additions in RL. Ensure old method of price assignment still works, add deprecation warning to help users migrate their strategies
6f7eb71bbb  2022-12-28 14:52:33 +01:00  robcaulk: ensure RL works with new naming scheme
c2936d551b  2022-12-28 13:25:40 +01:00  robcaulk: improve doc, update test strats, change function names
dde363343c  2022-12-16 22:16:19 +03:00  Emre: Add can_short param to base env
581a5296cc  2022-12-15 16:50:08 +01:00  robcaulk: fix docstrings to reflect new env_info changes
7b4abd5ef5  2022-12-15 12:25:33 +01:00  robcaulk: use a dictionary to make code more readable
2018da0767  2022-12-14 22:03:05 +03:00  Emre: Add env_info dict to base environment
2285ca7d2a  2022-12-14 18:22:20 +01:00  robcaulk: add dp to multiproc
24766928ba  2022-12-04 13:54:30 +01:00  robcaulk: reorganize/generalize tensorboard callback
b2edc58089  2022-12-03 22:31:02 +11:00  smarmau: fix flake8
469aa0d43f  2022-12-03 21:16:46 +11:00  smarmau: add state/action info to callbacks
4a9982f86b  2022-12-01 10:08:42 +03:00  Emre: Fix sb3_contrib loading issue
e7f72d52b8  2022-11-30 12:36:26 +01:00  robcaulk: bring back market side setting in get_state_info
9cbfa12011  2022-11-28 16:02:17 +03:00  Emre: Directly set model_type in base RL model
7ebc8ee169  2022-11-26 13:32:18 +01:00  Matthias: Fix missing Optional typehint
bdfedb5fcb  2022-11-26 13:03:07 +01:00  Matthias: Improve typehints / reduce warnings from mypy
3a07749fcc  2022-11-24 18:46:54 +01:00  robcaulk: fix docstring
8f1a8c752b  2022-11-24 07:00:12 +01:00  Matthias: Add freqairl docker build process
60fcd8dce2  2022-11-17 21:50:02 +01:00  robcaulk: fix skipped mac test, fix RL bug in add_state_info, fix use of __import__, revise doc
6394ef4558  2022-11-13 17:43:52 +01:00  robcaulk: fix docstrings
af9e400562  2022-11-13 15:31:37 +01:00  robcaulk: add test coverage, fix bug in base environment. Ensure proper fee is used.
81f800a79b  2022-11-13 13:41:17 +01:00  robcaulk: switch to using FT calc_profi_pct, reverse entry/exit fees
e71a8b8ac1  2022-11-12 18:46:48 +01:00  robcaulk: add ability to integrate state info or not, and prevent state info integration during backtesting
9c6b97c678  2022-11-12 12:01:59 +01:00  robcaulk: ensure normalization acceleration methods are employed in RL
6746868ea7  2022-11-12 11:33:03 +01:00  robcaulk: store dataprovider to self instead of strategy
8d7adfabe9  2022-10-08 12:10:38 +02:00  robcaulk: clean RL tests to avoid dir pollution and increase speed
488739424d  2022-10-05 20:55:50 +02:00  robcaulk: fix reward inconsistency in template
cf882fa84e  2022-10-01 20:26:41 +02:00  robcaulk: fix tests
555cc42630  2022-09-29 14:00:09 +02:00  Robert Caulk: Ensure 1 thread is available (for testing purposes)
dcf6ebe273  2022-09-29 00:37:03 +02:00  Robert Caulk: Update BaseReinforcementLearningModel.py
83343dc2f1  2022-09-29 00:10:18 +02:00  robcaulk: control number of threads, update doc
647200e8a7  2022-09-23 19:30:56 +02:00  robcaulk: isort
77c360b264  2022-09-23 19:17:27 +02:00  robcaulk: improve typing, improve docstrings, ensure global tests pass
7295ba0fb2  2022-09-22 23:42:33 +02:00  robcaulk: add test for Base4ActionEnv
7b1d409c98  2022-09-17 17:51:06 +02:00  robcaulk: fix mypy/flake8
3b97b3d5c8  2022-09-15 00:56:51 +02:00  robcaulk: fix mypy error for strategy
8aac644009  2022-09-15 00:46:35 +02:00  robcaulk: add tests. add guardrails.
7766350c15  2022-08-28 19:21:57 +02:00  robcaulk: refactor environment inheritence tree to accommodate flexible action types/counts. fix bug in train profit handling
3199eb453b  2022-08-25 19:05:51 +02:00  robcaulk: reduce code for base use-case, ensure multiproc inherits custom env, add ability to limit ram use.
94cfc8e63f  2022-08-25 11:46:18 +02:00  robcaulk: fix multiproc callback, add continual learning to multiproc, fix totalprofit bug in env, set eval_freq automatically, improve default reward