# Parameter table

The tables below list all configuration parameters available for FreqAI. Some of the parameters are exemplified in `config_examples/config_freqai.example.json`.

Mandatory parameters are marked as **Required** and have to be set in one of the suggested ways.

| Parameter | Description |
|------------|-------------|
| **General configuration parameters** | |
| `freqai` | **Required.** <br> The parent dictionary containing all the parameters for controlling FreqAI. <br> **Datatype:** Dictionary. |
| `train_period_days` | **Required.** <br> Number of days to use for the training data (width of the sliding window). <br> **Datatype:** Positive integer. |
| `backtest_period_days` | **Required.** <br> Number of days to run inference on the trained model before sliding the `train_period_days` window defined above and retraining the model during backtesting (more info here). This can be fractional days, but beware that the provided `timerange` will be divided by this number to yield the number of trainings necessary to complete the backtest. <br> **Datatype:** Float. |
| `identifier` | **Required.** <br> A unique ID for the current model. If models are saved to disk, the `identifier` allows for reloading specific pre-trained models/data. <br> **Datatype:** String. |
| `live_retrain_hours` | Frequency of retraining during dry/live runs. <br> **Datatype:** Float > 0. <br> Default: `0` (models retrain as often as possible). |
| `expiration_hours` | Avoid making predictions if a model is more than `expiration_hours` old. <br> **Datatype:** Positive integer. <br> Default: `0` (models never expire). |
| `purge_old_models` | Delete obsolete models. <br> **Datatype:** Boolean. <br> Default: `False` (all historic models remain on disk). |
| `save_backtest_models` | Save models to disk when running backtesting. Backtesting operates most efficiently by saving the prediction data and reusing it directly for subsequent runs (when you wish to tune entry/exit parameters). Saving backtesting models to disk also allows you to use the same model files for starting a dry/live instance with the same model `identifier`. <br> **Datatype:** Boolean. <br> Default: `False` (no models are saved). |
| `fit_live_predictions_candles` | Number of historical candles to use for computing target (label) statistics from prediction data, instead of from the training dataset (more information can be found here). <br> **Datatype:** Positive integer. |
| `follow_mode` | Use a follower that will look for models associated with a specific `identifier` and load those for inferencing. A follower will not train new models. <br> **Datatype:** Boolean. <br> Default: `False`. |
| `continual_learning` | Use the final state of the most recently trained model as the starting point for the new model, allowing for incremental learning (more information can be found here). <br> **Datatype:** Boolean. <br> Default: `False`. |
| `write_metrics_to_disk` | Collect train timings, inference timings and CPU usage in a JSON file. <br> **Datatype:** Boolean. <br> Default: `False`. |
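
As a rough sketch of where these keys live: the general parameters sit directly inside the `freqai` dictionary, alongside the nested dictionaries covered in the sections below (shown empty here). All values are purely illustrative, not recommendations:

```json
{
    "freqai": {
        "identifier": "example-model-v1",
        "train_period_days": 30,
        "backtest_period_days": 7,
        "live_retrain_hours": 0,
        "expiration_hours": 1,
        "purge_old_models": true,
        "save_backtest_models": false,
        "write_metrics_to_disk": false,
        "feature_parameters": {},
        "data_split_parameters": {},
        "model_training_parameters": {}
    }
}
```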

| Parameter | Description |
|------------|-------------|
| **Feature parameters** | |
| `feature_parameters` | A dictionary containing the parameters used to engineer the feature set. Details and examples are shown here. <br> **Datatype:** Dictionary. |
| `include_timeframes` | A list of timeframes that all indicators in `populate_any_indicators` will be created for. The list is added as features to the base indicators dataset. <br> **Datatype:** List of timeframes (strings). |
| `include_corr_pairlist` | A list of correlated coins that FreqAI will add as additional features to all `pair_whitelist` coins. All indicators set in `populate_any_indicators` during feature engineering (see details here) will be created for each correlated coin. The correlated coins' features are added to the base indicators dataset. <br> **Datatype:** List of assets (strings). |
| `label_period_candles` | Number of candles into the future that the labels are created for. This is used in `populate_any_indicators` (see `templates/FreqaiExampleStrategy.py` for detailed usage). You can create custom labels and choose whether or not to make use of this parameter. <br> **Datatype:** Positive integer. |
| `include_shifted_candles` | Add features from previous candles to subsequent candles with the intent of adding historical information. If used, FreqAI will duplicate and shift all features from the `include_shifted_candles` previous candles so that the information is available for the subsequent candle. <br> **Datatype:** Positive integer. |
| `weight_factor` | Weight training data points according to their recency (see details here). <br> **Datatype:** Positive float (typically < 1). |
| `indicator_max_period_candles` | No longer used (#7325). Replaced by `startup_candle_count`, which is set in the strategy. `startup_candle_count` is timeframe independent and defines the maximum period used in `populate_any_indicators()` for indicator creation. FreqAI uses this parameter together with the maximum timeframe in `include_timeframes` to calculate how many data points to download such that the first data point does not include a NaN. <br> **Datatype:** Positive integer. |
| `indicator_periods_candles` | Time periods to calculate indicators for. The indicators are added to the base indicator dataset. <br> **Datatype:** List of positive integers. |
| `principal_component_analysis` | Automatically reduce the dimensionality of the data set using Principal Component Analysis. See details about how it works here. <br> **Datatype:** Boolean. <br> Default: `False`. |
| `plot_feature_importances` | Create a feature importance plot for each model for the top/bottom `plot_feature_importances` number of features. <br> **Datatype:** Integer. <br> Default: `0`. |
| `DI_threshold` | Activates the use of the Dissimilarity Index for outlier detection when set to > 0. See details about how it works here. <br> **Datatype:** Positive float (typically < 1). |
| `use_SVM_to_remove_outliers` | Train a support vector machine to detect and remove outliers from the training dataset, as well as from incoming data points. See details about how it works here. <br> **Datatype:** Boolean. |
| `svm_params` | All parameters available in scikit-learn's `SGDOneClassSVM()`. See details about some select parameters here. <br> **Datatype:** Dictionary. |
| `use_DBSCAN_to_remove_outliers` | Cluster data using the DBSCAN algorithm to identify and remove outliers from training and prediction data. See details about how it works here. <br> **Datatype:** Boolean. |
| `inlier_metric_window` | If set, FreqAI adds an `inlier_metric` to the training feature set and sets the lookback to be the `inlier_metric_window`, i.e., the number of previous time points to compare the current candle to. Details of how the `inlier_metric` is computed can be found here. <br> **Datatype:** Integer. <br> Default: `0`. |
| `noise_standard_deviation` | If set, FreqAI adds noise to the training features with the aim of preventing overfitting. FreqAI generates random deviates from a gaussian distribution with a standard deviation of `noise_standard_deviation` and adds them to all data points. `noise_standard_deviation` should be kept relative to the normalized space, i.e., between -1 and 1. In other words, since data in FreqAI is always normalized to be between -1 and 1, `noise_standard_deviation: 0.05` would result in 32% of the data being randomly increased/decreased by more than 2.5% (i.e., the percent of data falling outside the first standard deviation). <br> **Datatype:** Integer. <br> Default: `0`. |
| `outlier_protection_percentage` | Enable to prevent outlier detection methods from discarding too much data. If more than `outlier_protection_percentage` % of points are detected as outliers by the SVM or DBSCAN, FreqAI will log a warning message and ignore outlier detection, i.e., the original dataset will be kept intact. If the outlier protection is triggered, no predictions will be made based on the training dataset. <br> **Datatype:** Float. <br> Default: `30`. |
| `reverse_train_test_order` | Split the feature dataset (see below) and use the latest data split for training and test on the historical split of the data. This allows the model to be trained up to the most recent data point, while avoiding overfitting. However, you should be careful to understand the unorthodox nature of this parameter before employing it. <br> **Datatype:** Boolean. <br> Default: `False` (no reversal). |
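
A sketch of how the feature parameters above nest inside `freqai.feature_parameters`; timeframes, pairs, and numeric values are illustrative only:

```json
{
    "freqai": {
        "feature_parameters": {
            "include_timeframes": ["5m", "15m", "4h"],
            "include_corr_pairlist": ["BTC/USD", "ETH/USD"],
            "label_period_candles": 24,
            "include_shifted_candles": 2,
            "weight_factor": 0.9,
            "indicator_periods_candles": [10, 20],
            "plot_feature_importances": 0,
            "use_SVM_to_remove_outliers": true,
            "DI_threshold": 0.9,
            "noise_standard_deviation": 0.05,
            "outlier_protection_percentage": 30
        }
    }
}
```

As a worked example of the `noise_standard_deviation` row: with data normalized to between -1 and 1 (a range of 2), a standard deviation of 0.05 corresponds to 2.5% of that range, and roughly 32% of Gaussian draws fall outside ±1 standard deviation, hence "32% of the data being randomly increased/decreased by more than 2.5%".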

| Parameter | Description |
|------------|-------------|
| **Data split parameters** | |
| `data_split_parameters` | Include any additional parameters available from scikit-learn's `train_test_split()`, which are shown here (external website). <br> **Datatype:** Dictionary. |
| `test_size` | The fraction of data that should be used for testing instead of training. <br> **Datatype:** Positive float < 1. |
| `shuffle` | Shuffle the training data points during training. Typically, to preserve the chronological order of data in time-series forecasting, this is set to `False`. <br> **Datatype:** Boolean. <br> Default: `False`. |
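
For example, since these keys are forwarded to scikit-learn's `train_test_split()`, a minimal block might look like this (values illustrative):

```json
{
    "freqai": {
        "data_split_parameters": {
            "test_size": 0.33,
            "shuffle": false
        }
    }
}
```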

| Parameter | Description |
|------------|-------------|
| **Model training parameters** | |
| `model_training_parameters` | A flexible dictionary that includes all parameters available in the selected model library. For example, if you use `LightGBMRegressor`, this dictionary can contain any parameter available for `LightGBMRegressor` here (external website). If you select a different model, this dictionary can contain any parameter from that model. A list of the currently available models can be found here. <br> **Datatype:** Dictionary. |
| `n_estimators` | The number of boosted trees to fit in the training of the model. <br> **Datatype:** Integer. |
| `learning_rate` | Boosting learning rate during training of the model. <br> **Datatype:** Float. |
| `n_jobs`, `thread_count`, `task_type` | Set the number of threads for parallel processing and the `task_type` (`gpu` or `cpu`). Different model libraries use different parameter names. <br> **Datatype:** Float. |
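
As an illustration, assuming `LightGBMRegressor` as the model: the dictionary is forwarded to the model library, so any of its keyword arguments can appear here (values illustrative):

```json
{
    "freqai": {
        "model_training_parameters": {
            "n_estimators": 800,
            "learning_rate": 0.02,
            "n_jobs": 4
        }
    }
}
```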

| Parameter | Description |
|------------|-------------|
| **Reinforcement Learning parameters** | |
| `rl_config` | A dictionary containing the control parameters for a Reinforcement Learning model. <br> **Datatype:** Dictionary. |
| `train_cycles` | Training time steps are set based on `train_cycles * number of training data points`. <br> **Datatype:** Integer. |
| `cpu_count` | Number of threads/CPUs to dedicate to the Reinforcement Learning training process (depending on whether `ReinforcementLearning_multiproc` is selected or not). <br> **Datatype:** Integer. |
| `max_trade_duration_candles` | Guides the agent training to keep trades below the desired length. Example usage shown in `prediction_models/ReinforcementLearner.py` within the user-customizable `calculate_reward()`. <br> **Datatype:** Integer. |
| `model_type` | Model string from stable_baselines3 or SB3 Contrib. Available strings include: `'TRPO', 'ARS', 'RecurrentPPO', 'MaskablePPO', 'PPO', 'A2C', 'DQN'`. The user should ensure that `model_training_parameters` match those available to the corresponding stable_baselines3 model by visiting their documentation. PPO doc (external website). <br> **Datatype:** String. |
| `policy_type` | One of the available policy types from stable_baselines3. <br> **Datatype:** String. |
| `max_training_drawdown_pct` | The maximum drawdown that the agent is allowed to experience during training. <br> **Datatype:** Float. <br> Default: `0.8`. |
| `model_reward_parameters` | Parameters used inside the user-customizable `calculate_reward()` function in `ReinforcementLearner.py`. <br> **Datatype:** Dictionary. |
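
A sketch of an `rl_config` block built from the parameters above. `"MlpPolicy"` is one of the standard stable_baselines3 policy types, and the keys inside `model_reward_parameters` (`rr`, `profit_aim`) are hypothetical placeholders whose meaning depends entirely on your own `calculate_reward()`:

```json
{
    "freqai": {
        "rl_config": {
            "train_cycles": 25,
            "cpu_count": 4,
            "max_trade_duration_candles": 300,
            "max_training_drawdown_pct": 0.8,
            "model_type": "PPO",
            "policy_type": "MlpPolicy",
            "model_reward_parameters": {
                "rr": 1,
                "profit_aim": 0.025
            }
        }
    }
}
```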

| Parameter | Description |
|------------|-------------|
| **Extraneous parameters** | |
| `keras` | If the selected model makes use of Keras (typical for TensorFlow-based prediction models), this flag needs to be activated so that model saving/loading follows Keras standards. <br> **Datatype:** Boolean. <br> Default: `False`. |
| `conv_width` | The width of a convolutional neural network input tensor. This replaces the need for shifting candles (`include_shifted_candles`) by feeding in historical data points as the second dimension of the tensor. Technically, this parameter can also be used for regressors, but it only adds computational overhead and does not change the model training/prediction. <br> **Datatype:** Integer. <br> Default: `2`. |
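
These two flags also live at the top level of the `freqai` dictionary; a minimal sketch assuming a Keras/TensorFlow-based prediction model:

```json
{
    "freqai": {
        "keras": true,
        "conv_width": 2
    }
}
```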