Mirror of https://github.com/freqtrade/freqtrade.git (synced 2024-11-10 02:12:01 +00:00)

Merge remote-tracking branch 'upstream/develop' into Error-Skipping

Commit: 0008a87232
.github/workflows/ci.yml

@@ -129,7 +129,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        os: [ "macos-latest", "macos-13", "macos-14" ]
+        os: [ "macos-12", "macos-13", "macos-14" ]
         python-version: ["3.9", "3.10", "3.11", "3.12"]
         exclude:
           - os: "macos-14"
@@ -414,7 +414,7 @@ jobs:
         pytest --random-order --longrun --durations 20 -n auto


-  # Notify only once - when CI completes (and after deploy) in case it's successfull
+  # Notify only once - when CI completes (and after deploy) in case it's successful
   notify-complete:
     needs: [
       build-linux,
@@ -31,7 +31,7 @@ repos:

  - repo: https://github.com/charliermarsh/ruff-pre-commit
    # Ruff version.
-   rev: 'v0.3.5'
+   rev: 'v0.4.1'
    hooks:
      - id: ruff
@@ -54,3 +54,10 @@ repos:
        (?x)^(
          .*\.md
        )$
+
+ - repo: https://github.com/codespell-project/codespell
+   rev: v2.2.6
+   hooks:
+     - id: codespell
+       additional_dependencies:
+         - tomli
@@ -1,4 +1,4 @@
-# File used in CI to ensure pre-commit dependencies are kept uptodate.
+# File used in CI to ensure pre-commit dependencies are kept up-to-date.

 import sys
 from pathlib import Path
Binary file not shown.
@@ -36,7 +36,7 @@ freqtrade backtesting-analysis -c <config.json> --analysis-groups 0 1 2 3 4 5
 ```

 This command will read from the last backtesting results. The `--analysis-groups` option is
-used to specify the various tabular outputs showing the profit fo each group or trade,
+used to specify the various tabular outputs showing the profit of each group or trade,
 ranging from the simplest (0) to the most detailed per pair, per buy and per sell tag (4):

 * 0: overall winrate and profit summary by enter_tag
@@ -587,7 +587,7 @@ These precision values are based on current exchange limits (as described in the

 ## Improved backtest accuracy

-One big limitation of backtesting is it's inability to know how prices moved intra-candle (was high before close, or viceversa?).
+One big limitation of backtesting is it's inability to know how prices moved intra-candle (was high before close, or vice-versa?).
 So assuming you run backtesting with a 1h timeframe, there will be 4 prices for that candle (Open, High, Low, Close).

 While backtesting does take some assumptions (read above) about this - this can never be perfect, and will always be biased in one way or the other.
@@ -547,7 +547,7 @@ is automatically cancelled by the exchange.
 **PO (Post only):**

 Post only order. The order is either placed as a maker order, or it is canceled.
-This means the order must be placed on orderbook for at at least time in an unfilled state.
+This means the order must be placed on orderbook for at least time in an unfilled state.

 #### time_in_force config
@@ -261,7 +261,7 @@ For that reason, they must implement the following methods:

 The `until` portion should be calculated using the provided `calculate_lock_end()` method.

-All Protections should use `"stop_duration"` / `"stop_duration_candles"` to define how long a a pair (or all pairs) should be locked.
+All Protections should use `"stop_duration"` / `"stop_duration_candles"` to define how long a pair (or all pairs) should be locked.
 The content of this is made available as `self._stop_duration` to the each Protection.

 If your protection requires a look-back period, please use `"lookback_period"` / `"lockback_period_candles"` to keep all protections aligned.
@@ -137,7 +137,7 @@ $$ R = \frac{\text{average_profit}}{\text{average_loss}} = \frac{\mu_{win}}{\mu_{loss}}$$

 ### Expectancy

-By combining the Win Rate $W$ and and the Risk Reward ratio $R$ to create an expectancy ratio $E$. A expectance ratio is the expected return of the investment made in a trade. We can compute the value of $E$ as follows:
+By combining the Win Rate $W$ and the Risk Reward ratio $R$ to create an expectancy ratio $E$. A expectance ratio is the expected return of the investment made in a trade. We can compute the value of $E$ as follows:

 $$E = R * W - L$$
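As a quick check on the formula in the hunk above (with $L = 1 - W$ denoting the loss rate, as defined earlier in that docs section): assuming a hypothetical win rate of 60% and a risk-reward ratio of 1.5,

$$E = R \times W - L = 1.5 \times 0.6 - 0.4 = 0.5$$

i.e. such a strategy would expect to earn 0.5 times its average loss per trade.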
@@ -2,7 +2,7 @@

 ## Supported Markets

-Freqtrade supports spot trading, as well as (isolated) futures trading for some selected exchanges. Please refer to the [documentation start page](index.md#supported-futures-exchanges-experimental) for an uptodate list of supported exchanges.
+Freqtrade supports spot trading, as well as (isolated) futures trading for some selected exchanges. Please refer to the [documentation start page](index.md#supported-futures-exchanges-experimental) for an up-to-date list of supported exchanges.

 ### Can my bot open short positions?
@@ -14,7 +14,7 @@ In spot markets, you can in some cases use leveraged spot tokens, which reflect

 ### Can my bot trade options or futures?

-Futures trading is supported for selected exchanges. Please refer to the [documentation start page](index.md#supported-futures-exchanges-experimental) for an uptodate list of supported exchanges.
+Futures trading is supported for selected exchanges. Please refer to the [documentation start page](index.md#supported-futures-exchanges-experimental) for an up-to-date list of supported exchanges.

 ## Beginner Tips & Tricks
@@ -31,7 +31,7 @@ Mandatory parameters are marked as **Required** and have to be set in one of the
 | `feature_parameters` | A dictionary containing the parameters used to engineer the feature set. Details and examples are shown [here](freqai-feature-engineering.md). <br> **Datatype:** Dictionary.
 | `include_timeframes` | A list of timeframes that all indicators in `feature_engineering_expand_*()` will be created for. The list is added as features to the base indicators dataset. <br> **Datatype:** List of timeframes (strings).
 | `include_corr_pairlist` | A list of correlated coins that FreqAI will add as additional features to all `pair_whitelist` coins. All indicators set in `feature_engineering_expand_*()` during feature engineering (see details [here](freqai-feature-engineering.md)) will be created for each correlated coin. The correlated coins features are added to the base indicators dataset. <br> **Datatype:** List of assets (strings).
-| `label_period_candles` | Number of candles into the future that the labels are created for. This is used in `feature_engineering_expand_all()` (see `templates/FreqaiExampleStrategy.py` for detailed usage). You can create custom labels and choose whether to make use of this parameter or not. <br> **Datatype:** Positive integer.
+| `label_period_candles` | Number of candles into the future that the labels are created for. This can be used in `set_freqai_targets()` (see `templates/FreqaiExampleStrategy.py` for detailed usage). This parameter is not necessarily required, you can create custom labels and choose whether to make use of this parameter or not. Please see `templates/FreqaiExampleStrategy.py` to see the example usage. <br> **Datatype:** Positive integer.
 | `include_shifted_candles` | Add features from previous candles to subsequent candles with the intent of adding historical information. If used, FreqAI will duplicate and shift all features from the `include_shifted_candles` previous candles so that the information is available for the subsequent candle. <br> **Datatype:** Positive integer.
 | `weight_factor` | Weight training data points according to their recency (see details [here](freqai-feature-engineering.md#weighting-features-for-temporal-importance)). <br> **Datatype:** Positive float (typically < 1).
 | `indicator_max_period_candles` | **No longer used (#7325)**. Replaced by `startup_candle_count` which is set in the [strategy](freqai-configuration.md#building-a-freqai-strategy). `startup_candle_count` is timeframe independent and defines the maximum *period* used in `feature_engineering_*()` for indicator creation. FreqAI uses this parameter together with the maximum timeframe in `include_time_frames` to calculate how many data points to download such that the first data point does not include a NaN. <br> **Datatype:** Positive integer.
@@ -55,7 +55,7 @@ Mandatory parameters are marked as **Required** and have to be set in one of the
 | | **Data split parameters within the `freqai.data_split_parameters` sub dictionary**
 | `data_split_parameters` | Include any additional parameters available from scikit-learn `test_train_split()`, which are shown [here](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) (external website). <br> **Datatype:** Dictionary.
 | `test_size` | The fraction of data that should be used for testing instead of training. <br> **Datatype:** Positive float < 1.
-| `shuffle` | Shuffle the training data points during training. Typically, to not remove the chronological order of data in time-series forecasting, this is set to `False`. <br> **Datatype:** Boolean. <br> Defaut: `False`.
+| `shuffle` | Shuffle the training data points during training. Typically, to not remove the chronological order of data in time-series forecasting, this is set to `False`. <br> **Datatype:** Boolean. <br> Default: `False`.

 ### Model training parameters
@@ -1,6 +1,6 @@
 markdown==3.6
 mkdocs==1.5.3
-mkdocs-material==9.5.17
+mkdocs-material==9.5.18
 mdx_truly_sane_lists==1.3
-pymdown-extensions==10.7.1
+pymdown-extensions==10.8
 jinja2==3.1.3
@@ -89,7 +89,8 @@ Make sure that the following 2 lines are available in your docker-compose file:
 ```

 !!! Danger "Security warning"
-    By using `8080:8080` in the docker port mapping, the API will be available to everyone connecting to the server under the correct port, so others may be able to control your bot.
+    By using `"8080:8080"` (or `"0.0.0.0:8080:8080"`) in the docker port mapping, the API will be available to everyone connecting to the server under the correct port, so others may be able to control your bot.
+    This **may** be safe if you're running the bot in a secure environment (like your home network), but it's not recommended to expose the API to the internet.

 ## Rest API
@@ -454,7 +455,7 @@ To properly configure your reverse proxy (securely), please consult it's documentation
 - **Caddy**: Caddy v2 supports websockets out of the box, see the [documentation](https://caddyserver.com/docs/v2-upgrade#proxy)

 !!! Tip "SSL certificates"
-    You can use tools like certbot to setup ssl certificates to access your bot's UI through encrypted connection by using any fo the above reverse proxies.
+    You can use tools like certbot to setup ssl certificates to access your bot's UI through encrypted connection by using any of the above reverse proxies.
     While this will protect your data in transit, we do not recommend to run the freqtrade API outside of your private network (VPN, SSH tunnel).

 ### OpenAPI interface
@@ -240,7 +240,7 @@ When using leverage, the same principle is applied - with stoploss defining the

 Therefore, a stoploss of 10% on a 10x trade would trigger on a 1% price move.
 If your stake amount (own capital) was 100$ - this trade would be 1000$ at 10x (after leverage).
-If price moves 1% - you've lost 10$ of your own capital - therfore stoploss will trigger in this case.
+If price moves 1% - you've lost 10$ of your own capital - therefore stoploss will trigger in this case.

 Make sure to be aware of this, and avoid using too tight stoploss (at 10x leverage, 10% risk may be too little to allow the trade to "breath" a little).
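To make the arithmetic in this hunk concrete, a minimal sketch with hypothetical numbers (plain Python, not part of the freqtrade API):

```python
# With leverage, a stoploss measured against own capital corresponds to a
# price move of stoploss / leverage. Hypothetical values for illustration.
stake = 100.0      # own capital in $
leverage = 10.0    # 10x
stoploss = 0.10    # 10% of own capital

position_size = stake * leverage      # 1000 $ position
price_move = stoploss / leverage      # 1% price move triggers the stop
loss = position_size * price_move     # 10 $ = 10% of the 100 $ stake

assert loss == stake * stoploss
print(f"{price_move:.0%} price move -> {loss:.0f}$ loss on {stake:.0f}$ stake")
```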
@@ -326,4 +326,4 @@ for val in self.buy_ema_short.range:
     dataframe = pd.concat(frames, axis=1)
 ```

-Freqtrade does however also counter this by running `dataframe.copy()` on the dataframe right after the `populate_indicators()` method - so performance implications of this should be low to non-existant.
+Freqtrade does however also counter this by running `dataframe.copy()` on the dataframe right after the `populate_indicators()` method - so performance implications of this should be low to non-existent.
@@ -551,8 +551,8 @@ for more information.

     # Define BTC/STAKE informative pair. A custom formatter may be specified for formatting
     # column names. A callable `fmt(**kwargs) -> str` may be specified, to implement custom
-    # formatting. Available in populate_indicators and other methods as 'rsi_upper'.
-    @informative('1h', 'BTC/{stake}', '{column}')
+    # formatting. Available in populate_indicators and other methods as 'rsi_upper_1h'.
+    @informative('1h', 'BTC/{stake}', '{column}_{timeframe}')
     def populate_indicators_btc_1h_2(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
         dataframe['rsi_upper'] = ta.RSI(dataframe, timeperiod=14)
         return dataframe
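For readers skimming the hunk: the decorator's third argument is a column-name format string, so the doc fix above makes the example's column names explicit about their timeframe. A hedged illustration of the renaming logic (plain Python string formatting, mirroring what that format-string argument does):

```python
# Illustration only: how a '{column}_{timeframe}' format string maps a column
# produced in the informative method to the name seen by the strategy.
fmt = '{column}_{timeframe}'
print(fmt.format(column='rsi_upper', timeframe='1h'))  # -> 'rsi_upper_1h'
```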
@@ -776,7 +776,7 @@ The orderbook structure is aligned with the order structure from [ccxt](https://
 Therefore, using `ob['bids'][0][0]` as demonstrated above will result in using the best bid price. `ob['bids'][0][1]` would look at the amount at this orderbook position.

 !!! Warning "Warning about backtesting"
-    The order book is not part of the historic data which means backtesting and hyperopt will not work correctly if this method is used, as the method will return uptodate values.
+    The order book is not part of the historic data which means backtesting and hyperopt will not work correctly if this method is used, as the method will return up-to-date values.

 ### *ticker(pair)*
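A small sketch of the indexing convention this hunk refers to (ccxt-style order book, hypothetical numbers):

```python
# ccxt-style order book: 'bids'/'asks' are lists of [price, amount] pairs,
# sorted best-first. Values below are made up for illustration.
ob = {
    'bids': [[42000.5, 0.8], [42000.0, 1.2]],
    'asks': [[42001.0, 0.5], [42001.5, 2.0]],
}
best_bid_price = ob['bids'][0][0]   # 42000.5
best_bid_amount = ob['bids'][0][1]  # 0.8
```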
@@ -126,7 +126,7 @@ An `Order` object will always be tied to it's corresponding [`Trade`](#trade-object)
 ### Order - Available attributes

 an Order object is typically attached to a trade.
-Most properties here can be None as they are dependant on the exchange response.
+Most properties here can be None as they are dependent on the exchange response.

 | Attribute | DataType | Description |
 |------------|-------------|-------------|
@@ -141,7 +141,7 @@ Most properties here can be None as they are dependant on the exchange response.
 `amount` | float | Amount in base currency
 `filled` | float | Filled amount (in base currency)
 `remaining` | float | Remaining amount
-`cost` | float | Cost of the order - usually average * filled (*Exchange dependant on futures, may contain the cost with or without leverage and may be in contracts.*)
+`cost` | float | Cost of the order - usually average * filled (*Exchange dependent on futures, may contain the cost with or without leverage and may be in contracts.*)
 `stake_amount` | float | Stake amount used for this order. *Added in 2023.7.*
 `order_date` | datetime | Order creation date **use `order_date_utc` instead**
 `order_date_utc` | datetime | Order creation date (in UTC)
@@ -16,6 +16,10 @@ from freqtrade.util import render_template, render_template_with_fallback
 logger = logging.getLogger(__name__)

+
+# Timeout for requests
+req_timeout = 30
+

 def start_create_userdir(args: Dict[str, Any]) -> None:
     """
     Create "user_data" directory to contain user data strategies, hyperopt, ...)
@@ -119,7 +123,7 @@ def download_and_install_ui(dest_folder: Path, dl_url: str, version: str):
     from zipfile import ZipFile

     logger.info(f"Downloading {dl_url}")
-    resp = requests.get(dl_url).content
+    resp = requests.get(dl_url, timeout=req_timeout).content
     dest_folder.mkdir(parents=True, exist_ok=True)
     with ZipFile(BytesIO(resp)) as zf:
         for fn in zf.filelist:
@@ -137,7 +141,7 @@ def get_ui_download_url(version: Optional[str] = None) -> Tuple[str, str]:
     base_url = 'https://api.github.com/repos/freqtrade/frequi/'
     # Get base UI Repo path

-    resp = requests.get(f"{base_url}releases")
+    resp = requests.get(f"{base_url}releases", timeout=req_timeout)
     resp.raise_for_status()
     r = resp.json()
@@ -158,7 +162,7 @@ def get_ui_download_url(version: Optional[str] = None) -> Tuple[str, str]:
     # URL not found - try assets url
     if not dl_url:
         assets = r[0]['assets_url']
-        resp = requests.get(assets)
+        resp = requests.get(assets, timeout=req_timeout)
         r = resp.json()
         dl_url = r[0]['browser_download_url']
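The three hunks above apply the same fix: `requests.get()` without a `timeout` can block indefinitely if the server never responds. A minimal standalone sketch of the pattern (not freqtrade code; names are illustrative):

```python
import requests

REQ_TIMEOUT = 30  # seconds; mirrors the module-level constant added above

def fetch_json(url: str) -> dict:
    # Without timeout=, a stalled connection would hang this call forever.
    resp = requests.get(url, timeout=REQ_TIMEOUT)
    resp.raise_for_status()  # surface HTTP errors early
    return resp.json()
```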
@@ -202,7 +202,7 @@ class Configuration:

         if self.args.get('show_sensitive'):
             logger.warning(
-                "Sensitive information will be shown in the upcomming output. "
+                "Sensitive information will be shown in the upcoming output. "
                 "Please make sure to never share this output without redacting "
                 "the information yourself.")
@@ -238,6 +238,16 @@ def update_backtest_metadata(filename: Path, strategy: str, content: Dict[str, Any]):
     file_dump_json(get_backtest_metadata_filename(filename), metadata)


+def get_backtest_market_change(filename: Path, include_ts: bool = True) -> pd.DataFrame:
+    """
+    Read backtest market change file.
+    """
+    df = pd.read_feather(filename)
+    if include_ts:
+        df.loc[:, '__date_ts'] = df.loc[:, 'date'].astype(np.int64) // 1000 // 1000
+    return df
+
+
 def find_existing_backtest_stats(dirname: Union[Path, str], run_ids: Dict[str, str],
                                  min_backtest_date: Optional[datetime] = None) -> Dict[str, Any]:
     """
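One detail in the added helper worth spelling out: pandas stores timestamps as `datetime64[ns]`, so casting to `np.int64` yields nanoseconds since epoch, and the double floor-division by 1000 converts that to milliseconds. A tiny standalone check (assumed example data, not freqtrade code):

```python
import numpy as np
import pandas as pd

dates = pd.Series(pd.to_datetime(["2024-01-01 00:00:00"]))
ns = dates.astype(np.int64)   # nanoseconds since epoch
ms = ns // 1000 // 1000       # -> milliseconds since epoch
print(int(ms.iloc[0]))        # 1704067200000
```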
@@ -523,7 +523,7 @@ class DataProvider:
         Send custom RPC Notifications from your bot.
         Will not send any bot in modes other than Dry-run or Live.
         :param message: Message to be sent. Must be below 4096.
-        :param always_send: If False, will send the message only once per candle, and surpress
+        :param always_send: If False, will send the message only once per candle, and suppress
                             identical messages.
                             Careful as this can end up spaming your chat.
                             Defaults to False
@@ -302,8 +302,8 @@ class IDataHandler(ABC):
         Rebuild pair name from filename
         Assumes a asset name of max. 7 length to also support BTC-PERP and BTC-PERP:USD names.
         """
-        res = re.sub(r'^(([A-Za-z\d]{1,10})|^([A-Za-z\-]{1,6}))(_)', r'\g<1>/', pair, 1)
-        res = re.sub('_', ':', res, 1)
+        res = re.sub(r'^(([A-Za-z\d]{1,10})|^([A-Za-z\-]{1,6}))(_)', r'\g<1>/', pair, count=1)
+        res = re.sub('_', ':', res, count=1)
         return res

     def ohlcv_load(self, pair, timeframe: str,
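Context for this hunk: passing `count` positionally to `re.sub()` is deprecated (Python 3.13 emits a DeprecationWarning for positional `count`/`flags`), so the keyword form is future-proof and harder to confuse with `flags`. A minimal sketch:

```python
import re

pair = "BTC_USDT_USDT"  # hypothetical filename-style pair
# Keyword form: unambiguous, and safe on future Python versions.
res = re.sub('_', '/', pair, count=1)
print(res)  # BTC/USDT_USDT
```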
@@ -30,8 +30,25 @@ def calculate_market_change(data: Dict[str, pd.DataFrame], column: str = "close"
     return float(np.mean(tmp_means))


-def combine_dataframes_with_mean(data: Dict[str, pd.DataFrame],
-                                 column: str = "close") -> pd.DataFrame:
+def combine_dataframes_by_column(
+        data: Dict[str, pd.DataFrame], column: str = "close") -> pd.DataFrame:
+    """
+    Combine multiple dataframes "column"
+    :param data: Dict of Dataframes, dict key should be pair.
+    :param column: Column in the original dataframes to use
+    :return: DataFrame with the column renamed to the dict key.
+    :raise: ValueError if no data is provided.
+    """
+    if not data:
+        raise ValueError("No data provided.")
+    df_comb = pd.concat([data[pair].set_index('date').rename(
+        {column: pair}, axis=1)[pair] for pair in data], axis=1)
+    return df_comb
+
+
+def combined_dataframes_with_rel_mean(
+        data: Dict[str, pd.DataFrame], fromdt: datetime, todt: datetime,
+        column: str = "close") -> pd.DataFrame:
     """
     Combine multiple dataframes "column"
     :param data: Dict of Dataframes, dict key should be pair.
@@ -40,8 +57,26 @@ def combine_dataframes_with_mean(data: Dict[str, pd.DataFrame],
         named mean, containing the mean of all pairs.
     :raise: ValueError if no data is provided.
     """
-    df_comb = pd.concat([data[pair].set_index('date').rename(
-        {column: pair}, axis=1)[pair] for pair in data], axis=1)
+    df_comb = combine_dataframes_by_column(data, column)
+    # Trim dataframes to the given timeframe
+    df_comb = df_comb.iloc[(df_comb.index >= fromdt) & (df_comb.index < todt)]
+    df_comb['count'] = df_comb.count(axis=1)
+    df_comb['mean'] = df_comb.mean(axis=1)
+    df_comb['rel_mean'] = df_comb['mean'].pct_change().fillna(0).cumsum()
+    return df_comb[['mean', 'rel_mean', 'count']]
+
+
+def combine_dataframes_with_mean(
+        data: Dict[str, pd.DataFrame], column: str = "close") -> pd.DataFrame:
+    """
+    Combine multiple dataframes "column"
+    :param data: Dict of Dataframes, dict key should be pair.
+    :param column: Column in the original dataframes to use
+    :return: DataFrame with the column renamed to the dict key, and a column
+        named mean, containing the mean of all pairs.
+    :raise: ValueError if no data is provided.
+    """
+    df_comb = combine_dataframes_by_column(data, column)

     df_comb['mean'] = df_comb.mean(axis=1)
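The refactor in the two hunks above extracts the shared concat logic into `combine_dataframes_by_column()` and reuses it from both the plain-mean and the rel-mean variants. A hedged usage sketch with toy data, mirroring the concat expression shown in the diff:

```python
import pandas as pd

dates = pd.date_range("2024-01-01", periods=3, freq="1h")
data = {
    "BTC/USDT": pd.DataFrame({"date": dates, "close": [100.0, 101.0, 102.0]}),
    "ETH/USDT": pd.DataFrame({"date": dates, "close": [50.0, 49.5, 50.5]}),
}
# Equivalent of combine_dataframes_by_column(data, "close"):
# one column per pair, indexed by date.
df_comb = pd.concat(
    [data[pair].set_index("date").rename({"close": pair}, axis=1)[pair]
     for pair in data], axis=1)
df_comb["mean"] = df_comb.mean(axis=1)  # mean across the pair columns
print(df_comb)
```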
@@ -25,6 +25,7 @@ from freqtrade.exchange.exchange_utils_timeframe import (timeframe_to_minutes, t
 from freqtrade.exchange.gate import Gate
 from freqtrade.exchange.hitbtc import Hitbtc
 from freqtrade.exchange.htx import Htx
+from freqtrade.exchange.idex import Idex
 from freqtrade.exchange.kraken import Kraken
 from freqtrade.exchange.kucoin import Kucoin
 from freqtrade.exchange.okx import Okx
File diff suppressed because it is too large.
@@ -239,7 +239,7 @@ class Bybit(Exchange):
         return orders

-    def fetch_order(self, order_id: str, pair: str, params: Dict = {}) -> Dict:
+    def fetch_order(self, order_id: str, pair: str, params: Optional[Dict] = None) -> Dict:
         order = super().fetch_order(order_id, pair, params)
         if (
             order.get('status') == 'canceled'
@@ -44,7 +44,7 @@ from freqtrade.misc import (chunks, deep_merge_dicts, file_dump_json, file_load_
                             safe_value_fallback2)
 from freqtrade.plugins.pairlist.pairlist_helpers import expand_pairlist
 from freqtrade.util import dt_from_ts, dt_now
-from freqtrade.util.datetime_helpers import dt_humanize, dt_ts
+from freqtrade.util.datetime_helpers import dt_humanize_delta, dt_ts
 from freqtrade.util.periodic_cache import PeriodicCache
@@ -239,8 +239,8 @@ class Exchange:
         self.validate_pricing(config['exit_pricing'])
         self.validate_pricing(config['entry_pricing'])

-    def _init_ccxt(self, exchange_config: Dict[str, Any], ccxt_module: CcxtModuleType = ccxt,
-                   ccxt_kwargs: Dict = {}) -> ccxt.Exchange:
+    def _init_ccxt(self, exchange_config: Dict[str, Any], ccxt_module: CcxtModuleType = ccxt, *,
+                   ccxt_kwargs: Dict) -> ccxt.Exchange:
         """
         Initialize ccxt with given config and return valid
         ccxt instance.
@@ -348,10 +348,13 @@ class Exchange:
         return int(self._ft_has.get('ohlcv_candle_limit_per_timeframe', {}).get(
             timeframe, self._ft_has.get('ohlcv_candle_limit')))

-    def get_markets(self, base_currencies: List[str] = [], quote_currencies: List[str] = [],
-                    spot_only: bool = False, margin_only: bool = False, futures_only: bool = False,
-                    tradable_only: bool = True,
-                    active_only: bool = False) -> Dict[str, Any]:
+    def get_markets(
+            self,
+            base_currencies: Optional[List[str]] = None,
+            quote_currencies: Optional[List[str]] = None,
+            spot_only: bool = False, margin_only: bool = False, futures_only: bool = False,
+            tradable_only: bool = True,
+            active_only: bool = False) -> Dict[str, Any]:
         """
         Return exchange ccxt markets, filtered out by base currency and quote currency
         if this was requested in parameters.
@@ -758,7 +761,7 @@ class Exchange:

     def price_get_one_pip(self, pair: str, price: float) -> float:
         """
-        Get's the "1 pip" value for this pair.
+        Gets the "1 pip" value for this pair.
         Used in PriceFilter to calculate the 1pip movements.
         """
         precision = self.markets[pair]['precision']['price']
@@ -848,7 +851,7 @@ class Exchange:
     # Dry-run methods

     def create_dry_run_order(self, pair: str, ordertype: str, side: str, amount: float,
-                             rate: float, leverage: float, params: Dict = {},
+                             rate: float, leverage: float, params: Optional[Dict] = None,
                              stop_loss: bool = False) -> Dict[str, Any]:
         now = dt_now()
         order_id = f'dry_run_{side}_{pair}_{now.timestamp()}'
|
||||||
raise OperationalException(e) from e
|
raise OperationalException(e) from e
|
||||||
|
|
||||||
@retrier(retries=API_FETCH_ORDER_RETRY_COUNT)
|
@retrier(retries=API_FETCH_ORDER_RETRY_COUNT)
|
||||||
def fetch_order(self, order_id: str, pair: str, params: Dict = {}) -> Dict:
|
def fetch_order(self, order_id: str, pair: str, params: Optional[Dict] = None) -> Dict:
|
||||||
if self._config['dry_run']:
|
if self._config['dry_run']:
|
||||||
return self.fetch_dry_run_order(order_id)
|
return self.fetch_dry_run_order(order_id)
|
||||||
|
if params is None:
|
||||||
|
params = {}
|
||||||
try:
|
try:
|
||||||
if not self.exchange_has('fetchOrder'):
|
if not self.exchange_has('fetchOrder'):
|
||||||
return self.fetch_order_emulated(order_id, pair, params)
|
return self.fetch_order_emulated(order_id, pair, params)
|
||||||
|
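A recurring change in this merge replaces `params: Dict = {}` with `params: Optional[Dict] = None` plus an explicit `if params is None: params = {}`. That avoids Python's mutable-default-argument pitfall: the `{}` default is created once at function definition time, so any mutation leaks into later calls. A minimal standalone demonstration (illustrative names, not freqtrade code):

```python
from typing import Dict, Optional

def bad(params: Dict = {}) -> Dict:
    params.setdefault("calls", 0)
    params["calls"] += 1     # mutates the single shared default dict
    return params

def good(params: Optional[Dict] = None) -> Dict:
    if params is None:
        params = {}          # fresh dict on every call
    params.setdefault("calls", 0)
    params["calls"] += 1
    return params

print(bad())   # {'calls': 1}
print(bad())   # {'calls': 2}  <- state leaked between calls
print(good())  # {'calls': 1}
print(good())  # {'calls': 1}
```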
@@ -1321,7 +1326,7 @@ class Exchange:
         except ccxt.BaseError as e:
             raise OperationalException(e) from e

-    def fetch_stoploss_order(self, order_id: str, pair: str, params: Dict = {}) -> Dict:
+    def fetch_stoploss_order(self, order_id: str, pair: str, params: Optional[Dict] = None) -> Dict:
         return self.fetch_order(order_id, pair, params)

     def fetch_order_or_stoploss_order(self, order_id: str, pair: str,
|
||||||
and order.get('filled') == 0.0)
|
and order.get('filled') == 0.0)
|
||||||
|
|
||||||
@retrier
|
@retrier
|
||||||
def cancel_order(self, order_id: str, pair: str, params: Dict = {}) -> Dict:
|
def cancel_order(self, order_id: str, pair: str, params: Optional[Dict] = None) -> Dict:
|
||||||
if self._config['dry_run']:
|
if self._config['dry_run']:
|
||||||
try:
|
try:
|
||||||
order = self.fetch_dry_run_order(order_id)
|
order = self.fetch_dry_run_order(order_id)
|
||||||
|
@@ -1357,6 +1362,8 @@ class Exchange:
             except InvalidOrderException:
                 return {}

+        if params is None:
+            params = {}
         try:
             order = self._api.cancel_order(order_id, pair, params=params)
             self._log_exchange_response('cancel_order', order)
@@ -1373,7 +1380,8 @@ class Exchange:
         except ccxt.BaseError as e:
             raise OperationalException(e) from e

-    def cancel_stoploss_order(self, order_id: str, pair: str, params: Dict = {}) -> Dict:
+    def cancel_stoploss_order(
+            self, order_id: str, pair: str, params: Optional[Dict] = None) -> Dict:
         return self.cancel_order(order_id, pair, params)

     def is_cancel_order_result_suitable(self, corder) -> bool:
@@ -2000,14 +2008,14 @@ class Exchange:
         logger.debug(
             "one_call: %s msecs (%s)",
             one_call,
-            dt_humanize(dt_now() - timedelta(milliseconds=one_call), only_distance=True)
+            dt_humanize_delta(dt_now() - timedelta(milliseconds=one_call))
         )
         input_coroutines = [self._async_get_candle_history(
             pair, timeframe, candle_type, since) for since in
             range(since_ms, until_ms or dt_ts(), one_call)]

         data: List = []
-        # Chunk requests into batches of 100 to avoid overwelming ccxt Throttling
+        # Chunk requests into batches of 100 to avoid overwhelming ccxt Throttling
         for input_coro in chunks(input_coroutines, 100):

             results = await asyncio.gather(*input_coro, return_exceptions=True)
@@ -2124,7 +2132,7 @@ class Exchange:
         Only used in the dataprovider.refresh() method.
         :param pair_list: List of 2 element tuples containing pair, interval to refresh
         :param since_ms: time since when to download, in milliseconds
-        :param cache: Assign result to _klines. Usefull for one-off downloads like for pairlists
+        :param cache: Assign result to _klines. Useful for one-off downloads like for pairlists
         :param drop_incomplete: Control candle dropping.
             Specifying None defaults to _ohlcv_partial_candle
         :return: Dict of [{(pair, timeframe): Dataframe}]
@@ -2135,7 +2143,7 @@ class Exchange:
         input_coroutines, cached_pairs = self._build_ohlcv_dl_jobs(pair_list, since_ms, cache)

         results_df = {}
-        # Chunk requests into batches of 100 to avoid overwelming ccxt Throttling
+        # Chunk requests into batches of 100 to avoid overwhelming ccxt Throttling
         for input_coro in chunks(input_coroutines, 100):
             async def gather_stuff():
                 return await asyncio.gather(*input_coro, return_exceptions=True)
@@ -2295,7 +2303,7 @@ class Exchange:
                                  since: Optional[int] = None,
                                  params: Optional[dict] = None) -> Tuple[List[List], Any]:
         """
-        Asyncronously gets trade history using fetch_trades.
+        Asynchronously gets trade history using fetch_trades.
         Handles exchange errors, does one call to the exchange.
         :param pair: Pair to fetch trade data for
         :param since: Since as integer timestamp in milliseconds
@@ -2352,7 +2360,7 @@ class Exchange:
                                           since: Optional[int] = None,
                                           from_id: Optional[str] = None) -> Tuple[str, List[List]]:
         """
-        Asyncronously gets trade history using fetch_trades
+        Asynchronously gets trade history using fetch_trades
         use this when exchange uses id-based iteration (check `self._trades_pagination`)
         :param pair: Pair to fetch trade data for
         :param since: Since as integer timestamp in milliseconds
@@ -2403,7 +2411,7 @@ class Exchange:
     async def _async_get_trade_history_time(self, pair: str, until: int,
                                             since: Optional[int] = None) -> Tuple[str, List[List]]:
         """
-        Asyncronously gets trade history using fetch_trades,
+        Asynchronously gets trade history using fetch_trades,
         when the exchange uses time-based iteration (check `self._trades_pagination`)
         :param pair: Pair to fetch trade data for
         :param since: Since as integer timestamp in milliseconds
@@ -2786,7 +2794,7 @@ class Exchange:

     @retrier
     def set_margin_mode(self, pair: str, margin_mode: MarginMode, accept_fail: bool = False,
-                        params: dict = {}):
+                        params: Optional[Dict] = None):
         """
         Set's the margin mode on the exchange to cross or isolated for a specific pair
         :param pair: base/quote currency pair (e.g. "ADA/USDT")
@@ -2795,6 +2803,8 @@ class Exchange:
             # Some exchanges only support one margin_mode type
             return

+        if params is None:
+            params = {}
         try:
             res = self._api.set_margin_mode(margin_mode.value, pair, params)
             self._log_exchange_response('set_margin_mode', res)
@@ -79,7 +79,7 @@ class Gate(Exchange):
         # As such, futures orders on gate will not contain a fee, which causes
         # a repeated "update fee" cycle and wrong calculations.
         # Therefore we patch the response with fees if it's not available.
-        # An alternative also contianing fees would be
+        # An alternative also containing fees would be
         # privateFuturesGetSettleAccountBook({"settle": "usdt"})
         pair_fees = self._trading_fees.get(pair, {})
         if pair_fees:
|
||||||
def get_order_id_conditional(self, order: Dict[str, Any]) -> str:
|
def get_order_id_conditional(self, order: Dict[str, Any]) -> str:
|
||||||
return safe_value_fallback2(order, order, 'id_stop', 'id')
|
return safe_value_fallback2(order, order, 'id_stop', 'id')
|
||||||
|
|
||||||
def fetch_stoploss_order(self, order_id: str, pair: str, params: Dict = {}) -> Dict:
|
def fetch_stoploss_order(self, order_id: str, pair: str, params: Optional[Dict] = None) -> Dict:
|
||||||
order = self.fetch_order(
|
order = self.fetch_order(
|
||||||
order_id=order_id,
|
order_id=order_id,
|
||||||
pair=pair,
|
pair=pair,
|
||||||
|
@@ -119,7 +119,8 @@ class Gate(Exchange):
                 return order1
         return order

-    def cancel_stoploss_order(self, order_id: str, pair: str, params: Dict = {}) -> Dict:
+    def cancel_stoploss_order(
+            self, order_id: str, pair: str, params: Optional[Dict] = None) -> Dict:
         return self.cancel_order(
             order_id=order_id,
             pair=pair,
freqtrade/exchange/idex.py (new file, 19 lines)

@@ -0,0 +1,19 @@
+""" Idex exchange subclass """
+import logging
+from typing import Dict
+
+from freqtrade.exchange import Exchange
+
+
+logger = logging.getLogger(__name__)
+
+
+class Idex(Exchange):
+    """
+    Idex exchange class. Contains adjustments needed for Freqtrade to work
+    with this exchange.
+    """
+
+    _ft_has: Dict = {
+        "ohlcv_candle_limit": 1000,
+    }
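For readers unfamiliar with the codebase: judging from how the same key is read in `Exchange.ohlcv_candle_limit()` in the hunks above (`self._ft_has.get('ohlcv_candle_limit')`), `_ft_has` is the dictionary exchange subclasses use to declare exchange-specific overrides, so the new file caps OHLCV downloads at 1000 candles per request for this exchange.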
@@ -56,7 +56,7 @@ class Okx(Exchange):
         """
         Exchange ohlcv candle limit
         OKX has the following behaviour:
-        * 300 candles for uptodate data
+        * 300 candles for up-to-date data
         * 100 candles for historic data
         * 100 candles for additional candles (not futures or spot).
         :param timeframe: Timeframe to check
|
||||||
order['type'] = 'stoploss'
|
order['type'] = 'stoploss'
|
||||||
return order
|
return order
|
||||||
|
|
||||||
def fetch_stoploss_order(self, order_id: str, pair: str, params: Dict = {}) -> Dict:
|
def fetch_stoploss_order(self, order_id: str, pair: str, params: Optional[Dict] = None) -> Dict:
|
||||||
if self._config['dry_run']:
|
if self._config['dry_run']:
|
||||||
return self.fetch_dry_run_order(order_id)
|
return self.fetch_dry_run_order(order_id)
|
||||||
|
|
||||||
|
@ -232,7 +232,8 @@ class Okx(Exchange):
|
||||||
return safe_value_fallback2(order, order, 'id_stop', 'id')
|
return safe_value_fallback2(order, order, 'id_stop', 'id')
|
||||||
return order['id']
|
return order['id']
|
||||||
|
|
||||||
def cancel_stoploss_order(self, order_id: str, pair: str, params: Dict = {}) -> Dict:
|
def cancel_stoploss_order(
|
||||||
|
self, order_id: str, pair: str, params: Optional[Dict] = None) -> Dict:
|
||||||
params1 = {'stop': True}
|
params1 = {'stop': True}
|
||||||
# 'ordType': 'conditional'
|
# 'ordType': 'conditional'
|
||||||
#
|
#
|
||||||
|
|
|
@@ -222,7 +222,7 @@ class BaseEnvironment(gym.Env):
     @abstractmethod
     def step(self, action: int):
         """
-        Step depeneds on action types, this must be inherited.
+        Step depends on action types, this must be inherited.
         """
         return
@@ -326,7 +326,7 @@ class BaseEnvironment(gym.Env):

     def _update_unrealized_total_profit(self):
         """
-        Update the unrealized total profit incase of episode end.
+        Update the unrealized total profit in case of episode end.
         """
         if self._position in (Positions.Long, Positions.Short):
             pnl = self.get_unrealized_profit()
@@ -357,7 +357,7 @@ class BaseEnvironment(gym.Env):
         """
         return self.actions

-    # Keeping around incase we want to start building more complex environment
+    # Keeping around in case we want to start building more complex environment
     # templates in the future.
     # def most_recent_return(self):
     #     """
@@ -311,7 +311,7 @@ class BaseReinforcementLearningModel(IFreqaiModel):
             if not prices_train_old.empty:
                 prices_train = prices_train_old
                 rename_dict = rename_dict_old
-                logger.warning('Reinforcement learning module didnt find the correct raw prices '
+                logger.warning('Reinforcement learning module didn\'t find the correct raw prices '
                                'assigned in feature_engineering_standard(). '
                                'Please assign them with:\n'
                                'dataframe["%-raw_close"] = dataframe["close"]\n'
@@ -458,7 +458,7 @@ def make_env(MyRLEnv: Type[BaseEnvironment], env_id: str, rank: int,

     :param env_id: (str) the environment ID
     :param num_env: (int) the number of environment you wish to have in subprocesses
-    :param seed: (int) the inital seed for RNG
+    :param seed: (int) the initial seed for RNG
     :param rank: (int) index of the subprocess
     :param env_info: (dict) all required arguments to instantiate the environment.
     :return: (Callable)
@@ -280,7 +280,7 @@ class FreqaiDataDrawer:

         new_pred = pred_df.copy()
         # set new_pred values to nans (we want to signal to user that there was nothing
-        # historically made during downtime. The newest pred will get appeneded later in
+        # historically made during downtime. The newest pred will get appended later in
         # append_model_predictions)
         new_pred["date_pred"] = dataframe["date"]
@@ -612,7 +612,7 @@ class FreqaiDataKitchen:
         pairs = self.freqai_config["feature_parameters"].get("include_corr_pairlist", [])

         for pair in pairs:
-            pair = pair.replace(':', '')  # lightgbm doesnt like colons
+            pair = pair.replace(':', '')  # lightgbm does not like colons
             pair_cols = [col for col in dataframe.columns if col.startswith("%")
                          and f"{pair}_" in col]
@@ -638,7 +638,7 @@ class FreqaiDataKitchen:
         pairs = self.freqai_config["feature_parameters"].get("include_corr_pairlist", [])
         current_pair = current_pair.replace(':', '')
         for pair in pairs:
-            pair = pair.replace(':', '')  # lightgbm doesnt work with colons
+            pair = pair.replace(':', '')  # lightgbm does not work with colons
             if current_pair != pair:
                 dataframe = dataframe.merge(corr_dataframes[pair], how='left', on='date')
@@ -841,7 +841,7 @@ class FreqaiDataKitchen:
             f = spy.stats.norm.fit(self.data_dictionary["train_labels"][label])
             self.data["labels_mean"][label], self.data["labels_std"][label] = f[0], f[1]

-        # incase targets are classifications
+        # in case targets are classifications
         for label in self.unique_class_list:
             self.data["labels_mean"][label], self.data["labels_std"][label] = 0, 0
@@ -221,7 +221,7 @@ class IFreqaiModel(ABC):
                 time.sleep(1)
             pair = self.train_queue[0]

-            # ensure pair is avaialble in dp
+            # ensure pair is available in dp
             if pair not in strategy.dp.current_whitelist():
                 self.train_queue.popleft()
                 logger.warning(f'{pair} not in current whitelist, removing from train queue.')
@@ -722,9 +722,6 @@ class IFreqaiModel(ABC):
         if self.pair_it == self.total_pairs:
             logger.info(
                 f'Total time spent inferencing pairlist {self.inference_time:.2f} seconds')
-            if self.inference_time > 0.25 * self.base_tf_seconds:
-                logger.warning("Inference took over 25% of the candle time. Reduce pairlist to"
-                               " avoid blinding open trades and degrading performance.")
             self.pair_it = 0
             self.inference_time = 0
         return
@@ -74,7 +74,7 @@ class PyTorchMLPClassifier(BasePyTorchClassifier):
         model.to(self.device)
         optimizer = torch.optim.AdamW(model.parameters(), lr=self.learning_rate)
         criterion = torch.nn.CrossEntropyLoss()
-        # check if continual_learning is activated, and retreive the model to continue training
+        # check if continual_learning is activated, and retrieve the model to continue training
         trainer = self.get_init_model(dk.pair)
         if trainer is None:
             trainer = PyTorchModelTrainer(
@@ -69,7 +69,7 @@ class PyTorchMLPRegressor(BasePyTorchRegressor):
         model.to(self.device)
         optimizer = torch.optim.AdamW(model.parameters(), lr=self.learning_rate)
         criterion = torch.nn.MSELoss()
-        # check if continual_learning is activated, and retreive the model to continue training
+        # check if continual_learning is activated, and retrieve the model to continue training
         trainer = self.get_init_model(dk.pair)
         if trainer is None:
             trainer = PyTorchModelTrainer(
@@ -80,7 +80,7 @@ class PyTorchTransformerRegressor(BasePyTorchRegressor):
         model.to(self.device)
         optimizer = torch.optim.AdamW(model.parameters(), lr=self.learning_rate)
         criterion = torch.nn.MSELoss()
-        # check if continual_learning is activated, and retreive the model to continue training
+        # check if continual_learning is activated, and retrieve the model to continue training
         trainer = self.get_init_model(dk.pair)
         if trainer is None:
             trainer = PyTorchTransformerTrainer(
@@ -63,6 +63,6 @@ class ReinforcementLearner_multiproc(ReinforcementLearner):
                                               is_masking_supported(self.eval_env)))

         # TENSORBOARD CALLBACK DOES NOT RECOMMENDED TO USE WITH MULTIPLE ENVS,
-        # IT WILL RETURN FALSE INFORMATIONS, NEVERTHLESS NOT THREAD SAFE WITH SB3!!!
+        # IT WILL RETURN FALSE INFORMATION, NEVERTHELESS NOT THREAD SAFE WITH SB3!!!
         actions = self.train_env.env_method("get_actions")[0]
         self.tensorboard_callback = TensorboardCallback(verbose=1, actions=actions)
@@ -38,7 +38,7 @@ class PyTorchModelTrainer(PyTorchTrainerInterface):
         :param init_model: A dictionary containing the initial model/optimizer
             state_dict and model_meta_data saved by self.save() method.
         :param model_meta_data: Additional metadata about the model (optional).
-        :param data_convertor: convertor from pd.DataFrame to torch.tensor.
+        :param data_convertor: converter from pd.DataFrame to torch.tensor.
         :param n_steps: used to calculate n_epochs. The number of training iterations to run.
             iteration here refers to the number of times optimizer.step() is called.
             ignored if n_epochs is set.
@@ -178,7 +178,7 @@ def record_params(config: Dict[str, Any], full_path: Path) -> None:

 def get_timerange_backtest_live_models(config: Config) -> str:
     """
-    Returns a formated timerange for backtest live/ready models
+    Returns a formatted timerange for backtest live/ready models
     :param config: Configuration dictionary

     :return: a string timerange (format example: '20220801-20220822')

@@ -37,6 +37,7 @@ from freqtrade.rpc.rpc_types import (ProfitLossStr, RPCCancelMsg, RPCEntryMsg, R
                                      RPCExitMsg, RPCProtectionMsg)
 from freqtrade.strategy.interface import IStrategy
 from freqtrade.strategy.strategy_wrapper import strategy_safe_wrapper
+from freqtrade.util import MeasureTime
 from freqtrade.util.migrations import migrate_binance_futures_names
 from freqtrade.wallets import Wallets

@@ -64,7 +65,7 @@ class FreqtradeBot(LoggingMixin):
         # Init objects
         self.config = config
         exchange_config: ExchangeConfig = deepcopy(config['exchange'])
-        # Remove credentials from original exchange config to avoid accidental credentail exposure
+        # Remove credentials from original exchange config to avoid accidental credential exposure
         remove_exchange_credentials(config['exchange'], True)

         self.strategy: IStrategy = StrategyResolver.load_strategy(self.config)
@@ -117,7 +118,8 @@ class FreqtradeBot(LoggingMixin):

         # Protect exit-logic from forcesell and vice versa
         self._exit_lock = Lock()
-        LoggingMixin.__init__(self, logger, timeframe_to_seconds(self.strategy.timeframe))
+        timeframe_secs = timeframe_to_seconds(self.strategy.timeframe)
+        LoggingMixin.__init__(self, logger, timeframe_secs)

         self._schedule = Scheduler()

@@ -139,6 +141,16 @@ class FreqtradeBot(LoggingMixin):
         # Initialize protections AFTER bot start - otherwise parameters are not loaded.
         self.protections = ProtectionManager(self.config, self.strategy.protections)

+        def log_took_too_long(duration: float, time_limit: float):
+            logger.warning(
+                f"Strategy analysis took {duration:.2f}, which is 25% of the timeframe. "
+                "This can lead to delayed orders and missed signals."
+                "Consider either reducing the amount of work your strategy performs "
+                "or reduce the amount of pairs in the Pairlist."
+            )
+
+        self._measure_execution = MeasureTime(log_took_too_long, timeframe_secs * 0.25)
+
     def notify_status(self, msg: str, msg_type=RPCMessageType.STATUS) -> None:
         """
         Public method for users of this class (worker, etc.) to send notifications
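
Note: the diff only shows the MeasureTime(callback, time_limit) call site above. A minimal sketch of a context manager with that interface is given below; this is an assumption for illustration, not freqtrade's actual implementation (which may, for instance, also rate-limit repeated warnings):

    import time
    from typing import Callable

    class MeasureTimeSketch:
        def __init__(self, callback: Callable[[float, float], None], time_limit: float):
            self._callback = callback
            self._time_limit = time_limit

        def __enter__(self):
            self._start = time.perf_counter()

        def __exit__(self, exc_type, exc, tb):
            duration = time.perf_counter() - self._start
            if duration > self._time_limit:
                # Fire the warning callback only when the block overran its budget.
                self._callback(duration, self._time_limit)
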
@@ -175,7 +187,7 @@ class FreqtradeBot(LoggingMixin):
         try:
             Trade.commit()
         except Exception:
-            # Exeptions here will be happening if the db disappeared.
+            # Exceptions here will be happening if the db disappeared.
             # At which point we can no longer commit anyway.
             pass

@@ -223,10 +235,11 @@ class FreqtradeBot(LoggingMixin):
         strategy_safe_wrapper(self.strategy.bot_loop_start, supress_error=True)(
             current_time=datetime.now(timezone.utc))

-        self.strategy.analyze(self.active_pair_whitelist)
+        with self._measure_execution:
+            self.strategy.analyze(self.active_pair_whitelist)

         with self._exit_lock:
-            # Check for exchange cancelations, timeouts and user requested replace
+            # Check for exchange cancellations, timeouts and user requested replace
             self.manage_open_orders()

         # Protect from collisions with force_exit.
@@ -277,7 +290,7 @@ class FreqtradeBot(LoggingMixin):
         }
         self.rpc.send_msg(msg)

-    def _refresh_active_whitelist(self, trades: List[Trade] = []) -> List[str]:
+    def _refresh_active_whitelist(self, trades: Optional[List[Trade]] = None) -> List[str]:
         """
         Refresh active whitelist from pairlist or edge and extend it with
         pairs that have open trades.
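
Note: this commit repeatedly swaps `= []` and `= {}` defaults for `Optional[...] = None` (here, and in the plotting, resolver and RPC hunks below). Python evaluates default values once, at function definition, so a mutable default is shared across calls:

    def buggy(trades=[]):          # one shared list for every call
        trades.append(1)
        return trades

    buggy()  # [1]
    buggy()  # [1, 1]  <- state leaked between calls

    from typing import List, Optional

    def fixed(trades: Optional[List[int]] = None) -> List[int]:
        trades = [] if trades is None else trades
        trades.append(1)
        return trades
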
@@ -449,6 +462,7 @@ class FreqtradeBot(LoggingMixin):
                 trade.pair, trade.open_date_utc - timedelta(seconds=10))
             prev_exit_reason = trade.exit_reason
             prev_trade_state = trade.is_open
+            prev_trade_amount = trade.amount
             for order in orders:
                 trade_order = [o for o in trade.orders if o.order_id == order['id']]

@@ -480,6 +494,26 @@ class FreqtradeBot(LoggingMixin):
                         send_msg=prev_trade_state != trade.is_open)
                 else:
                     trade.exit_reason = prev_exit_reason
+            total = self.wallets.get_total(trade.base_currency)
+            if total < trade.amount:
+                if total > trade.amount * 0.98:
+                    logger.warning(
+                        f"{trade} has a total of {trade.amount} {trade.base_currency}, "
+                        f"but the Wallet shows a total of {total} {trade.base_currency}. "
+                        f"Adjusting trade amount to {total}."
+                        "This may however lead to further issues."
+                    )
+                    trade.amount = total
+                else:
+                    logger.warning(
+                        f"{trade} has a total of {trade.amount} {trade.base_currency}, "
+                        f"but the Wallet shows a total of {total} {trade.base_currency}. "
+                        "Refusing to adjust as the difference is too large."
+                        "This may however lead to further issues."
+                    )
+            if prev_trade_amount != trade.amount:
+                # Cancel stoploss on exchange if the amount changed
+                trade = self.cancel_stoploss_on_exchange(trade)
             Trade.commit()

         except ExchangeError:
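
Note: the recovery block above tolerates a wallet shortfall of up to 2% of the trade amount. Worked example:

    amount = 100.0
    # wallet total 99.0:  99.0 > 100.0 * 0.98, within tolerance -> trade.amount becomes 99.0
    # wallet total 90.0:  90.0 <= 98.0, gap too large -> warn and keep 100.0
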
@@ -1290,12 +1324,12 @@ class FreqtradeBot(LoggingMixin):

     def manage_trade_stoploss_orders(self, trade: Trade, stoploss_orders: List[Dict]):
         """
-        Perform required actions acording to existing stoploss orders of trade
+        Perform required actions according to existing stoploss orders of trade
         :param trade: Corresponding Trade
         :param stoploss_orders: Current on exchange stoploss orders
         :return: None
         """
-        # If all stoploss orderd are canceled for some reason we add it again
+        # If all stoploss ordered are canceled for some reason we add it again
         canceled_sl_orders = [o for o in stoploss_orders
                               if o['status'] in ('canceled', 'cancelled')]
         if (
@@ -1935,21 +1969,23 @@ class FreqtradeBot(LoggingMixin):

         trade.update_trade(order_obj, not send_msg)

-        trade = self._update_trade_after_fill(trade, order_obj)
+        trade = self._update_trade_after_fill(trade, order_obj, send_msg)
         Trade.commit()

         self.order_close_notify(trade, order_obj, stoploss_order, send_msg)

         return False

-    def _update_trade_after_fill(self, trade: Trade, order: Order) -> Trade:
+    def _update_trade_after_fill(self, trade: Trade, order: Order, send_msg: bool) -> Trade:
         if order.status in constants.NON_OPEN_EXCHANGE_STATES:
             strategy_safe_wrapper(
                 self.strategy.order_filled, default_retval=None)(
                 pair=trade.pair, trade=trade, order=order, current_time=datetime.now(timezone.utc))
             # If a entry order was closed, force update on stoploss on exchange
             if order.ft_order_side == trade.entry_side:
-                trade = self.cancel_stoploss_on_exchange(trade)
+                if send_msg:
+                    # Don't cancel stoploss in recovery modes immediately
+                    trade = self.cancel_stoploss_on_exchange(trade)
                 if not self.edge:
                     # TODO: should shorting/leverage be supported by Edge,
                     # then this will need to be fixed.

@@ -19,6 +19,7 @@ from freqtrade.data import history
 from freqtrade.data.btanalysis import find_existing_backtest_stats, trade_list_to_dataframe
 from freqtrade.data.converter import trim_dataframe, trim_dataframes
 from freqtrade.data.dataprovider import DataProvider
+from freqtrade.data.metrics import combined_dataframes_with_rel_mean
 from freqtrade.enums import (BacktestState, CandleType, ExitCheckTuple, ExitType, RunMode,
                              TradingMode)
 from freqtrade.exceptions import DependencyException, OperationalException
@@ -296,7 +297,7 @@ class Backtesting:
                 candle_type=CandleType.FUNDING_RATE
             )

-            # For simplicity, assign to CandleType.Mark (might contian index candles!)
+            # For simplicity, assign to CandleType.Mark (might contain index candles!)
             mark_rates_dict = history.load_data(
                 datadir=self.config['datadir'],
                 pairs=self.pairlists.whitelist,
@@ -1216,7 +1217,7 @@ class Backtesting:
         :return: DataFrame with trades (results of backtesting)
         """
         self.prepare_backtest(self.enable_protections)
-        # Ensure wallets are uptodate (important for --strategy-list)
+        # Ensure wallets are up-to-date (important for --strategy-list)
         self.wallets.update()
         # Use dict of lists with data for performance
         # (looping lists is a lot faster than pandas DataFrames)
@@ -1394,7 +1395,7 @@ class Backtesting:
         """
         Run backtesting end-to-end
         """
-        data: Dict[str, Any] = {}
+        data: Dict[str, DataFrame] = {}

         data, timerange = self.load_bt_data()
         self.load_bt_data_detail()
@@ -1421,7 +1422,9 @@ class Backtesting:
         self.results = results
         dt_appendix = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
         if self.config.get('export', 'none') in ('trades', 'signals'):
-            store_backtest_stats(self.config['exportfilename'], self.results, dt_appendix)
+            combined_res = combined_dataframes_with_rel_mean(data, min_date, max_date)
+            store_backtest_stats(self.config['exportfilename'], self.results, dt_appendix,
+                                 market_change_data=combined_res)

         if (self.config.get('export', 'none') == 'signals' and
                 self.dataprovider.runmode == RunMode.BACKTEST):

@@ -237,8 +237,10 @@ class HyperoptTools:
         result_dict.update(all_space_params)

     @staticmethod
-    def _params_pretty_print(params, space: str, header: str, non_optimized={}) -> None:
-        if space in params or space in non_optimized:
+    def _params_pretty_print(
+            params, space: str, header: str, non_optimized: Optional[Dict] = None) -> None:
+
+        if space in params or (non_optimized and space in non_optimized):
             space_params = HyperoptTools._space_params(params, space, 5)
             no_params = HyperoptTools._space_params(non_optimized, space, 5)
             appendix = ''
@@ -278,7 +278,7 @@ def text_table_add_metrics(strat_results: Dict) -> str:


 def show_backtest_result(strategy: str, results: Dict[str, Any], stake_currency: str,
-                         backtest_breakdown=[]):
+                         backtest_breakdown: List[str]):
     """
     Print results for one strategy
     """
@@ -1,6 +1,8 @@
 import logging
 from pathlib import Path
-from typing import Dict
+from typing import Dict, Optional

+from pandas import DataFrame
+
 from freqtrade.constants import LAST_BT_RESULT_FN
 from freqtrade.misc import file_dump_joblib, file_dump_json
@@ -11,8 +13,26 @@ from freqtrade.types import BacktestResultType
 logger = logging.getLogger(__name__)


+def _generate_filename(recordfilename: Path, appendix: str, suffix: str) -> Path:
+    """
+    Generates a filename based on the provided parameters.
+    :param recordfilename: Path object, which can either be a filename or a directory.
+    :param appendix: use for the filename. e.g. backtest-result-<datetime>
+    :param suffix: Suffix to use for the file, e.g. .json, .pkl
+    :return: Generated filename as a Path object
+    """
+    if recordfilename.is_dir():
+        filename = (recordfilename / f'backtest-result-{appendix}').with_suffix(suffix)
+    else:
+        filename = Path.joinpath(
+            recordfilename.parent, f'{recordfilename.stem}-{appendix}'
+        ).with_suffix(suffix)
+    return filename
+
+
 def store_backtest_stats(
-        recordfilename: Path, stats: BacktestResultType, dtappendix: str) -> Path:
+        recordfilename: Path, stats: BacktestResultType, dtappendix: str, *,
+        market_change_data: Optional[DataFrame] = None) -> Path:
     """
     Stores backtest results
     :param recordfilename: Path object, which can either be a filename or a directory.
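
Note: usage of the new helper for both input shapes, per its docstring (paths are illustrative; the directory case assumes the directory exists so is_dir() is true):

    from pathlib import Path

    # Directory input -> backtest-result-<appendix><suffix> inside it:
    _generate_filename(Path("user_data/backtest_results"), "2024-04-21_10-00-00", ".json")
    # -> user_data/backtest_results/backtest-result-2024-04-21_10-00-00.json

    # File input -> sibling file with the appendix spliced into the stem:
    _generate_filename(Path("user_data/my-run.json"), "2024-04-21_10-00-00_signals", ".pkl")
    # -> user_data/my-run-2024-04-21_10-00-00_signals.pkl
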
@@ -21,12 +41,7 @@ def store_backtest_stats(
     :param stats: Dataframe containing the backtesting statistics
     :param dtappendix: Datetime to use for the filename
     """
-    if recordfilename.is_dir():
-        filename = (recordfilename / f'backtest-result-{dtappendix}.json')
-    else:
-        filename = Path.joinpath(
-            recordfilename.parent, f'{recordfilename.stem}-{dtappendix}'
-        ).with_suffix(recordfilename.suffix)
+    filename = _generate_filename(recordfilename, dtappendix, '.json')

     # Store metadata separately.
     file_dump_json(get_backtest_metadata_filename(filename), stats['metadata'])
@@ -41,6 +56,11 @@ def store_backtest_stats(
     latest_filename = Path.joinpath(filename.parent, LAST_BT_RESULT_FN)
     file_dump_json(latest_filename, {'latest_backtest': str(filename.name)})

+    if market_change_data is not None:
+        filename_mc = _generate_filename(recordfilename, f"{dtappendix}_market_change", '.feather')
+        market_change_data.reset_index().to_feather(
+            filename_mc, compression_level=9, compression='lz4')
+
     return filename

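
Note: the market-change frame is written with pandas' Feather writer using lz4 compression; it can be loaded back with pandas.read_feather (filename illustrative, following the _generate_filename pattern above):

    import pandas as pd

    df = pd.read_feather("backtest-result-2024-04-21_10-00-00_market_change.feather")
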
|
@ -57,12 +77,7 @@ def _store_backtest_analysis_data(
|
||||||
:param dtappendix: Datetime to use for the filename
|
:param dtappendix: Datetime to use for the filename
|
||||||
:param name: Name to use for the file, e.g. signals, rejected
|
:param name: Name to use for the file, e.g. signals, rejected
|
||||||
"""
|
"""
|
||||||
if recordfilename.is_dir():
|
filename = _generate_filename(recordfilename, f"{dtappendix}_{name}", '.pkl')
|
||||||
filename = (recordfilename / f'backtest-result-{dtappendix}_{name}.pkl')
|
|
||||||
else:
|
|
||||||
filename = Path.joinpath(
|
|
||||||
recordfilename.parent, f'{recordfilename.stem}-{dtappendix}_{name}.pkl'
|
|
||||||
)
|
|
||||||
|
|
||||||
file_dump_joblib(filename, data)
|
file_dump_joblib(filename, data)
|
||||||
|
|
||||||
|
|
|
@ -18,7 +18,7 @@ class _CustomData(ModelBase):
|
||||||
"""
|
"""
|
||||||
CustomData database model
|
CustomData database model
|
||||||
Keeps records of metadata as key/value store
|
Keeps records of metadata as key/value store
|
||||||
for trades or global persistant values
|
for trades or global persistent values
|
||||||
One to many relationship with Trades:
|
One to many relationship with Trades:
|
||||||
- One trade can have many metadata entries
|
- One trade can have many metadata entries
|
||||||
- One metadata entry can only be associated with one Trade
|
- One metadata entry can only be associated with one Trade
|
||||||
|
|
|
@ -847,7 +847,7 @@ class LocalTrade:
|
||||||
isclose(order.safe_amount_after_fee, amount_tr, abs_tol=MATH_CLOSE_PREC)
|
isclose(order.safe_amount_after_fee, amount_tr, abs_tol=MATH_CLOSE_PREC)
|
||||||
or (not recalculating and order.safe_amount_after_fee > amount_tr)
|
or (not recalculating and order.safe_amount_after_fee > amount_tr)
|
||||||
):
|
):
|
||||||
# When recalculating a trade, only comming out to 0 can force a close
|
# When recalculating a trade, only coming out to 0 can force a close
|
||||||
self.close(order.safe_price)
|
self.close(order.safe_price)
|
||||||
else:
|
else:
|
||||||
self.recalc_trade_from_orders()
|
self.recalc_trade_from_orders()
|
||||||
|
@ -1125,7 +1125,7 @@ class LocalTrade:
|
||||||
prof = self.calculate_profit(exit_rate, exit_amount, float(avg_price))
|
prof = self.calculate_profit(exit_rate, exit_amount, float(avg_price))
|
||||||
close_profit_abs += prof.profit_abs
|
close_profit_abs += prof.profit_abs
|
||||||
if total_stake > 0:
|
if total_stake > 0:
|
||||||
# This needs to be calculated based on the last occuring exit to be aligned
|
# This needs to be calculated based on the last occurring exit to be aligned
|
||||||
# with realized_profit.
|
# with realized_profit.
|
||||||
close_profit = (close_profit_abs / total_stake) * self.leverage
|
close_profit = (close_profit_abs / total_stake) * self.leverage
|
||||||
else:
|
else:
|
||||||
|
@ -1538,7 +1538,7 @@ class Trade(ModelBase, LocalTrade):
|
||||||
amount: Mapped[float] = mapped_column(Float()) # type: ignore
|
amount: Mapped[float] = mapped_column(Float()) # type: ignore
|
||||||
amount_requested: Mapped[Optional[float]] = mapped_column(Float()) # type: ignore
|
amount_requested: Mapped[Optional[float]] = mapped_column(Float()) # type: ignore
|
||||||
open_date: Mapped[datetime] = mapped_column(
|
open_date: Mapped[datetime] = mapped_column(
|
||||||
nullable=False, default=datetime.utcnow) # type: ignore
|
nullable=False, default=datetime.now) # type: ignore
|
||||||
close_date: Mapped[Optional[datetime]] = mapped_column() # type: ignore
|
close_date: Mapped[Optional[datetime]] = mapped_column() # type: ignore
|
||||||
# absolute value of the stop loss
|
# absolute value of the stop loss
|
||||||
stop_loss: Mapped[float] = mapped_column(Float(), nullable=True, default=0.0) # type: ignore
|
stop_loss: Mapped[float] = mapped_column(Float(), nullable=True, default=0.0) # type: ignore
|
||||||
|
|
|
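
Note: datetime.utcnow is deprecated as of Python 3.12, which is presumably what motivates the column-default change above. For code that needs an explicit UTC timestamp, the recommended replacement is a timezone-aware now:

    from datetime import datetime, timezone

    now_utc = datetime.now(timezone.utc)  # aware UTC timestamp; utcnow() returned a naive one
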
@@ -440,12 +440,12 @@ def create_scatter(

 def generate_candlestick_graph(
         pair: str, data: pd.DataFrame, trades: Optional[pd.DataFrame] = None, *,
-        indicators1: List[str] = [], indicators2: List[str] = [],
-        plot_config: Dict[str, Dict] = {},
+        indicators1: Optional[List[str]] = None, indicators2: Optional[List[str]] = None,
+        plot_config: Optional[Dict[str, Dict]] = None,
         ) -> go.Figure:
     """
     Generate the graph from the data generated by Backtesting or from DB
-    Volume will always be ploted in row2, so Row 1 and 3 are to our disposal for custom indicators
+    Volume will always be plotted in row2, so Row 1 and 3 are to our disposal for custom indicators
     :param pair: Pair to Display on the graph
     :param data: OHLCV DataFrame containing indicators and entry/exit signals
     :param trades: All trades created
@@ -454,7 +454,11 @@ def generate_candlestick_graph(
     :param plot_config: Dict of Dicts containing advanced plot configuration
     :return: Plotly figure
     """
-    plot_config = create_plotconfig(indicators1, indicators2, plot_config)
+    plot_config = create_plotconfig(
+        indicators1 or [],
+        indicators2 or [],
+        plot_config or {},
+    )
     rows = 2 + len(plot_config['subplots'])
     row_widths = [1 for _ in plot_config['subplots']]
     # Define the graph
@@ -673,7 +677,7 @@ def plot_profit(config: Config) -> None:
     """
     Plots the total profit for all pairs.
    Note, the profit calculation isn't realistic.
-    But should be somewhat proportional, and therefor useful
+    But should be somewhat proportional, and therefore useful
    in helping out to find a good algorithm.
    """
    if 'timeframe' not in config:
@@ -38,7 +38,7 @@ class MarketCapPairList(IPairList):
         self._refresh_period = self._pairlistconfig.get('refresh_period', 86400)
         self._marketcap_cache: TTLCache = TTLCache(maxsize=1, ttl=self._refresh_period)
         self._def_candletype = self._config['candle_type_def']
-        self._coingekko: CoinGeckoAPI = CoinGeckoAPI()
+        self._coingecko: CoinGeckoAPI = CoinGeckoAPI()

         if self._max_rank > 250:
             raise OperationalException(
@@ -127,7 +127,7 @@ class MarketCapPairList(IPairList):
         marketcap_list = self._marketcap_cache.get('marketcap')

         if marketcap_list is None:
-            data = self._coingekko.get_coins_markets(vs_currency='usd', order='market_cap_desc',
+            data = self._coingecko.get_coins_markets(vs_currency='usd', order='market_cap_desc',
                                                      per_page='250', page='1', sparkline='false',
                                                      locale='en')
             if data:
@@ -101,7 +101,7 @@ class PriceFilter(IPairList):

     def _validate_pair(self, pair: str, ticker: Optional[Ticker]) -> bool:
         """
-        Check if if one price-step (pip) is > than a certain barrier.
+        Check if one price-step (pip) is > than a certain barrier.
         :param pair: Pair that's currently validated
         :param ticker: ticker dict as returned from ccxt.fetch_ticker
         :return: True if the pair can stay, false if it should be removed
@@ -116,7 +116,7 @@ class RemotePairList(IPairList):
             "default": "filter",
             "options": ["filter", "append"],
             "description": "Processing mode",
-            "help": "Append pairs to incomming pairlist or filter them?",
+            "help": "Append pairs to incoming pairlist or filter them?",
         },
         **IPairList.refresh_period_parameter(),
         "keep_pairlist_on_failure": {
@@ -65,7 +65,7 @@ class VolumePairList(IPairList):
         self._tf_in_min = timeframe_to_minutes(self._lookback_timeframe)
         _tf_in_sec = self._tf_in_min * 60

-        # wether to use range lookback or not
+        # whether to use range lookback or not
         self._use_range = (self._tf_in_min > 0) & (self._lookback_period > 0)

         if self._use_range & (self._refresh_period < _tf_in_sec):
@@ -110,7 +110,7 @@ class IProtection(LoggingMixin, ABC):
         Get lock end time
         """
         max_date: datetime = max([trade.close_date for trade in trades if trade.close_date])
-        # comming from Database, tzinfo is not set.
+        # coming from Database, tzinfo is not set.
         if max_date.tzinfo is None:
             max_date = max_date.replace(tzinfo=timezone.utc)

@@ -47,7 +47,7 @@ class IResolver:

     @classmethod
     def build_search_paths(cls, config: Config, user_subdir: Optional[str] = None,
-                           extra_dirs: List[str] = []) -> List[Path]:
+                           extra_dirs: Optional[List[str]] = None) -> List[Path]:

         abs_paths: List[Path] = []
         if cls.initial_search_path:
@@ -57,8 +57,9 @@ class IResolver:
             abs_paths.insert(0, config['user_data_dir'].joinpath(user_subdir))

         # Add extra directory to the top of the search paths
-        for dir in extra_dirs:
-            abs_paths.insert(0, Path(dir).resolve())
+        if extra_dirs:
+            for dir in extra_dirs:
+                abs_paths.insert(0, Path(dir).resolve())

         if cls.extra_path and (extra := config.get(cls.extra_path)):
             abs_paths.insert(0, Path(extra).resolve())
@@ -139,7 +140,7 @@ class IResolver:

     @classmethod
     def _load_object(cls, paths: List[Path], *, object_name: str, add_source: bool = False,
-                     kwargs: dict = {}) -> Optional[Any]:
+                     kwargs: Dict) -> Optional[Any]:
         """
         Try to load object from path list.
         """
@@ -163,7 +164,7 @@ class IResolver:
     def load_object(cls, object_name: str, config: Config, *, kwargs: dict,
                     extra_dir: Optional[str] = None) -> Any:
         """
-        Search and loads the specified object as configured in hte child class.
+        Search and loads the specified object as configured in the child class.
         :param object_name: name of the module to import
         :param config: configuration dictionary
         :param extra_dir: additional directory to search for the given pairlist
@@ -26,6 +26,7 @@ def verify_auth(api_config, username: str, password: str):


 httpbasic = HTTPBasic(auto_error=False)
+security = HTTPBasic()
 oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token", auto_error=False)


@@ -117,7 +118,7 @@ def http_basic_or_jwt_token(form_data: HTTPBasicCredentials = Depends(httpbasic)


 @router_login.post('/token/login', response_model=AccessAndRefreshToken)
-def token_login(form_data: HTTPBasicCredentials = Depends(HTTPBasic()),
+def token_login(form_data: HTTPBasicCredentials = Depends(security),
                 api_config=Depends(get_api_config)):

     if verify_auth(api_config, form_data.username, form_data.password):
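
Note: the login route now reuses the module-level security = HTTPBasic() instance instead of constructing a fresh HTTPBasic() inside Depends(...), so routes share one scheme object. A self-contained sketch of the pattern (app and route names hypothetical):

    from fastapi import Depends, FastAPI
    from fastapi.security import HTTPBasic, HTTPBasicCredentials

    app = FastAPI()
    security = HTTPBasic()  # one scheme instance, reused by every route

    @app.post("/token/login")
    def token_login(credentials: HTTPBasicCredentials = Depends(security)):
        # username/password are parsed from the Basic auth header by the dependency
        return {"user": credentials.username}
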
@@ -10,15 +10,16 @@ from fastapi.exceptions import HTTPException

 from freqtrade.configuration.config_validation import validate_config_consistency
 from freqtrade.constants import Config
-from freqtrade.data.btanalysis import (delete_backtest_result, get_backtest_result,
-                                       get_backtest_resultlist, load_and_merge_backtest_result,
-                                       update_backtest_metadata)
+from freqtrade.data.btanalysis import (delete_backtest_result, get_backtest_market_change,
+                                       get_backtest_result, get_backtest_resultlist,
+                                       load_and_merge_backtest_result, update_backtest_metadata)
 from freqtrade.enums import BacktestState
 from freqtrade.exceptions import ConfigurationError, DependencyException, OperationalException
 from freqtrade.exchange.common import remove_exchange_credentials
 from freqtrade.misc import deep_merge_dicts, is_file_in_dir
-from freqtrade.rpc.api_server.api_schemas import (BacktestHistoryEntry, BacktestMetadataUpdate,
-                                                  BacktestRequest, BacktestResponse)
+from freqtrade.rpc.api_server.api_schemas import (BacktestHistoryEntry, BacktestMarketChange,
+                                                  BacktestMetadataUpdate, BacktestRequest,
+                                                  BacktestResponse)
 from freqtrade.rpc.api_server.deps import get_config
 from freqtrade.rpc.api_server.webserver_bgwork import ApiBG
 from freqtrade.rpc.rpc import RPCException
@@ -32,8 +33,10 @@ router = APIRouter()


 def __run_backtest_bg(btconfig: Config):
+    from freqtrade.data.metrics import combined_dataframes_with_rel_mean
     from freqtrade.optimize.optimize_reports import generate_backtest_stats, store_backtest_stats
     from freqtrade.resolvers import StrategyResolver

     asyncio.set_event_loop(asyncio.new_event_loop())
     try:
         # Reload strategy
@@ -89,11 +92,14 @@ def __run_backtest_bg(btconfig: Config):
                                               min_date=min_date, max_date=max_date)

         if btconfig.get('export', 'none') == 'trades':
+            combined_res = combined_dataframes_with_rel_mean(ApiBG.bt['data'], min_date, max_date)
             fn = store_backtest_stats(
-                btconfig['exportfilename'], ApiBG.bt['bt'].results,
-                datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
+                btconfig['exportfilename'],
+                ApiBG.bt['bt'].results,
+                datetime.now().strftime("%Y-%m-%d_%H-%M-%S"),
+                market_change_data=combined_res
             )
-            ApiBG.bt['bt'].results['metadata'][strategy_name]['filename'] = str(fn.name)
+            ApiBG.bt['bt'].results['metadata'][strategy_name]['filename'] = str(fn.stem)
             ApiBG.bt['bt'].results['metadata'][strategy_name]['strategy'] = strategy_name

             logger.info("Backtest finished.")
@@ -308,3 +314,20 @@ def api_update_backtest_history_entry(file: str, body: BacktestMetadataUpdate,
         raise HTTPException(status_code=400, detail=str(e))

     return get_backtest_result(file_abs)
+
+
+@router.get('/backtest/history/{file}/market_change', response_model=BacktestMarketChange,
+            tags=['webserver', 'backtest'])
+def api_get_backtest_market_change(file: str, config=Depends(get_config)):
+    bt_results_base: Path = config['user_data_dir'] / 'backtest_results'
+    file_abs = (bt_results_base / f"{file}_market_change").with_suffix('.feather')
+    # Ensure file is in backtest_results directory
+    if not is_file_in_dir(file_abs, bt_results_base):
+        raise HTTPException(status_code=404, detail="File not found.")
+    df = get_backtest_market_change(file_abs)
+
+    return {
+        'columns': df.columns.tolist(),
+        'data': df.values.tolist(),
+        'length': len(df),
+    }
@@ -558,6 +558,12 @@ class BacktestMetadataUpdate(BaseModel):
     notes: str = ''


+class BacktestMarketChange(BaseModel):
+    columns: List[str]
+    length: int
+    data: List[List[Any]]
+
+
 class SysInfo(BaseModel):
     cpu_pct: List[float]
     ram_pct: float
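
Note: combining the new endpoint and the BacktestMarketChange model above, a response could look like the payload below (column names and values invented for illustration); a client can rebuild the DataFrame from it directly:

    import pandas as pd

    payload = {
        "columns": ["date", "mean", "rel_mean", "count"],  # assumed columns
        "data": [["2022-08-01", 1.000, 0.000, 20],
                 ["2022-08-02", 1.013, 0.013, 20]],
        "length": 2,
    }

    df = pd.DataFrame(payload["data"], columns=payload["columns"])
    assert len(df) == payload["length"]
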
@@ -152,7 +152,7 @@ class WebSocketChannel:
         """
         return self._closed.is_set()

-    def set_subscriptions(self, subscriptions: List[str] = []) -> None:
+    def set_subscriptions(self, subscriptions: List[str]) -> None:
         """
         Set which subscriptions this channel is subscribed to

@@ -237,7 +237,7 @@ class ExternalMessageConsumer:
                     continue

             except Exception as e:
-                # An unforseen error has occurred, log and continue
+                # An unforeseen error has occurred, log and continue
                 logger.error("Unexpected error has occurred:")
                 logger.exception(e)
                 await asyncio.sleep(self.sleep_time)
@@ -387,7 +387,7 @@ class ExternalMessageConsumer:
             )

             if not did_append:
-                # We want an overlap in candles incase some data has changed
+                # We want an overlap in candles in case some data has changed
                 n_missing += 1
                 # Set to None for all candles if we missed a full df's worth of candles
                 n_missing = n_missing if n_missing < FULL_DATAFRAME_THRESHOLD else 1500
@@ -39,7 +39,7 @@ class CryptoToFiatConverter(LoggingMixin):
     This object is also a Singleton
     """
     __instance = None
-    _coingekko: CoinGeckoAPI = None
+    _coingecko: CoinGeckoAPI = None
     _coinlistings: List[Dict] = []
     _backoff: float = 0.0

@@ -52,9 +52,9 @@ class CryptoToFiatConverter(LoggingMixin):
             try:
                 # Limit retires to 1 (0 and 1)
                 # otherwise we risk bot impact if coingecko is down.
-                CryptoToFiatConverter._coingekko = CoinGeckoAPI(retries=1)
+                CryptoToFiatConverter._coingecko = CoinGeckoAPI(retries=1)
             except BaseException:
-                CryptoToFiatConverter._coingekko = None
+                CryptoToFiatConverter._coingecko = None
         return CryptoToFiatConverter.__instance

     def __init__(self) -> None:
@@ -67,7 +67,7 @@ class CryptoToFiatConverter(LoggingMixin):
     def _load_cryptomap(self) -> None:
         try:
             # Use list-comprehension to ensure we get a list.
-            self._coinlistings = [x for x in self._coingekko.get_coins_list()]
+            self._coinlistings = [x for x in self._coingecko.get_coins_list()]
         except RequestException as request_exception:
             if "429" in str(request_exception):
                 logger.warning(
@@ -84,7 +84,7 @@ class CryptoToFiatConverter(LoggingMixin):
             logger.error(
                 f"Could not load FIAT Cryptocurrency map for the following problem: {exception}")

-    def _get_gekko_id(self, crypto_symbol):
+    def _get_gecko_id(self, crypto_symbol):
         if not self._coinlistings:
             if self._backoff <= datetime.now().timestamp():
                 self._load_cryptomap()
@@ -180,9 +180,9 @@ class CryptoToFiatConverter(LoggingMixin):
         if crypto_symbol == fiat_symbol:
             return 1.0

-        _gekko_id = self._get_gekko_id(crypto_symbol)
+        _gecko_id = self._get_gecko_id(crypto_symbol)

-        if not _gekko_id:
+        if not _gecko_id:
             # return 0 for unsupported stake currencies (fiat-convert should not break the bot)
             self.log_once(
                 f"unsupported crypto-symbol {crypto_symbol.upper()} - returning 0.0",
@@ -191,10 +191,10 @@ class CryptoToFiatConverter(LoggingMixin):

         try:
             return float(
-                self._coingekko.get_price(
-                    ids=_gekko_id,
+                self._coingecko.get_price(
+                    ids=_gecko_id,
                     vs_currencies=fiat_symbol
-                )[_gekko_id][fiat_symbol]
+                )[_gecko_id][fiat_symbol]
             )
         except Exception as exception:
             logger.error("Error in _find_price: %s", exception)
@@ -30,8 +30,8 @@ from freqtrade.persistence.models import PairLock
 from freqtrade.plugins.pairlist.pairlist_helpers import expand_pairlist
 from freqtrade.rpc.fiat_convert import CryptoToFiatConverter
 from freqtrade.rpc.rpc_types import RPCSendMsg
-from freqtrade.util import (decimals_per_coin, dt_humanize, dt_now, dt_ts_def, format_date,
-                            shorten_date)
+from freqtrade.util import decimals_per_coin, dt_now, dt_ts_def, format_date, shorten_date
+from freqtrade.util.datetime_helpers import dt_humanize_delta
 from freqtrade.wallets import PositionWallet, Wallet


@@ -155,7 +155,7 @@ class RPC:
         }
         return val

-    def _rpc_trade_status(self, trade_ids: List[int] = []) -> List[Dict[str, Any]]:
+    def _rpc_trade_status(self, trade_ids: Optional[List[int]] = None) -> List[Dict[str, Any]]:
         """
         Below follows the RPC backend it is prefixed with rpc_ to raise awareness that it is
         a remotely exposed function
@@ -301,13 +301,13 @@ class RPC:
                     for oo in trade.open_orders
                 ]

-                # exemple: '*.**.**' trying to enter, exit and exit with 3 different orders
+                # example: '*.**.**' trying to enter, exit and exit with 3 different orders
                 active_attempt_side_symbols_str = '.'.join(active_attempt_side_symbols)

                 detail_trade = [
                     f'{trade.id} {direction_str}',
                     trade.pair + active_attempt_side_symbols_str,
-                    shorten_date(dt_humanize(trade.open_date, only_distance=True)),
+                    shorten_date(dt_humanize_delta(trade.open_date_utc)),
                     profit_str
                 ]

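
Note: the implementation of dt_humanize_delta is not part of this diff; a minimal equivalent built on the humanize package might look like the sketch below (an assumption for illustration, not necessarily freqtrade's code):

    from datetime import datetime, timezone

    import humanize  # third-party package

    def humanize_delta_sketch(dt: datetime) -> str:
        # e.g. "5 minutes ago" for a recent timestamp
        return humanize.naturaltime(datetime.now(timezone.utc) - dt)
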
@ -460,8 +460,11 @@ class RPC:
|
||||||
|
|
||||||
def _rpc_trade_statistics(
|
def _rpc_trade_statistics(
|
||||||
self, stake_currency: str, fiat_display_currency: str,
|
self, stake_currency: str, fiat_display_currency: str,
|
||||||
start_date: datetime = datetime.fromtimestamp(0)) -> Dict[str, Any]:
|
start_date: Optional[datetime] = None) -> Dict[str, Any]:
|
||||||
""" Returns cumulative profit statistics """
|
""" Returns cumulative profit statistics """
|
||||||
|
|
||||||
|
start_date = datetime.fromtimestamp(0) if start_date is None else start_date
|
||||||
|
|
||||||
trade_filter = ((Trade.is_open.is_(False) & (Trade.close_date >= start_date)) |
|
trade_filter = ((Trade.is_open.is_(False) & (Trade.close_date >= start_date)) |
|
||||||
Trade.is_open.is_(True))
|
Trade.is_open.is_(True))
|
||||||
trades: Sequence[Trade] = Trade.session.scalars(Trade.get_trades_query(
|
trades: Sequence[Trade] = Trade.session.scalars(Trade.get_trades_query(
|
||||||
|
@ -596,10 +599,10 @@ class RPC:
|
||||||
'trade_count': len(trades),
|
'trade_count': len(trades),
|
||||||
'closed_trade_count': closed_trade_count,
|
'closed_trade_count': closed_trade_count,
|
||||||
'first_trade_date': format_date(first_date),
|
'first_trade_date': format_date(first_date),
|
||||||
'first_trade_humanized': dt_humanize(first_date) if first_date else '',
|
'first_trade_humanized': dt_humanize_delta(first_date) if first_date else '',
|
||||||
'first_trade_timestamp': dt_ts_def(first_date, 0),
|
'first_trade_timestamp': dt_ts_def(first_date, 0),
|
||||||
'latest_trade_date': format_date(last_date),
|
'latest_trade_date': format_date(last_date),
|
||||||
'latest_trade_humanized': dt_humanize(last_date) if last_date else '',
|
'latest_trade_humanized': dt_humanize_delta(last_date) if last_date else '',
|
||||||
'latest_trade_timestamp': dt_ts_def(last_date, 0),
|
'latest_trade_timestamp': dt_ts_def(last_date, 0),
|
||||||
'avg_duration': str(timedelta(seconds=sum(durations) / num)).split('.')[0],
|
'avg_duration': str(timedelta(seconds=sum(durations) / num)).split('.')[0],
|
||||||
'best_pair': best_pair[0] if best_pair else '',
|
'best_pair': best_pair[0] if best_pair else '',
|
||||||
|
|
|
@ -33,7 +33,7 @@ from freqtrade.misc import chunks, plural
|
||||||
from freqtrade.persistence import Trade
|
from freqtrade.persistence import Trade
|
||||||
from freqtrade.rpc import RPC, RPCException, RPCHandler
|
from freqtrade.rpc import RPC, RPCException, RPCHandler
|
||||||
from freqtrade.rpc.rpc_types import RPCEntryMsg, RPCExitMsg, RPCOrderMsg, RPCSendMsg
|
from freqtrade.rpc.rpc_types import RPCEntryMsg, RPCExitMsg, RPCOrderMsg, RPCSendMsg
|
||||||
from freqtrade.util import dt_humanize, fmt_coin, format_date, round_value
|
from freqtrade.util import dt_from_ts, dt_humanize_delta, fmt_coin, format_date, round_value
|
||||||
|
|
||||||
|
|
||||||
MAX_MESSAGE_LENGTH = MessageLimit.MAX_TEXT_LENGTH
|
MAX_MESSAGE_LENGTH = MessageLimit.MAX_TEXT_LENGTH
|
||||||
|
@ -488,7 +488,7 @@ class Telegram(RPCHandler):
|
||||||
elif msg['type'] == RPCMessageType.WARNING:
|
elif msg['type'] == RPCMessageType.WARNING:
|
||||||
message = f"\N{WARNING SIGN} *Warning:* `{msg['status']}`"
|
message = f"\N{WARNING SIGN} *Warning:* `{msg['status']}`"
|
||||||
elif msg['type'] == RPCMessageType.EXCEPTION:
|
elif msg['type'] == RPCMessageType.EXCEPTION:
|
||||||
# Errors will contain exceptions, which are wrapped in tripple ticks.
|
# Errors will contain exceptions, which are wrapped in triple ticks.
|
||||||
message = f"\N{WARNING SIGN} *ERROR:* \n {msg['status']}"
|
message = f"\N{WARNING SIGN} *ERROR:* \n {msg['status']}"
|
||||||
|
|
||||||
elif msg['type'] == RPCMessageType.STARTUP:
|
elif msg['type'] == RPCMessageType.STARTUP:
|
||||||
|
@ -573,8 +573,7 @@ class Telegram(RPCHandler):
|
||||||
# TODO: This calculation ignores fees.
|
# TODO: This calculation ignores fees.
|
||||||
price_to_1st_entry = ((cur_entry_average - first_avg) / first_avg)
|
price_to_1st_entry = ((cur_entry_average - first_avg) / first_avg)
|
||||||
if is_open:
|
if is_open:
|
||||||
lines.append("({})".format(dt_humanize(order["order_filled_date"],
|
lines.append("({})".format(dt_humanize_delta(order["order_filled_date"])))
|
||||||
granularity=["day", "hour", "minute"])))
|
|
||||||
lines.append(f"*Amount:* {round_value(cur_entry_amount, 8)} "
|
lines.append(f"*Amount:* {round_value(cur_entry_amount, 8)} "
|
||||||
f"({fmt_coin(order['cost'], quote_currency)})")
|
f"({fmt_coin(order['cost'], quote_currency)})")
|
||||||
lines.append(f"*Average {wording} Price:* {round_value(cur_entry_average, 8)} "
|
lines.append(f"*Average {wording} Price:* {round_value(cur_entry_average, 8)} "
|
||||||
|
@ -657,7 +656,7 @@ class Telegram(RPCHandler):
|
||||||
position_adjust = self._config.get('position_adjustment_enable', False)
|
position_adjust = self._config.get('position_adjustment_enable', False)
|
||||||
max_entries = self._config.get('max_entry_position_adjustment', -1)
|
max_entries = self._config.get('max_entry_position_adjustment', -1)
|
||||||
for r in results:
|
for r in results:
|
||||||
r['open_date_hum'] = dt_humanize(r['open_date'])
|
r['open_date_hum'] = dt_humanize_delta(r['open_date'])
|
||||||
r['num_entries'] = len([o for o in r['orders'] if o['ft_is_entry']])
|
r['num_entries'] = len([o for o in r['orders'] if o['ft_is_entry']])
|
||||||
r['num_exits'] = len([o for o in r['orders'] if not o['ft_is_entry']
|
r['num_exits'] = len([o for o in r['orders'] if not o['ft_is_entry']
|
||||||
and not o['ft_order_side'] == 'stoploss'])
|
and not o['ft_order_side'] == 'stoploss'])
|
||||||
|
@ -1289,7 +1288,7 @@ class Telegram(RPCHandler):
|
||||||
nrecent
|
nrecent
|
||||||
)
|
)
|
||||||
trades_tab = tabulate(
|
trades_tab = tabulate(
|
||||||
[[dt_humanize(trade['close_date']),
|
[[dt_humanize_delta(dt_from_ts(trade['close_timestamp'])),
|
||||||
trade['pair'] + " (#" + str(trade['trade_id']) + ")",
|
trade['pair'] + " (#" + str(trade['trade_id']) + ")",
|
||||||
f"{(trade['close_profit']):.2%} ({trade['close_profit_abs']})"]
|
f"{(trade['close_profit']):.2%} ({trade['close_profit_abs']})"]
|
||||||
for trade in trades['trades']],
|
for trade in trades['trades']],
|
||||||
|
@ -1549,7 +1548,7 @@ class Telegram(RPCHandler):
|
||||||
|
|
||||||
async def send_blacklist_msg(self, blacklist: Dict):
|
async def send_blacklist_msg(self, blacklist: Dict):
|
||||||
errmsgs = []
|
errmsgs = []
|
||||||
for pair, error in blacklist['errors'].items():
|
for _, error in blacklist['errors'].items():
|
||||||
errmsgs.append(f"Error: {error['error_msg']}")
|
errmsgs.append(f"Error: {error['error_msg']}")
|
||||||
if errmsgs:
|
if errmsgs:
|
||||||
await self._send_msg('\n'.join(errmsgs))
|
await self._send_msg('\n'.join(errmsgs))
|
||||||
|
|
|
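Renaming `pair` to `_` marks the loop key as intentionally unused (bugbear's B007, which this diff later adds to ruff's extend-ignore list). When only the values are needed, iterating `.values()` is the other common idiom; a quick illustration with made-up data:

errors = {"ETH/BTC": {"error_msg": "Pair not available"}}  # illustrative data

# underscore marks the key as intentionally unused:
msgs = [f"Error: {error['error_msg']}" for _, error in errors.items()]

# equivalent, and arguably clearer, when the keys are never touched:
msgs = [f"Error: {error['error_msg']}" for error in errors.values()]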
@ -64,7 +64,7 @@ def informative(timeframe: str, asset: str = '',
|
||||||
def decorator(fn: PopulateIndicators):
|
def decorator(fn: PopulateIndicators):
|
||||||
informative_pairs = getattr(fn, '_ft_informative', [])
|
informative_pairs = getattr(fn, '_ft_informative', [])
|
||||||
informative_pairs.append(InformativeData(_asset, _timeframe, _fmt, _ffill, _candle_type))
|
informative_pairs.append(InformativeData(_asset, _timeframe, _fmt, _ffill, _candle_type))
|
||||||
setattr(fn, '_ft_informative', informative_pairs)
|
setattr(fn, '_ft_informative', informative_pairs) # noqa: B010
|
||||||
return fn
|
return fn
|
||||||
return decorator
|
return decorator
|
||||||
|
|
||||||
|
|
|
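For context on what this decorator builds up: each `@informative(...)` application appends an InformativeData entry to the function's `_ft_informative` attribute, which freqtrade later reads to merge the extra timeframes. A usage sketch following the documented decorator API (the indicator choice here is arbitrary):

import talib.abstract as ta

from freqtrade.strategy import IStrategy, informative

class SampleStrategy(IStrategy):
    timeframe = '5m'

    # Appends one InformativeData entry to this method's _ft_informative list;
    # the resulting columns come back suffixed with the timeframe, e.g. 'rsi_1h'.
    @informative('1h')
    def populate_indicators_1h(self, dataframe, metadata):
        dataframe['rsi'] = ta.RSI(dataframe, timeperiod=14)
        return dataframe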
@ -78,7 +78,7 @@ def merge_informative_pair(dataframe: pd.DataFrame, informative: pd.DataFrame,
|
||||||
# all indicators on the informative sample MUST be calculated before this point
|
# all indicators on the informative sample MUST be calculated before this point
|
||||||
if ffill:
|
if ffill:
|
||||||
# https://pandas.pydata.org/docs/user_guide/merging.html#timeseries-friendly-merging
|
# https://pandas.pydata.org/docs/user_guide/merging.html#timeseries-friendly-merging
|
||||||
# merge_ordered - ffill method is 2.5x faster than seperate ffill()
|
# merge_ordered - ffill method is 2.5x faster than separate ffill()
|
||||||
dataframe = pd.merge_ordered(dataframe, informative, fill_method="ffill", left_on='date',
|
dataframe = pd.merge_ordered(dataframe, informative, fill_method="ffill", left_on='date',
|
||||||
right_on=date_merge, how='left')
|
right_on=date_merge, how='left')
|
||||||
else:
|
else:
|
||||||
|
|
|
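The comment above claims `merge_ordered` with its built-in ffill beats a merge followed by a separate `ffill()` pass by roughly 2.5x. A toy sketch of the pattern (simplified frames, not freqtrade's actual helper):

import pandas as pd

# 5m base candles and 1h informative candles (toy data)
base = pd.DataFrame({
    'date': pd.date_range('2024-01-01', periods=24, freq='5min'),
    'close': range(24),
})
informative = pd.DataFrame({
    'date_1h': pd.date_range('2024-01-01', periods=2, freq='1h'),
    'rsi_1h': [55.0, 60.0],
})

# The informative columns are forward-filled in the same pass as the merge,
# instead of a left merge followed by a separate ffill() over the result.
merged = pd.merge_ordered(
    base, informative, fill_method='ffill',
    left_on='date', right_on='date_1h', how='left',
)
print(merged[['date', 'rsi_1h']].tail())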
@ -3,7 +3,7 @@ def bot_loop_start(self, current_time: datetime, **kwargs) -> None:
|
||||||
"""
|
"""
|
||||||
Called at the start of the bot iteration (one loop).
|
Called at the start of the bot iteration (one loop).
|
||||||
Might be used to perform pair-independent tasks
|
Might be used to perform pair-independent tasks
|
||||||
(e.g. gather some remote ressource for comparison)
|
(e.g. gather some remote resource for comparison)
|
||||||
|
|
||||||
For full documentation please go to https://www.freqtrade.io/en/latest/strategy-advanced/
|
For full documentation please go to https://www.freqtrade.io/en/latest/strategy-advanced/
|
||||||
|
|
||||||
|
|
|
@ -1,8 +1,9 @@
|
||||||
from freqtrade.util.datetime_helpers import (dt_floor_day, dt_from_ts, dt_humanize, dt_now, dt_ts,
|
from freqtrade.util.datetime_helpers import (dt_floor_day, dt_from_ts, dt_humanize_delta, dt_now,
|
||||||
dt_ts_def, dt_ts_none, dt_utc, format_date,
|
dt_ts, dt_ts_def, dt_ts_none, dt_utc, format_date,
|
||||||
format_ms_time, shorten_date)
|
format_ms_time, shorten_date)
|
||||||
from freqtrade.util.formatters import decimals_per_coin, fmt_coin, round_value
|
from freqtrade.util.formatters import decimals_per_coin, fmt_coin, round_value
|
||||||
from freqtrade.util.ft_precise import FtPrecise
|
from freqtrade.util.ft_precise import FtPrecise
|
||||||
|
from freqtrade.util.measure_time import MeasureTime
|
||||||
from freqtrade.util.periodic_cache import PeriodicCache
|
from freqtrade.util.periodic_cache import PeriodicCache
|
||||||
from freqtrade.util.template_renderer import render_template, render_template_with_fallback # noqa
|
from freqtrade.util.template_renderer import render_template, render_template_with_fallback # noqa
|
||||||
|
|
||||||
|
@ -10,7 +11,7 @@ from freqtrade.util.template_renderer import render_template, render_template_wi
|
||||||
__all__ = [
|
__all__ = [
|
||||||
'dt_floor_day',
|
'dt_floor_day',
|
||||||
'dt_from_ts',
|
'dt_from_ts',
|
||||||
'dt_humanize',
|
'dt_humanize_delta',
|
||||||
'dt_now',
|
'dt_now',
|
||||||
'dt_ts',
|
'dt_ts',
|
||||||
'dt_ts_def',
|
'dt_ts_def',
|
||||||
|
@ -24,4 +25,5 @@ __all__ = [
|
||||||
'decimals_per_coin',
|
'decimals_per_coin',
|
||||||
'round_value',
|
'round_value',
|
||||||
'fmt_coin',
|
'fmt_coin',
|
||||||
|
'MeasureTime',
|
||||||
]
|
]
|
||||||
|
|
|
@ -1,8 +1,8 @@
|
||||||
import re
|
import re
|
||||||
from datetime import datetime, timezone
|
from datetime import datetime, timezone
|
||||||
from typing import Optional
|
from typing import Optional, Union
|
||||||
|
|
||||||
import arrow
|
import humanize
|
||||||
|
|
||||||
from freqtrade.constants import DATETIME_PRINT_FORMAT
|
from freqtrade.constants import DATETIME_PRINT_FORMAT
|
||||||
|
|
||||||
|
@ -76,13 +76,11 @@ def shorten_date(_date: str) -> str:
|
||||||
return new_date
|
return new_date
|
||||||
|
|
||||||
|
|
||||||
def dt_humanize(dt: datetime, **kwargs) -> str:
|
def dt_humanize_delta(dt: datetime):
|
||||||
"""
|
"""
|
||||||
Return a humanized string for the given datetime.
|
Return a humanized string for the given timedelta.
|
||||||
:param dt: datetime to humanize
|
|
||||||
:param kwargs: kwargs to pass to arrow's humanize()
|
|
||||||
"""
|
"""
|
||||||
return arrow.get(dt).humanize(**kwargs)
|
return humanize.naturaltime(dt)
|
||||||
|
|
||||||
|
|
||||||
def format_date(date: Optional[datetime]) -> str:
|
def format_date(date: Optional[datetime]) -> str:
|
||||||
|
@ -96,9 +94,9 @@ def format_date(date: Optional[datetime]) -> str:
|
||||||
return ''
|
return ''
|
||||||
|
|
||||||
|
|
||||||
def format_ms_time(date: int) -> str:
|
def format_ms_time(date: Union[int, float]) -> str:
|
||||||
"""
|
"""
|
||||||
Convert an epoch timestamp in ms to a readable date format.
|
Convert an epoch timestamp in ms to a readable date format.
|
||||||
:param date: Epoch timestamp in ms
|
:param date: Epoch timestamp in ms
|
||||||
"""
|
"""
|
||||||
return datetime.fromtimestamp(date / 1000.0).strftime('%Y-%m-%dT%H:%M:%S')
|
return dt_from_ts(date).strftime('%Y-%m-%dT%H:%M:%S')
|
||||||
|
|
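Two behavioral notes on this file's changes, with a short sketch. `humanize.naturaltime` replaces arrow's `humanize()` but takes no `granularity` argument, which is why the telegram hunk above drops that kwarg. And `format_ms_time` now routes through `dt_from_ts`; the stand-in below illustrates the assumed behavior (UTC-aware output, seconds or milliseconds accepted) and is not the actual freqtrade implementation:

from datetime import datetime, timedelta, timezone

import humanize

# humanize.naturaltime renders a timedelta (or datetime) much like
# arrow's humanize() did: '5 minutes ago', 'a day ago', ...
print(humanize.naturaltime(timedelta(minutes=5)))  # '5 minutes ago'

def dt_from_ts_sketch(timestamp: float) -> datetime:
    # Assumed behavior of freqtrade's dt_from_ts: accept seconds or
    # milliseconds and always return a timezone-aware UTC datetime.
    if timestamp > 1e10:
        timestamp /= 1000
    return datetime.fromtimestamp(timestamp, tz=timezone.utc)

# The old code divided by 1000 itself and used the local timezone implicitly;
# the new path is explicit about UTC.
print(dt_from_ts_sketch(1_700_000_000_000).strftime('%Y-%m-%dT%H:%M:%S'))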
43
freqtrade/util/measure_time.py
Normal file
|
@ -0,0 +1,43 @@
|
||||||
|
import logging
|
||||||
|
import time
|
||||||
|
from typing import Callable
|
||||||
|
|
||||||
|
from cachetools import TTLCache
|
||||||
|
|
||||||
|
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
|
class MeasureTime:
|
||||||
|
"""
|
||||||
|
Measure the time of a block of code and call a callback if the time limit is exceeded.
|
||||||
|
"""
|
||||||
|
def __init__(
|
||||||
|
self, callback: Callable[[float, float], None], time_limit: float, ttl: int = 3600 * 4):
|
||||||
|
"""
|
||||||
|
:param callback: The callback to call if the time limit is exceeded.
|
||||||
|
This callback will be called once every "ttl" seconds,
|
||||||
|
with the parameters "duration" (in seconds) and
|
||||||
|
"time limit" - representing the passed in time limit.
|
||||||
|
:param time_limit: The time limit in seconds.
|
||||||
|
:param ttl: The time to live of the cache in seconds.
|
||||||
|
defaults to 4 hours.
|
||||||
|
"""
|
||||||
|
self._callback = callback
|
||||||
|
self._time_limit = time_limit
|
||||||
|
self.__cache: TTLCache = TTLCache(maxsize=1, ttl=ttl)
|
||||||
|
|
||||||
|
def __enter__(self):
|
||||||
|
self._start = time.time()
|
||||||
|
|
||||||
|
def __exit__(self, *args):
|
||||||
|
end = time.time()
|
||||||
|
if self.__cache.get('value'):
|
||||||
|
return
|
||||||
|
duration = end - self._start
|
||||||
|
|
||||||
|
if duration < self._time_limit:
|
||||||
|
return
|
||||||
|
self._callback(duration, self._time_limit)
|
||||||
|
|
||||||
|
self.__cache['value'] = True
|
|
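Based on the class above, a usage sketch: the callback fires only when the measured block exceeds `time_limit`, and the TTLCache throttles it to at most one invocation per `ttl` seconds, so a persistently slow loop does not spam the log:

import logging
import time

logger = logging.getLogger(__name__)

def warn_slow(duration: float, limit: float) -> None:
    # Receives the measured duration and the configured limit, as documented above.
    logger.warning('Loop took %.2fs, above the %.0fs limit', duration, limit)

measure = MeasureTime(warn_slow, time_limit=5, ttl=3600)

# The same instance can wrap every iteration; warn_slow runs at most once per hour.
with measure:
    time.sleep(0.1)  # the block under measurement (too fast to trigger the callback)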
@ -3,7 +3,10 @@ Jinja2 rendering utils, used to generate new strategy and configurations.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
|
||||||
def render_template(templatefile: str, arguments: dict = {}) -> str:
|
from typing import Dict, Optional
|
||||||
|
|
||||||
|
|
||||||
|
def render_template(templatefile: str, arguments: Dict) -> str:
|
||||||
|
|
||||||
from jinja2 import Environment, PackageLoader, select_autoescape
|
from jinja2 import Environment, PackageLoader, select_autoescape
|
||||||
|
|
||||||
|
@ -16,11 +19,13 @@ def render_template(templatefile: str, arguments: dict = {}) -> str:
|
||||||
|
|
||||||
|
|
||||||
def render_template_with_fallback(templatefile: str, templatefallbackfile: str,
|
def render_template_with_fallback(templatefile: str, templatefallbackfile: str,
|
||||||
arguments: dict = {}) -> str:
|
arguments: Optional[Dict] = None) -> str:
|
||||||
"""
|
"""
|
||||||
Use templatefile if possible, otherwise fall back to templatefallbackfile
|
Use templatefile if possible, otherwise fall back to templatefallbackfile
|
||||||
"""
|
"""
|
||||||
from jinja2.exceptions import TemplateNotFound
|
from jinja2.exceptions import TemplateNotFound
|
||||||
|
if arguments is None:
|
||||||
|
arguments = {}
|
||||||
try:
|
try:
|
||||||
return render_template(templatefile, arguments)
|
return render_template(templatefile, arguments)
|
||||||
except TemplateNotFound:
|
except TemplateNotFound:
|
||||||
|
|
|
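The `arguments: dict = {}` default removed here is the classic mutable-default pitfall: the dict is created once, when the `def` statement runs, and shared by every call that omits the argument. In miniature:

from typing import List, Optional

def append_bad(items: List[str] = []) -> List[str]:
    items.append('x')
    return items

print(append_bad())  # ['x']
print(append_bad())  # ['x', 'x'] - the same list object is reused

def append_safe(items: Optional[List[str]] = None) -> List[str]:
    if items is None:  # sentinel + lazy init, matching the fix above
        items = []
    items.append('x')
    return items

print(append_safe())  # ['x']
print(append_safe())  # ['x'] - a fresh list on every call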
@ -306,7 +306,7 @@ class Wallets:
|
||||||
:raise: DependencyException if the available stake amount is too low
|
:raise: DependencyException if the available stake amount is too low
|
||||||
"""
|
"""
|
||||||
stake_amount: float
|
stake_amount: float
|
||||||
# Ensure wallets are uptodate.
|
# Ensure wallets are up-to-date.
|
||||||
if update:
|
if update:
|
||||||
self.update()
|
self.update()
|
||||||
val_tied_up = Trade.total_open_trades_stakes()
|
val_tied_up = Trade.total_open_trades_stakes()
|
||||||
|
|
|
@ -137,7 +137,7 @@ class Worker:
|
||||||
Throttles the given callable so that it
|
Throttles the given callable so that it
|
||||||
takes at least `throttle_secs` to finish execution.
|
takes at least `throttle_secs` to finish execution.
|
||||||
:param func: Any callable
|
:param func: Any callable
|
||||||
:param throttle_secs: throttling interation execution time limit in seconds
|
:param throttle_secs: throttling iteration execution time limit in seconds
|
||||||
:param timeframe: ensure iteration is executed at the beginning of the next candle.
|
:param timeframe: ensure iteration is executed at the beginning of the next candle.
|
||||||
:param timeframe_offset: offset in seconds to apply to the next candle time.
|
:param timeframe_offset: offset in seconds to apply to the next candle time.
|
||||||
:return: Any (result of execution of func)
|
:return: Any (result of execution of func)
|
||||||
|
|
|
@ -67,7 +67,7 @@ def print_commands():
|
||||||
# Print dynamic help for the different commands using the commands doc-strings
|
# Print dynamic help for the different commands using the commands doc-strings
|
||||||
client = FtRestClient(None)
|
client = FtRestClient(None)
|
||||||
print("Possible commands:\n")
|
print("Possible commands:\n")
|
||||||
for x, y in inspect.getmembers(client):
|
for x, _ in inspect.getmembers(client):
|
||||||
if not x.startswith('_'):
|
if not x.startswith('_'):
|
||||||
doc = re.sub(':return:.*', '', getattr(client, x).__doc__, flags=re.MULTILINE).rstrip()
|
doc = re.sub(':return:.*', '', getattr(client, x).__doc__, flags=re.MULTILINE).rstrip()
|
||||||
print(f"{x}\n\t{doc}\n")
|
print(f"{x}\n\t{doc}\n")
|
||||||
|
|
|
@ -122,6 +122,7 @@ target-version = "py38"
|
||||||
# Exclude UP036 as it's causing the "exit if < 3.9" to fail.
|
# Exclude UP036 as it's causing the "exit if < 3.9" to fail.
|
||||||
extend-select = [
|
extend-select = [
|
||||||
"C90", # mccabe
|
"C90", # mccabe
|
||||||
|
# "B", # bugbear
|
||||||
# "N", # pep8-naming
|
# "N", # pep8-naming
|
||||||
"F", # pyflakes
|
"F", # pyflakes
|
||||||
"E", # pycodestyle
|
"E", # pycodestyle
|
||||||
|
@ -129,6 +130,7 @@ extend-select = [
|
||||||
"UP", # pyupgrade
|
"UP", # pyupgrade
|
||||||
"TID", # flake8-tidy-imports
|
"TID", # flake8-tidy-imports
|
||||||
# "EXE", # flake8-executable
|
# "EXE", # flake8-executable
|
||||||
|
# "C4", # flake8-comprehensions
|
||||||
"YTT", # flake8-2020
|
"YTT", # flake8-2020
|
||||||
# "S", # flake8-bandit
|
# "S", # flake8-bandit
|
||||||
# "DTZ", # flake8-datetimez
|
# "DTZ", # flake8-datetimez
|
||||||
|
@ -141,6 +143,7 @@ extend-ignore = [
|
||||||
"E241", # Multiple spaces after comma
|
"E241", # Multiple spaces after comma
|
||||||
"E272", # Multiple spaces before keyword
|
"E272", # Multiple spaces before keyword
|
||||||
"E221", # Multiple spaces before operator
|
"E221", # Multiple spaces before operator
|
||||||
|
"B007", # Loop control variable not used
|
||||||
]
|
]
|
||||||
|
|
||||||
[tool.ruff.lint.mccabe]
|
[tool.ruff.lint.mccabe]
|
||||||
|
@ -149,6 +152,10 @@ max-complexity = 12
|
||||||
[tool.ruff.lint.per-file-ignores]
|
[tool.ruff.lint.per-file-ignores]
|
||||||
"tests/*" = ["S"]
|
"tests/*" = ["S"]
|
||||||
|
|
||||||
|
[tool.ruff.lint.flake8-bugbear]
|
||||||
|
# Allow default arguments like, e.g., `data: List[str] = fastapi.Query(None)`.
|
||||||
|
extend-immutable-calls = ["fastapi.Depends", "fastapi.Query"]
|
||||||
|
|
||||||
[tool.flake8]
|
[tool.flake8]
|
||||||
# Default from https://flake8.pycqa.org/en/latest/user/options.html#cmdoption-flake8-ignore
|
# Default from https://flake8.pycqa.org/en/latest/user/options.html#cmdoption-flake8-ignore
|
||||||
# minus E226
|
# minus E226
|
||||||
|
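The new `[tool.ruff.lint.flake8-bugbear]` section becomes relevant once the still-commented-out "B" rules are enabled: B008 forbids function calls in argument defaults for exactly the reason illustrated earlier, yet `Depends(...)` and `Query(...)` defaults are FastAPI's intended idiom, so they are whitelisted as immutable calls. A minimal FastAPI sketch, not freqtrade code:

from typing import Optional

from fastapi import Depends, FastAPI, Query

app = FastAPI()

def get_token(token: Optional[str] = Query(None)):  # B008 would flag Query(None) without the whitelist
    return token

@app.get('/items')
def read_items(token: Optional[str] = Depends(get_token)):  # Depends(...) is likewise whitelisted
    return {'token': token}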
@ -163,3 +170,7 @@ exclude = [
|
||||||
".venv",
|
".venv",
|
||||||
".env",
|
".env",
|
||||||
]
|
]
|
||||||
|
|
||||||
|
[tool.codespell]
|
||||||
|
ignore-words-list = "coo,fo,strat,zar,selectin"
|
||||||
|
skip="*.svg,./user_data,./freqtrade/rpc/api_server/ui/installed"
|
||||||
|
|
|
@ -7,7 +7,7 @@
|
||||||
-r docs/requirements-docs.txt
|
-r docs/requirements-docs.txt
|
||||||
|
|
||||||
coveralls==3.3.1
|
coveralls==3.3.1
|
||||||
ruff==0.3.7
|
ruff==0.4.1
|
||||||
mypy==1.9.0
|
mypy==1.9.0
|
||||||
pre-commit==3.7.0
|
pre-commit==3.7.0
|
||||||
pytest==8.1.1
|
pytest==8.1.1
|
||||||
|
|
|
@ -5,7 +5,7 @@
|
||||||
# Required for freqai
|
# Required for freqai
|
||||||
scikit-learn==1.4.2
|
scikit-learn==1.4.2
|
||||||
joblib==1.4.0
|
joblib==1.4.0
|
||||||
catboost==1.2.3; 'arm' not in platform_machine
|
catboost==1.2.5; 'arm' not in platform_machine
|
||||||
lightgbm==4.3.0
|
lightgbm==4.3.0
|
||||||
xgboost==2.0.3
|
xgboost==2.0.3
|
||||||
tensorboard==2.16.2
|
tensorboard==2.16.2
|
||||||
|
|
|
@ -1,4 +1,4 @@
|
||||||
# Include all requirements to run the bot.
|
# Include all requirements to run the bot.
|
||||||
-r requirements.txt
|
-r requirements.txt
|
||||||
|
|
||||||
plotly==5.20.0
|
plotly==5.21.0
|
||||||
|
|
|
@ -2,14 +2,14 @@ numpy==1.26.4
|
||||||
pandas==2.2.2
|
pandas==2.2.2
|
||||||
pandas-ta==0.3.14b
|
pandas-ta==0.3.14b
|
||||||
|
|
||||||
ccxt==4.2.97
|
ccxt==4.3.4
|
||||||
cryptography==42.0.5
|
cryptography==42.0.5
|
||||||
aiohttp==3.9.4
|
aiohttp==3.9.5
|
||||||
SQLAlchemy==2.0.29
|
SQLAlchemy==2.0.29
|
||||||
python-telegram-bot==21.1
|
python-telegram-bot==21.1.1
|
||||||
# can't be hard-pinned due to telegram-bot pinning httpx with ~
|
# can't be hard-pinned due to telegram-bot pinning httpx with ~
|
||||||
httpx>=0.24.1
|
httpx>=0.24.1
|
||||||
arrow==1.3.0
|
humanize==4.9.0
|
||||||
cachetools==5.3.3
|
cachetools==5.3.3
|
||||||
requests==2.31.0
|
requests==2.31.0
|
||||||
urllib3==2.2.1
|
urllib3==2.2.1
|
||||||
|
@ -22,7 +22,7 @@ jinja2==3.1.3
|
||||||
tables==3.9.1
|
tables==3.9.1
|
||||||
joblib==1.4.0
|
joblib==1.4.0
|
||||||
rich==13.7.1
|
rich==13.7.1
|
||||||
pyarrow==15.0.2; platform_machine != 'armv7l'
|
pyarrow==16.0.0; platform_machine != 'armv7l'
|
||||||
|
|
||||||
# find first, C search in arrays
|
# find first, C search in arrays
|
||||||
py_find_1st==1.1.6
|
py_find_1st==1.1.6
|
||||||
|
@ -30,13 +30,13 @@ py_find_1st==1.1.6
|
||||||
# Load ticker files 30% faster
|
# Load ticker files 30% faster
|
||||||
python-rapidjson==1.16
|
python-rapidjson==1.16
|
||||||
# Properly format api responses
|
# Properly format api responses
|
||||||
orjson==3.10.0
|
orjson==3.10.1
|
||||||
|
|
||||||
# Notify systemd
|
# Notify systemd
|
||||||
sdnotify==0.3.2
|
sdnotify==0.3.2
|
||||||
|
|
||||||
# API Server
|
# API Server
|
||||||
fastapi==0.110.1
|
fastapi==0.110.2
|
||||||
pydantic==2.7.0
|
pydantic==2.7.0
|
||||||
uvicorn==0.29.0
|
uvicorn==0.29.0
|
||||||
pyjwt==2.8.0
|
pyjwt==2.8.0
|
||||||
|
|
|
@ -191,7 +191,7 @@ class ClientProtocol:
|
||||||
self.logger.info("Empty DataFrame")
|
self.logger.info("Empty DataFrame")
|
||||||
|
|
||||||
async def _handle_default(self, name, type, data):
|
async def _handle_default(self, name, type, data):
|
||||||
self.logger.info("Unkown message of type {type} received...")
|
self.logger.info("Unknown message of type {type} received...")
|
||||||
self.logger.info(data)
|
self.logger.info(data)
|
||||||
|
|
||||||
|
|
||||||
|
@ -201,7 +201,7 @@ async def create_client(
|
||||||
token,
|
token,
|
||||||
scheme='ws',
|
scheme='ws',
|
||||||
name='default',
|
name='default',
|
||||||
protocol=ClientProtocol(),
|
protocol=None,
|
||||||
sleep_time=10,
|
sleep_time=10,
|
||||||
ping_timeout=10,
|
ping_timeout=10,
|
||||||
wait_timeout=30,
|
wait_timeout=30,
|
||||||
|
@ -216,6 +216,8 @@ async def create_client(
|
||||||
:param name: The name of the producer
|
:param name: The name of the producer
|
||||||
:param **kwargs: Any extra kwargs passed to websockets.connect
|
:param **kwargs: Any extra kwargs passed to websockets.connect
|
||||||
"""
|
"""
|
||||||
|
if not protocol:
|
||||||
|
protocol = ClientProtocol()
|
||||||
|
|
||||||
while 1:
|
while 1:
|
||||||
try:
|
try:
|
||||||
|
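The `protocol=ClientProtocol()` default fixed above has the same root cause as the template-renderer change earlier in this diff: a call in a default argument is evaluated once, at definition time, so every invocation omitting `protocol` would share a single instance and any state it accumulates. Illustrated with a hypothetical class:

class Proto:  # hypothetical stand-in for ClientProtocol
    pass

def connect_bad(protocol=Proto()):  # Proto() runs once, when the def executes
    return protocol

def connect_good(protocol=None):  # the fix: instantiate lazily, per call
    if not protocol:
        protocol = Proto()
    return protocol

assert connect_bad() is connect_bad()        # one shared instance
assert connect_good() is not connect_good()  # fresh instance per call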
@ -277,7 +279,7 @@ async def create_client(
|
||||||
continue
|
continue
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
# An unforseen error has occurred, log and try reconnecting again
|
# An unforeseen error has occurred, log and try reconnecting again
|
||||||
logger.error("Unexpected error has occurred:")
|
logger.error("Unexpected error has occurred:")
|
||||||
logger.exception(e)
|
logger.exception(e)
|
||||||
|
|
||||||
|
|
2
setup.py
|
@ -73,7 +73,7 @@ setup(
|
||||||
'ccxt>=4.2.47',
|
'ccxt>=4.2.47',
|
||||||
'SQLAlchemy>=2.0.6',
|
'SQLAlchemy>=2.0.6',
|
||||||
'python-telegram-bot>=20.1',
|
'python-telegram-bot>=20.1',
|
||||||
'arrow>=1.0.0',
|
'humanize>=4.0.0',
|
||||||
'cachetools',
|
'cachetools',
|
||||||
'requests',
|
'requests',
|
||||||
'httpx>=0.24.1',
|
'httpx>=0.24.1',
|
||||||
|
|
|
@ -1609,4 +1609,4 @@ def test_start_show_config(capsys, caplog):
|
||||||
assert "Your combined configuration is:" in captured.out
|
assert "Your combined configuration is:" in captured.out
|
||||||
assert '"max_open_trades":' in captured.out
|
assert '"max_open_trades":' in captured.out
|
||||||
assert '"secret": "REDACTED"' not in captured.out
|
assert '"secret": "REDACTED"' not in captured.out
|
||||||
assert log_has_re(r'Sensitive information will be shown in the upcomming output.*', caplog)
|
assert log_has_re(r'Sensitive information will be shown in the upcoming output.*', caplog)
|
||||||
|
|
|
@ -49,10 +49,10 @@ def pytest_addoption(parser):
|
||||||
|
|
||||||
def pytest_configure(config):
|
def pytest_configure(config):
|
||||||
config.addinivalue_line(
|
config.addinivalue_line(
|
||||||
"markers", "longrun: mark test that is running slowly and should not be run regularily"
|
"markers", "longrun: mark test that is running slowly and should not be run regularly"
|
||||||
)
|
)
|
||||||
if not config.option.longrun:
|
if not config.option.longrun:
|
||||||
setattr(config.option, 'markexpr', 'not longrun')
|
config.option.markexpr = 'not longrun'
|
||||||
|
|
||||||
|
|
||||||
class FixtureScheduler(LoadScopeScheduling):
|
class FixtureScheduler(LoadScopeScheduling):
|
||||||
|
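The `setattr` removal just above is bugbear's B010 ('do not call setattr with a constant attribute'): when the attribute name is a string literal, plain assignment is equivalent and clearer. The `informative` decorator earlier in this diff keeps its `setattr` with `# noqa: B010` instead, as it deliberately plants a private marker attribute on an arbitrary function. The rule in miniature:

class Options:
    pass

opt = Options()
setattr(opt, 'markexpr', 'not longrun')  # B010: attribute name is a constant
opt.markexpr = 'not longrun'             # preferred: direct assignment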
@ -490,10 +490,10 @@ def user_dir(mocker, tmp_path) -> Path:
|
||||||
|
|
||||||
|
|
||||||
@pytest.fixture(autouse=True)
|
@pytest.fixture(autouse=True)
|
||||||
def patch_coingekko(mocker) -> None:
|
def patch_coingecko(mocker) -> None:
|
||||||
"""
|
"""
|
||||||
Mocker to coingekko to speed up tests
|
Mocker to coingecko to speed up tests
|
||||||
:param mocker: mocker to patch coingekko class
|
:param mocker: mocker to patch coingecko class
|
||||||
:return: None
|
:return: None
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
|
|
@ -16,7 +16,7 @@ from freqtrade.data.metrics import (calculate_cagr, calculate_calmar, calculate_
|
||||||
calculate_expectancy, calculate_market_change,
|
calculate_expectancy, calculate_market_change,
|
||||||
calculate_max_drawdown, calculate_sharpe, calculate_sortino,
|
calculate_max_drawdown, calculate_sharpe, calculate_sortino,
|
||||||
calculate_underwater, combine_dataframes_with_mean,
|
calculate_underwater, combine_dataframes_with_mean,
|
||||||
create_cum_profit)
|
combined_dataframes_with_rel_mean, create_cum_profit)
|
||||||
from freqtrade.exceptions import OperationalException
|
from freqtrade.exceptions import OperationalException
|
||||||
from freqtrade.util import dt_utc
|
from freqtrade.util import dt_utc
|
||||||
from tests.conftest import CURRENT_TEST_STRATEGY, create_mock_trades
|
from tests.conftest import CURRENT_TEST_STRATEGY, create_mock_trades
|
||||||
|
@ -251,10 +251,29 @@ def test_combine_dataframes_with_mean(testdatadir):
|
||||||
assert "mean" in df.columns
|
assert "mean" in df.columns
|
||||||
|
|
||||||
|
|
||||||
|
def test_combined_dataframes_with_rel_mean(testdatadir):
|
||||||
|
pairs = ["ETH/BTC", "ADA/BTC"]
|
||||||
|
data = load_data(datadir=testdatadir, pairs=pairs, timeframe='5m')
|
||||||
|
df = combined_dataframes_with_rel_mean(
|
||||||
|
data,
|
||||||
|
datetime(2018, 1, 12, tzinfo=timezone.utc),
|
||||||
|
datetime(2018, 1, 28, tzinfo=timezone.utc)
|
||||||
|
)
|
||||||
|
assert isinstance(df, DataFrame)
|
||||||
|
assert "ETH/BTC" not in df.columns
|
||||||
|
assert "ADA/BTC" not in df.columns
|
||||||
|
assert "mean" in df.columns
|
||||||
|
assert "rel_mean" in df.columns
|
||||||
|
assert "count" in df.columns
|
||||||
|
assert df.iloc[0]['count'] == 2
|
||||||
|
assert df.iloc[-1]['count'] == 2
|
||||||
|
assert len(df) < len(data['ETH/BTC'])
|
||||||
|
|
||||||
|
|
||||||
def test_combine_dataframes_with_mean_no_data(testdatadir):
|
def test_combine_dataframes_with_mean_no_data(testdatadir):
|
||||||
pairs = ["ETH/BTC", "ADA/BTC"]
|
pairs = ["ETH/BTC", "ADA/BTC"]
|
||||||
data = load_data(datadir=testdatadir, pairs=pairs, timeframe='6m')
|
data = load_data(datadir=testdatadir, pairs=pairs, timeframe='6m')
|
||||||
with pytest.raises(ValueError, match=r"No objects to concatenate"):
|
with pytest.raises(ValueError, match=r"No data provided\."):
|
||||||
combine_dataframes_with_mean(data)
|
combine_dataframes_with_mean(data)
|
||||||
|
|
||||||
|
|
||||||
|
@ -463,12 +482,12 @@ def test_calculate_max_drawdown2():
|
||||||
assert drawdown == 0.043965
|
assert drawdown == 0.043965
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.parametrize('profits,relative,highd,lowd,result,result_rel', [
|
@pytest.mark.parametrize('profits,relative,highd,lowdays,result,result_rel', [
|
||||||
([0.0, -500.0, 500.0, 10000.0, -1000.0], False, 3, 4, 1000.0, 0.090909),
|
([0.0, -500.0, 500.0, 10000.0, -1000.0], False, 3, 4, 1000.0, 0.090909),
|
||||||
([0.0, -500.0, 500.0, 10000.0, -1000.0], True, 0, 1, 500.0, 0.5),
|
([0.0, -500.0, 500.0, 10000.0, -1000.0], True, 0, 1, 500.0, 0.5),
|
||||||
|
|
||||||
])
|
])
|
||||||
def test_calculate_max_drawdown_abs(profits, relative, highd, lowd, result, result_rel):
|
def test_calculate_max_drawdown_abs(profits, relative, highd, lowdays, result, result_rel):
|
||||||
"""
|
"""
|
||||||
Test case from issue https://github.com/freqtrade/freqtrade/issues/6655
|
Test case from issue https://github.com/freqtrade/freqtrade/issues/6655
|
||||||
[1000, 500, 1000, 11000, 10000] # absolute results
|
[1000, 500, 1000, 11000, 10000] # absolute results
|
||||||
|
@ -488,7 +507,7 @@ def test_calculate_max_drawdown_abs(profits, relative, highd, lowd, result, resu
|
||||||
assert isinstance(drawdown, float)
|
assert isinstance(drawdown, float)
|
||||||
assert isinstance(drawdown_rel, float)
|
assert isinstance(drawdown_rel, float)
|
||||||
assert hdate == init_date + timedelta(days=highd)
|
assert hdate == init_date + timedelta(days=highd)
|
||||||
assert ldate == init_date + timedelta(days=lowd)
|
assert ldate == init_date + timedelta(days=lowdays)
|
||||||
|
|
||||||
# High must be before low
|
# High must be before low
|
||||||
assert hdate < ldate
|
assert hdate < ldate
|
||||||
|
|
|
@ -251,7 +251,7 @@ def test_datahandler__check_empty_df(testdatadir, caplog):
|
||||||
# @pytest.mark.parametrize('datahandler', [])
|
# @pytest.mark.parametrize('datahandler', [])
|
||||||
@pytest.mark.skip("All datahandlers currently support trades data.")
|
@pytest.mark.skip("All datahandlers currently support trades data.")
|
||||||
def test_datahandler_trades_not_supported(datahandler, testdatadir, ):
|
def test_datahandler_trades_not_supported(datahandler, testdatadir, ):
|
||||||
# Currently disabled. Reenable should a new provider not support trades data.
|
# Currently disabled. Re-enable should a new provider not support trades data.
|
||||||
dh = get_datahandler(testdatadir, datahandler)
|
dh = get_datahandler(testdatadir, datahandler)
|
||||||
with pytest.raises(NotImplementedError):
|
with pytest.raises(NotImplementedError):
|
||||||
dh.trades_load('UNITTEST/ETH')
|
dh.trades_load('UNITTEST/ETH')
|
||||||
|
|
|
@ -30,7 +30,7 @@ def test_dp_ohlcv(mocker, default_conf, ohlcv_history, candle_type):
|
||||||
assert dp.ohlcv("UNITTEST/BTC", timeframe, candle_type=candletype) is not ohlcv_history
|
assert dp.ohlcv("UNITTEST/BTC", timeframe, candle_type=candletype) is not ohlcv_history
|
||||||
assert dp.ohlcv("UNITTEST/BTC", timeframe, copy=False, candle_type=candletype) is ohlcv_history
|
assert dp.ohlcv("UNITTEST/BTC", timeframe, copy=False, candle_type=candletype) is ohlcv_history
|
||||||
assert not dp.ohlcv("UNITTEST/BTC", timeframe, candle_type=candletype).empty
|
assert not dp.ohlcv("UNITTEST/BTC", timeframe, candle_type=candletype).empty
|
||||||
assert dp.ohlcv("NONESENSE/AAA", timeframe, candle_type=candletype).empty
|
assert dp.ohlcv("NONSENSE/AAA", timeframe, candle_type=candletype).empty
|
||||||
|
|
||||||
# Test with and without parameter
|
# Test with and without parameter
|
||||||
assert dp.ohlcv(
|
assert dp.ohlcv(
|
||||||
|
@ -114,7 +114,7 @@ def test_get_pair_dataframe(mocker, default_conf, ohlcv_history, candle_type):
|
||||||
assert dp.get_pair_dataframe("UNITTEST/BTC", timeframe,
|
assert dp.get_pair_dataframe("UNITTEST/BTC", timeframe,
|
||||||
candle_type=candle_type) is not ohlcv_history
|
candle_type=candle_type) is not ohlcv_history
|
||||||
assert not dp.get_pair_dataframe("UNITTEST/BTC", timeframe, candle_type=candle_type).empty
|
assert not dp.get_pair_dataframe("UNITTEST/BTC", timeframe, candle_type=candle_type).empty
|
||||||
assert dp.get_pair_dataframe("NONESENSE/AAA", timeframe, candle_type=candle_type).empty
|
assert dp.get_pair_dataframe("NONSENSE/AAA", timeframe, candle_type=candle_type).empty
|
||||||
|
|
||||||
# Test with and without parameter
|
# Test with and without parameter
|
||||||
assert dp.get_pair_dataframe("UNITTEST/BTC", timeframe, candle_type=candle_type)\
|
assert dp.get_pair_dataframe("UNITTEST/BTC", timeframe, candle_type=candle_type)\
|
||||||
|
@ -125,7 +125,7 @@ def test_get_pair_dataframe(mocker, default_conf, ohlcv_history, candle_type):
|
||||||
assert dp.runmode == RunMode.LIVE
|
assert dp.runmode == RunMode.LIVE
|
||||||
assert isinstance(dp.get_pair_dataframe(
|
assert isinstance(dp.get_pair_dataframe(
|
||||||
"UNITTEST/BTC", timeframe, candle_type=candle_type), DataFrame)
|
"UNITTEST/BTC", timeframe, candle_type=candle_type), DataFrame)
|
||||||
assert dp.get_pair_dataframe("NONESENSE/AAA", timeframe, candle_type=candle_type).empty
|
assert dp.get_pair_dataframe("NONSENSE/AAA", timeframe, candle_type=candle_type).empty
|
||||||
|
|
||||||
historymock = MagicMock(return_value=ohlcv_history)
|
historymock = MagicMock(return_value=ohlcv_history)
|
||||||
mocker.patch("freqtrade.data.dataprovider.load_pair_history", historymock)
|
mocker.patch("freqtrade.data.dataprovider.load_pair_history", historymock)
|
||||||
|
|
|
@ -226,8 +226,10 @@ def test_edge_heartbeat_calculate(mocker, edge_conf):
|
||||||
assert edge.calculate(edge_conf['exchange']['pair_whitelist']) is False
|
assert edge.calculate(edge_conf['exchange']['pair_whitelist']) is False
|
||||||
|
|
||||||
|
|
||||||
def mocked_load_data(datadir, pairs=[], timeframe='0m',
|
def mocked_load_data(datadir, pairs=None, timeframe='0m',
|
||||||
timerange=None, *args, **kwargs):
|
timerange=None, *args, **kwargs):
|
||||||
|
if pairs is None:
|
||||||
|
pairs = []
|
||||||
hz = 0.1
|
hz = 0.1
|
||||||
base = 0.001
|
base = 0.001
|
||||||
|
|
||||||
|
|
|
@ -3830,7 +3830,7 @@ def test_ohlcv_candle_limit(default_conf, mocker, exchange_name):
|
||||||
[
|
[
|
||||||
("BTC/USDT", 'BTC', 'USDT', "binance", True, False, False, 'spot', {}, True),
|
("BTC/USDT", 'BTC', 'USDT', "binance", True, False, False, 'spot', {}, True),
|
||||||
("USDT/BTC", 'USDT', 'BTC', "binance", True, False, False, 'spot', {}, True),
|
("USDT/BTC", 'USDT', 'BTC', "binance", True, False, False, 'spot', {}, True),
|
||||||
# No seperating /
|
# No separating /
|
||||||
("BTCUSDT", 'BTC', 'USDT', "binance", True, False, False, 'spot', {}, True),
|
("BTCUSDT", 'BTC', 'USDT', "binance", True, False, False, 'spot', {}, True),
|
||||||
("BTCUSDT", None, "USDT", "binance", True, False, False, 'spot', {}, False),
|
("BTCUSDT", None, "USDT", "binance", True, False, False, 'spot', {}, False),
|
||||||
("USDT/BTC", "BTC", None, "binance", True, False, False, 'spot', {}, False),
|
("USDT/BTC", "BTC", None, "binance", True, False, False, 'spot', {}, False),
|
||||||
|
@ -4346,7 +4346,7 @@ def test_combine_funding_and_mark(
|
||||||
('binance', 0, 2, "2021-09-01 00:00:01", "2021-09-01 08:00:00", 30.0, -0.00091409999),
|
('binance', 0, 2, "2021-09-01 00:00:01", "2021-09-01 08:00:00", 30.0, -0.00091409999),
|
||||||
('binance', 0, 2, "2021-08-31 23:58:00", "2021-09-01 08:00:00", 30.0, -0.00091409999),
|
('binance', 0, 2, "2021-08-31 23:58:00", "2021-09-01 08:00:00", 30.0, -0.00091409999),
|
||||||
('binance', 0, 2, "2021-09-01 00:10:01", "2021-09-01 08:00:00", 30.0, -0.0002493),
|
('binance', 0, 2, "2021-09-01 00:10:01", "2021-09-01 08:00:00", 30.0, -0.0002493),
|
||||||
# TODO: Uncoment once _calculate_funding_fees can pas time_in_ratio to exchange._get_funding_fee
|
# TODO: Uncomment once _calculate_funding_fees can pass time_in_ratio to exchange.
|
||||||
# ('kraken', "2021-09-01 00:00:00", "2021-09-01 08:00:00", 30.0, -0.0014937),
|
# ('kraken', "2021-09-01 00:00:00", "2021-09-01 08:00:00", 30.0, -0.0014937),
|
||||||
# ('kraken', "2021-09-01 00:00:15", "2021-09-01 08:00:00", 30.0, -0.0008289),
|
# ('kraken', "2021-09-01 00:00:15", "2021-09-01 08:00:00", 30.0, -0.0008289),
|
||||||
# ('kraken', "2021-09-01 01:00:14", "2021-09-01 08:00:00", 30.0, -0.0008289),
|
# ('kraken', "2021-09-01 01:00:14", "2021-09-01 08:00:00", 30.0, -0.0008289),
|
||||||
|
@ -4358,7 +4358,7 @@ def test_combine_funding_and_mark(
|
||||||
('gate', 0, 2, "2021-09-01 00:00:00", "2021-09-01 12:00:00", 30.0, -0.0009140999),
|
('gate', 0, 2, "2021-09-01 00:00:00", "2021-09-01 12:00:00", 30.0, -0.0009140999),
|
||||||
('gate', 1, 2, "2021-09-01 00:00:01", "2021-09-01 08:00:00", 30.0, -0.0002493),
|
('gate', 1, 2, "2021-09-01 00:00:01", "2021-09-01 08:00:00", 30.0, -0.0002493),
|
||||||
('binance', 0, 2, "2021-09-01 00:00:00", "2021-09-01 08:00:00", 50.0, -0.0015235),
|
('binance', 0, 2, "2021-09-01 00:00:00", "2021-09-01 08:00:00", 50.0, -0.0015235),
|
||||||
# TODO: Uncoment once _calculate_funding_fees can pas time_in_ratio to exchange._get_funding_fee
|
# TODO: Uncomment once _calculate_funding_fees can pass time_in_ratio to exchange.
|
||||||
# ('kraken', "2021-09-01 00:00:00", "2021-09-01 08:00:00", 50.0, -0.0024895),
|
# ('kraken', "2021-09-01 00:00:00", "2021-09-01 08:00:00", 50.0, -0.0024895),
|
||||||
])
|
])
|
||||||
def test__fetch_and_calculate_funding_fees(
|
def test__fetch_and_calculate_funding_fees(
|
||||||
|
@ -5133,7 +5133,7 @@ def test_get_maintenance_ratio_and_amt(
|
||||||
mocker.patch(f'{EXMS}.exchange_has', return_value=True)
|
mocker.patch(f'{EXMS}.exchange_has', return_value=True)
|
||||||
exchange = get_patched_exchange(mocker, default_conf, api_mock)
|
exchange = get_patched_exchange(mocker, default_conf, api_mock)
|
||||||
exchange._leverage_tiers = leverage_tiers
|
exchange._leverage_tiers = leverage_tiers
|
||||||
exchange.get_maintenance_ratio_and_amt(pair, value) == (mmr, maintAmt)
|
assert exchange.get_maintenance_ratio_and_amt(pair, value) == (mmr, maintAmt)
|
||||||
|
|
||||||
|
|
||||||
def test_get_max_leverage_futures(default_conf, mocker, leverage_tiers):
|
def test_get_max_leverage_futures(default_conf, mocker, leverage_tiers):
|
||||||
|
|
|
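The `assert` prefixes added in the surrounding hunks fix silent no-op checks: a bare `x == y` expression in a test body is evaluated and its result discarded, so the test passes whatever the outcome. flake8-bugbear flags the pattern as B015, though the "B" ruleset is still commented out in the ruff config above. With a hypothetical function:

def compute() -> int:  # hypothetical
    return 41

def test_silently_useless():
    compute() == 42  # result discarded - this line can never fail the test

def test_actually_checks():
    assert compute() == 42  # fails, as it should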
@ -472,7 +472,7 @@ def test_load_leverage_tiers_okx(default_conf, mocker, markets, tmp_path, caplog
|
||||||
exchange.load_leverage_tiers()
|
exchange.load_leverage_tiers()
|
||||||
assert not log_has(logmsg, caplog)
|
assert not log_has(logmsg, caplog)
|
||||||
|
|
||||||
api_mock.fetch_market_leverage_tiers.call_count == 0
|
assert api_mock.fetch_market_leverage_tiers.call_count == 0
|
||||||
# 2 days pass ...
|
# 2 days pass ...
|
||||||
time_machine.move_to(datetime.now() + timedelta(weeks=5))
|
time_machine.move_to(datetime.now() + timedelta(weeks=5))
|
||||||
exchange.load_leverage_tiers()
|
exchange.load_leverage_tiers()
|
||||||
|
@ -500,7 +500,7 @@ def test__set_leverage_okx(mocker, default_conf):
|
||||||
'posSide': 'net'}
|
'posSide': 'net'}
|
||||||
api_mock.set_leverage = MagicMock(side_effect=ccxt.NetworkError())
|
api_mock.set_leverage = MagicMock(side_effect=ccxt.NetworkError())
|
||||||
exchange._lev_prep('BTC/USDT:USDT', 3.2, 'buy')
|
exchange._lev_prep('BTC/USDT:USDT', 3.2, 'buy')
|
||||||
api_mock.fetch_leverage.call_count == 1
|
assert api_mock.fetch_leverage.call_count == 1
|
||||||
|
|
||||||
api_mock.fetch_leverage = MagicMock(side_effect=ccxt.NetworkError())
|
api_mock.fetch_leverage = MagicMock(side_effect=ccxt.NetworkError())
|
||||||
ccxt_exceptionhandlers(
|
ccxt_exceptionhandlers(
|
||||||
|
|
|
@ -133,6 +133,6 @@ def test_freqai_backtest_consistent_timerange(mocker, freqai_conf):
|
||||||
backtesting = Backtesting(deepcopy(freqai_conf))
|
backtesting = Backtesting(deepcopy(freqai_conf))
|
||||||
backtesting.start()
|
backtesting.start()
|
||||||
|
|
||||||
gbs.call_args[1]['min_date'] == datetime(2021, 11, 20, 0, 0, tzinfo=timezone.utc)
|
assert gbs.call_args[1]['min_date'] == datetime(2021, 11, 20, 0, 0, tzinfo=timezone.utc)
|
||||||
gbs.call_args[1]['max_date'] == datetime(2021, 11, 21, 0, 0, tzinfo=timezone.utc)
|
assert gbs.call_args[1]['max_date'] == datetime(2021, 11, 21, 0, 0, tzinfo=timezone.utc)
|
||||||
Backtesting.cleanup()
|
Backtesting.cleanup()
|
||||||
|
|
|
@ -143,7 +143,7 @@ def test_get_timerange_from_backtesting_live_df_pred_not_found(mocker, freqai_co
|
||||||
def test_set_initial_return_values(mocker, freqai_conf):
|
def test_set_initial_return_values(mocker, freqai_conf):
|
||||||
"""
|
"""
|
||||||
Simple test of the set initial return values that ensures
|
Simple test of the set initial return values that ensures
|
||||||
we are concatening and ffilling values properly.
|
we are concatenating and ffilling values properly.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
strategy = get_patched_freqai_strategy(mocker, freqai_conf)
|
strategy = get_patched_freqai_strategy(mocker, freqai_conf)
|
||||||
|
|
|
@ -403,7 +403,7 @@ def test_backtesting_fit_live_predictions(mocker, freqai_conf, caplog):
|
||||||
freqai.dk.get_unique_classes_from_labels(df)
|
freqai.dk.get_unique_classes_from_labels(df)
|
||||||
freqai.dk.pair = "ADA/BTC"
|
freqai.dk.pair = "ADA/BTC"
|
||||||
freqai.dk.full_df = df.fillna(0)
|
freqai.dk.full_df = df.fillna(0)
|
||||||
freqai.dk.full_df
|
|
||||||
assert "&-s_close_mean" not in freqai.dk.full_df.columns
|
assert "&-s_close_mean" not in freqai.dk.full_df.columns
|
||||||
assert "&-s_close_std" not in freqai.dk.full_df.columns
|
assert "&-s_close_std" not in freqai.dk.full_df.columns
|
||||||
freqai.backtesting_fit_live_predictions(freqai.dk)
|
freqai.backtesting_fit_live_predictions(freqai.dk)
|
||||||
|
|
|
@ -285,7 +285,7 @@ def test_edge_overrides_stoploss(limit_order, fee, caplog, mocker,
|
||||||
'last': enter_price * buy_price_mult,
|
'last': enter_price * buy_price_mult,
|
||||||
})
|
})
|
||||||
|
|
||||||
# stoploss shoud be hit
|
# stoploss should be hit
|
||||||
assert freqtrade.handle_trade(trade) is not ignore_strat_sl
|
assert freqtrade.handle_trade(trade) is not ignore_strat_sl
|
||||||
if not ignore_strat_sl:
|
if not ignore_strat_sl:
|
||||||
assert log_has_re('Exit for NEO/BTC detected. Reason: stop_loss.*', caplog)
|
assert log_has_re('Exit for NEO/BTC detected. Reason: stop_loss.*', caplog)
|
||||||
|
@ -1398,7 +1398,7 @@ def test_update_trade_state_sell(
|
||||||
assert order.status == 'open'
|
assert order.status == 'open'
|
||||||
freqtrade.update_trade_state(trade, trade.open_orders_ids[-1], l_order)
|
freqtrade.update_trade_state(trade, trade.open_orders_ids[-1], l_order)
|
||||||
assert trade.amount == l_order['amount']
|
assert trade.amount == l_order['amount']
|
||||||
# Wallet needs to be updated after closing a limit-sell order to reenable buying
|
# Wallet needs to be updated after closing a limit-sell order to re-enable buying
|
||||||
assert wallet_mock.call_count == 1
|
assert wallet_mock.call_count == 1
|
||||||
assert not trade.is_open
|
assert not trade.is_open
|
||||||
# Order is updated by update_trade_state
|
# Order is updated by update_trade_state
|
||||||
|
@ -3122,7 +3122,7 @@ def test_exit_profit_only(
|
||||||
if profit_only:
|
if profit_only:
|
||||||
assert freqtrade.handle_trade(trade) is False
|
assert freqtrade.handle_trade(trade) is False
|
||||||
# Custom-exit is called
|
# Custom-exit is called
|
||||||
freqtrade.strategy.custom_exit.call_count == 1
|
assert freqtrade.strategy.custom_exit.call_count == 1
|
||||||
|
|
||||||
patch_get_signal(freqtrade, enter_long=False, exit_short=is_short, exit_long=not is_short)
|
patch_get_signal(freqtrade, enter_long=False, exit_short=is_short, exit_long=not is_short)
|
||||||
assert freqtrade.handle_trade(trade) is handle_first
|
assert freqtrade.handle_trade(trade) is handle_first
|
||||||
|
@ -3240,7 +3240,7 @@ def test_locked_pairs(default_conf_usdt, ticker_usdt, fee,
|
||||||
)
|
)
|
||||||
trade.close(ticker_usdt_sell_down()['bid'])
|
trade.close(ticker_usdt_sell_down()['bid'])
|
||||||
assert freqtrade.strategy.is_pair_locked(trade.pair, side='*')
|
assert freqtrade.strategy.is_pair_locked(trade.pair, side='*')
|
||||||
# Boths sides are locked
|
# Both sides are locked
|
||||||
assert freqtrade.strategy.is_pair_locked(trade.pair, side='long')
|
assert freqtrade.strategy.is_pair_locked(trade.pair, side='long')
|
||||||
assert freqtrade.strategy.is_pair_locked(trade.pair, side='short')
|
assert freqtrade.strategy.is_pair_locked(trade.pair, side='short')
|
||||||
|
|
||||||
|
@ -4558,6 +4558,67 @@ def test_handle_onexchange_order(mocker, default_conf_usdt, limit_order, is_shor
|
||||||
assert trade.exit_reason == ExitType.SOLD_ON_EXCHANGE.value
|
assert trade.exit_reason == ExitType.SOLD_ON_EXCHANGE.value
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.usefixtures("init_persistence")
|
||||||
|
@pytest.mark.parametrize("is_short", [False, True])
|
||||||
|
@pytest.mark.parametrize("factor,adjusts", [
|
||||||
|
(0.99, True),
|
||||||
|
(0.97, False),
|
||||||
|
])
|
||||||
|
def test_handle_onexchange_order_changed_amount(
|
||||||
|
mocker, default_conf_usdt, limit_order, is_short, caplog,
|
||||||
|
factor, adjusts,
|
||||||
|
):
|
||||||
|
default_conf_usdt['dry_run'] = False
|
||||||
|
freqtrade = get_patched_freqtradebot(mocker, default_conf_usdt)
|
||||||
|
mock_uts = mocker.spy(freqtrade, 'update_trade_state')
|
||||||
|
|
||||||
|
entry_order = limit_order[entry_side(is_short)]
|
||||||
|
mock_fo = mocker.patch(f'{EXMS}.fetch_orders', return_value=[
|
||||||
|
entry_order,
|
||||||
|
])
|
||||||
|
|
||||||
|
trade = Trade(
|
||||||
|
pair='ETH/USDT',
|
||||||
|
fee_open=0.001,
|
||||||
|
base_currency='ETH',
|
||||||
|
fee_close=0.001,
|
||||||
|
open_rate=entry_order['price'],
|
||||||
|
open_date=dt_now(),
|
||||||
|
stake_amount=entry_order['cost'],
|
||||||
|
amount=entry_order['amount'],
|
||||||
|
exchange="binance",
|
||||||
|
is_short=is_short,
|
||||||
|
leverage=1,
|
||||||
|
)
|
||||||
|
freqtrade.wallets = MagicMock()
|
||||||
|
freqtrade.wallets.get_total = MagicMock(return_value=entry_order['amount'] * factor)
|
||||||
|
|
||||||
|
trade.orders.append(Order.parse_from_ccxt_object(
|
||||||
|
entry_order, 'ADA/USDT', entry_side(is_short))
|
||||||
|
)
|
||||||
|
Trade.session.add(trade)
|
||||||
|
|
||||||
|
# assert trade.amount > entry_order['amount']
|
||||||
|
|
||||||
|
freqtrade.handle_onexchange_order(trade)
|
||||||
|
assert mock_uts.call_count == 1
|
||||||
|
assert mock_fo.call_count == 1
|
||||||
|
|
||||||
|
trade = Trade.session.scalars(select(Trade)).first()
|
||||||
|
|
||||||
|
assert log_has_re(r'.*has a total of .* but the Wallet shows.*', caplog)
|
||||||
|
if adjusts:
|
||||||
|
# Trade amount is updated
|
||||||
|
assert trade.amount == entry_order['amount'] * factor
|
||||||
|
assert log_has_re(r'.*Adjusting trade amount to.*', caplog)
|
||||||
|
else:
|
||||||
|
assert log_has_re(r'.*Refusing to adjust as the difference.*', caplog)
|
||||||
|
assert trade.amount == entry_order['amount']
|
||||||
|
|
||||||
|
assert len(trade.orders) == 1
|
||||||
|
assert trade.is_open is True
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.usefixtures("init_persistence")
|
@pytest.mark.usefixtures("init_persistence")
|
||||||
@pytest.mark.parametrize("is_short", [False, True])
|
@pytest.mark.parametrize("is_short", [False, True])
|
||||||
def test_handle_onexchange_order_exit(mocker, default_conf_usdt, limit_order, is_short, caplog):
|
def test_handle_onexchange_order_exit(mocker, default_conf_usdt, limit_order, is_short, caplog):
|
||||||
|
@ -4829,7 +4890,7 @@ def test_update_funding_fees(
|
||||||
freqtrade.execute_entry('ETH/USDT', 123, is_short=is_short)
|
freqtrade.execute_entry('ETH/USDT', 123, is_short=is_short)
|
||||||
freqtrade.execute_entry('LTC/USDT', 2.0, is_short=is_short)
|
freqtrade.execute_entry('LTC/USDT', 2.0, is_short=is_short)
|
||||||
freqtrade.execute_entry('XRP/USDT', 123, is_short=is_short)
|
freqtrade.execute_entry('XRP/USDT', 123, is_short=is_short)
|
||||||
multipl = 1 if is_short else -1
|
multiple = 1 if is_short else -1
|
||||||
trades = Trade.get_open_trades()
|
trades = Trade.get_open_trades()
|
||||||
assert len(trades) == 3
|
assert len(trades) == 3
|
||||||
for trade in trades:
|
for trade in trades:
|
||||||
|
@ -4847,7 +4908,7 @@ def test_update_funding_fees(
|
||||||
assert trade.funding_fees == pytest.approx(sum(
|
assert trade.funding_fees == pytest.approx(sum(
|
||||||
trade.amount *
|
trade.amount *
|
||||||
mark_prices[trade.pair].iloc[1:2]['open'] *
|
mark_prices[trade.pair].iloc[1:2]['open'] *
|
||||||
funding_rates[trade.pair].iloc[1:2]['open'] * multipl
|
funding_rates[trade.pair].iloc[1:2]['open'] * multiple
|
||||||
))
|
))
|
||||||
|
|
||||||
else:
|
else:
|
||||||
|
@ -4859,7 +4920,7 @@ def test_update_funding_fees(
|
||||||
trade.amount *
|
trade.amount *
|
||||||
mark_prices[trade.pair].iloc[1:2]['open'] *
|
mark_prices[trade.pair].iloc[1:2]['open'] *
|
||||||
funding_rates[trade.pair].iloc[1:2]['open'] *
|
funding_rates[trade.pair].iloc[1:2]['open'] *
|
||||||
multipl
|
multiple
|
||||||
))
|
))
|
||||||
|
|
||||||
|
|
||||||
|
|
|
@ -107,7 +107,7 @@ tc5 = BTContainer(data=[
|
||||||
trades=[BTrade(exit_reason=ExitType.ROI, open_tick=1, close_tick=3)]
|
trades=[BTrade(exit_reason=ExitType.ROI, open_tick=1, close_tick=3)]
|
||||||
)
|
)
|
||||||
|
|
||||||
# Test 6: Drops 3% / Recovers 6% Positive / Closes 1% positve, Stop-Loss triggers 2% Loss
|
# Test 6: Drops 3% / Recovers 6% Positive / Closes 1% positive, Stop-Loss triggers 2% Loss
|
||||||
# stop-loss: 2% ROI: 5%
|
# stop-loss: 2% ROI: 5%
|
||||||
tc6 = BTContainer(data=[
|
tc6 = BTContainer(data=[
|
||||||
# D O H L C V EL XL ES Xs BT
|
# D O H L C V EL XL ES Xs BT
|
||||||
|
@ -121,7 +121,7 @@ tc6 = BTContainer(data=[
|
||||||
trades=[BTrade(exit_reason=ExitType.STOP_LOSS, open_tick=1, close_tick=2)]
|
trades=[BTrade(exit_reason=ExitType.STOP_LOSS, open_tick=1, close_tick=2)]
|
||||||
)
|
)
|
||||||
|
|
||||||
# Test 7: 6% Positive / 1% Negative / Close 1% Positve, ROI Triggers 3% Gain
|
# Test 7: 6% Positive / 1% Negative / Close 1% Positive, ROI Triggers 3% Gain
|
||||||
# stop-loss: 2% ROI: 3%
|
# stop-loss: 2% ROI: 3%
|
||||||
tc7 = BTContainer(data=[
|
tc7 = BTContainer(data=[
|
||||||
# D O H L C V EL XL ES Xs BT
|
# D O H L C V EL XL ES Xs BT
|
||||||
|
|
|
@ -87,9 +87,9 @@ def test_backtest_position_adjustment(default_conf, fee, mocker, testdatadir) ->
|
||||||
|
|
||||||
for _, t in results.iterrows():
|
for _, t in results.iterrows():
|
||||||
ln = data_pair.loc[data_pair["date"] == t["open_date"]]
|
ln = data_pair.loc[data_pair["date"] == t["open_date"]]
|
||||||
# Check open trade rate alignes to open rate
|
# Check open trade rate aligns to open rate
|
||||||
assert ln is not None
|
assert ln is not None
|
||||||
# check close trade rate alignes to close rate or is between high and low
|
# check close trade rate aligns to close rate or is between high and low
|
||||||
ln = data_pair.loc[data_pair["date"] == t["close_date"]]
|
ln = data_pair.loc[data_pair["date"] == t["close_date"]]
|
||||||
assert (round(ln.iloc[0]["open"], 6) == round(t["close_rate"], 6) or
|
assert (round(ln.iloc[0]["open"], 6) == round(t["close_rate"], 6) or
|
||||||
round(ln.iloc[0]["low"], 6) < round(
|
round(ln.iloc[0]["low"], 6) < round(
|
||||||
|
|
|
@ -901,6 +901,7 @@ def test_in_strategy_auto_hyperopt(mocker, hyperopt_conf, tmp_path, fee) -> None
|
||||||
hyperopt.get_optimizer([], 2)
|
hyperopt.get_optimizer([], 2)
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.filterwarnings("ignore::DeprecationWarning")
|
||||||
def test_in_strategy_auto_hyperopt_with_parallel(mocker, hyperopt_conf, tmp_path, fee) -> None:
|
def test_in_strategy_auto_hyperopt_with_parallel(mocker, hyperopt_conf, tmp_path, fee) -> None:
|
||||||
mocker.patch(f'{EXMS}.validate_config', MagicMock())
|
mocker.patch(f'{EXMS}.validate_config', MagicMock())
|
||||||
mocker.patch(f'{EXMS}.get_fee', fee)
|
mocker.patch(f'{EXMS}.get_fee', fee)
|
||||||
|
|
|
@ -172,8 +172,8 @@ def test__pprint_dict():
|
||||||
}"""
|
}"""
|
||||||
|
|
||||||
|
|
||||||
def test_get_strategy_filename(default_conf):
|
def test_get_strategy_filename(default_conf, tmp_path):
|
||||||
|
default_conf['user_data_dir'] = tmp_path
|
||||||
x = HyperoptTools.get_strategy_filename(default_conf, 'StrategyTestV3')
|
x = HyperoptTools.get_strategy_filename(default_conf, 'StrategyTestV3')
|
||||||
assert isinstance(x, Path)
|
assert isinstance(x, Path)
|
||||||
assert x == Path(__file__).parents[1] / 'strategy/strats/strategy_test_v3.py'
|
assert x == Path(__file__).parents[1] / 'strategy/strats/strategy_test_v3.py'
|
||||||
|
@ -233,6 +233,7 @@ def test_export_params(tmp_path):
|
||||||
|
|
||||||
def test_try_export_params(default_conf, tmp_path, caplog, mocker):
|
def test_try_export_params(default_conf, tmp_path, caplog, mocker):
|
||||||
default_conf['disableparamexport'] = False
|
default_conf['disableparamexport'] = False
|
||||||
|
default_conf['user_data_dir'] = tmp_path
|
||||||
export_mock = mocker.patch("freqtrade.optimize.hyperopt_tools.HyperoptTools.export_params")
|
export_mock = mocker.patch("freqtrade.optimize.hyperopt_tools.HyperoptTools.export_params")
|
||||||
|
|
||||||
filename = tmp_path / f"{CURRENT_TEST_STRATEGY}.json"
|
filename = tmp_path / f"{CURRENT_TEST_STRATEGY}.json"
|
||||||
|
|
|
@ -14,7 +14,8 @@ from tests.conftest import EXMS, get_args, log_has_re, patch_exchange
|
||||||
|
|
||||||
|
|
||||||
@pytest.fixture
|
@pytest.fixture
|
||||||
def lookahead_conf(default_conf_usdt):
|
def lookahead_conf(default_conf_usdt, tmp_path):
|
||||||
|
default_conf_usdt['user_data_dir'] = tmp_path
|
||||||
default_conf_usdt['minimum_trade_amount'] = 10
|
default_conf_usdt['minimum_trade_amount'] = 10
|
||||||
default_conf_usdt['targeted_trade_amount'] = 20
|
default_conf_usdt['targeted_trade_amount'] = 20
|
||||||
default_conf_usdt['timerange'] = '20220101-20220501'
|
default_conf_usdt['timerange'] = '20220101-20220501'
|
||||||
|
@ -152,7 +153,7 @@ def test_lookahead_helper_text_table_lookahead_analysis_instances(lookahead_conf
|
||||||
assert data[0][2].__contains__('too few trades')
|
assert data[0][2].__contains__('too few trades')
|
||||||
assert len(data[0]) == 3
|
assert len(data[0]) == 3
|
||||||
|
|
||||||
# now check for an error which occured after enough trades
|
# now check for an error which occurred after enough trades
|
||||||
analysis.total_signals = 12
|
analysis.total_signals = 12
|
||||||
analysis.false_entry_signals = 11
|
analysis.false_entry_signals = 11
|
||||||
analysis.false_exit_signals = 10
|
analysis.false_exit_signals = 10
|
||||||
|
|
|
@ -129,7 +129,7 @@ def test_generate_backtest_stats(default_conf, testdatadir, tmp_path):
|
||||||
assert strat_stats['backtest_start'] == min_date.strftime(DATETIME_PRINT_FORMAT)
|
assert strat_stats['backtest_start'] == min_date.strftime(DATETIME_PRINT_FORMAT)
|
||||||
assert strat_stats['backtest_end'] == max_date.strftime(DATETIME_PRINT_FORMAT)
|
assert strat_stats['backtest_end'] == max_date.strftime(DATETIME_PRINT_FORMAT)
|
||||||
assert strat_stats['total_trades'] == len(results['DefStrat']['results'])
|
assert strat_stats['total_trades'] == len(results['DefStrat']['results'])
|
||||||
# Above sample had no loosing trade
|
# Above sample had no losing trade
|
||||||
assert strat_stats['max_drawdown_account'] == 0.0
|
assert strat_stats['max_drawdown_account'] == 0.0
|
||||||
|
|
||||||
# Retry with losing trade
|
# Retry with losing trade
|
||||||
|
@ -229,6 +229,28 @@ def test_store_backtest_stats(testdatadir, mocker):
|
||||||
assert str(dump_mock.call_args_list[0][0][0]).startswith(str(testdatadir / 'testresult'))
|
assert str(dump_mock.call_args_list[0][0][0]).startswith(str(testdatadir / 'testresult'))
|
||||||
|
|
||||||
|
|
||||||
|
def test_store_backtest_stats_real(tmp_path):
|
||||||
|
data = {'metadata': {}, 'strategy': {}, 'strategy_comparison': []}
|
||||||
|
store_backtest_stats(tmp_path, data, '2022_01_01_15_05_13')
|
||||||
|
|
||||||
|
assert (tmp_path / 'backtest-result-2022_01_01_15_05_13.json').is_file()
|
||||||
|
assert (tmp_path / 'backtest-result-2022_01_01_15_05_13.meta.json').is_file()
|
||||||
|
assert not (tmp_path / 'backtest-result-2022_01_01_15_05_13_market_change.feather').is_file()
|
||||||
|
assert (tmp_path / LAST_BT_RESULT_FN).is_file()
|
||||||
|
fn = get_latest_backtest_filename(tmp_path)
|
||||||
|
assert fn == 'backtest-result-2022_01_01_15_05_13.json'
|
||||||
|
|
||||||
|
store_backtest_stats(tmp_path, data, '2024_01_01_15_05_25', market_change_data=pd.DataFrame())
|
||||||
|
assert (tmp_path / 'backtest-result-2024_01_01_15_05_25.json').is_file()
|
||||||
|
assert (tmp_path / 'backtest-result-2024_01_01_15_05_25.meta.json').is_file()
|
||||||
|
assert (tmp_path / 'backtest-result-2024_01_01_15_05_25_market_change.feather').is_file()
|
||||||
|
assert (tmp_path / LAST_BT_RESULT_FN).is_file()
|
||||||
|
|
||||||
|
# Last file reference should be updated
|
||||||
|
fn = get_latest_backtest_filename(tmp_path)
|
||||||
|
assert fn == 'backtest-result-2024_01_01_15_05_25.json'
|
||||||
|
|
||||||
|
|
||||||
def test_store_backtest_candles(testdatadir, mocker):
|
def test_store_backtest_candles(testdatadir, mocker):
|
||||||
|
|
||||||
dump_mock = mocker.patch(
|
dump_mock = mocker.patch(
|
||||||
|
|
|
@ -14,7 +14,8 @@ from tests.conftest import get_args, log_has_re, patch_exchange
|
||||||
|
|
||||||
|
|
||||||
@pytest.fixture
|
@pytest.fixture
|
||||||
def recursive_conf(default_conf_usdt):
|
def recursive_conf(default_conf_usdt, tmp_path):
|
||||||
|
default_conf_usdt['user_data_dir'] = tmp_path
|
||||||
default_conf_usdt['timerange'] = '20220101-20220501'
|
default_conf_usdt['timerange'] = '20220101-20220501'
|
||||||
|
|
||||||
default_conf_usdt['strategy_path'] = str(
|
default_conf_usdt['strategy_path'] = str(
|
||||||
|
|
Some files were not shown because too many files have changed in this diff.