Merge branch 'develop' into freqai-pytorch-bugfixes

yinon 2023-08-04 12:47:41 +00:00
commit 8ebfb731d8
150 changed files with 2444 additions and 1116 deletions


@ -461,7 +461,7 @@ jobs:
python setup.py sdist bdist_wheel
- name: Publish to PyPI (Test)
uses: pypa/gh-action-pypi-publish@v1.8.7
uses: pypa/gh-action-pypi-publish@v1.8.8
if: (github.event_name == 'release')
with:
user: __token__
@ -469,7 +469,7 @@ jobs:
repository_url: https://test.pypi.org/legacy/
- name: Publish to PyPI
uses: pypa/gh-action-pypi-publish@v1.8.7
uses: pypa/gh-action-pypi-publish@v1.8.8
if: (github.event_name == 'release')
with:
user: __token__


@ -13,12 +13,12 @@ repos:
- id: mypy
exclude: build_helpers
additional_dependencies:
- types-cachetools==5.3.0.5
- types-cachetools==5.3.0.6
- types-filelock==3.2.7
- types-requests==2.31.0.1
- types-tabulate==0.9.0.2
- types-python-dateutil==2.8.19.13
- SQLAlchemy==2.0.18
- types-requests==2.31.0.2
- types-tabulate==0.9.0.3
- types-python-dateutil==2.8.19.14
- SQLAlchemy==2.0.19
# stages: [push]
- repo: https://github.com/pycqa/isort

Binary files not shown (4 files).


@ -1,21 +1,11 @@
# Downloads don't work automatically, since the URL is regenerated via javascript.
# Downloaded from https://www.lfd.uci.edu/~gohlke/pythonlibs/#ta-lib
# vendored Wheels compiled via https://github.com/xmatthias/ta-lib-python/tree/ta_bundled_040
python -m pip install --upgrade pip wheel
$pyv = python -c "import sys; print(f'{sys.version_info.major}.{sys.version_info.minor}')"
if ($pyv -eq '3.8') {
pip install build_helpers\TA_Lib-0.4.26-cp38-cp38-win_amd64.whl
}
if ($pyv -eq '3.9') {
pip install build_helpers\TA_Lib-0.4.26-cp39-cp39-win_amd64.whl
}
if ($pyv -eq '3.10') {
pip install build_helpers\TA_Lib-0.4.26-cp310-cp310-win_amd64.whl
}
if ($pyv -eq '3.11') {
pip install build_helpers\TA_Lib-0.4.26-cp311-cp311-win_amd64.whl
}
pip install --find-links=build_helpers\ TA-Lib
pip install -r requirements-dev.txt
pip install -e .


@ -206,6 +206,6 @@
"recursive_strategy_search": false,
"add_config_files": [],
"reduce_df_footprint": false,
"dataformat_ohlcv": "json",
"dataformat_trades": "jsongz"
"dataformat_ohlcv": "feather",
"dataformat_trades": "feather"
}


@ -32,5 +32,5 @@ services:
--logfile /freqtrade/user_data/logs/freqtrade.log
--db-url sqlite:////freqtrade/user_data/tradesv3.sqlite
--config /freqtrade/user_data/config.json
--freqai-model XGBoostClassifier
--strategy SampleStrategy
--freqaimodel XGBoostRegressor
--strategy FreqaiExampleStrategy


@ -103,6 +103,22 @@ The indicators have to be present in your strategy's main DataFrame (either for
timeframe or for informative timeframes), otherwise they will simply be ignored in the script
output.
There are a range of candle- and trade-related fields included in the analysis which are automatically accessible by adding them to the indicator list. These include:
- **open_date :** trade open datetime
- **close_date :** trade close datetime
- **min_rate :** minimum price seen throughout the position
- **max_rate :** maximum price seen throughout the position
- **open :** signal candle open price
- **close :** signal candle close price
- **high :** signal candle high price
- **low :** signal candle low price
- **volume :** signal candle volume
- **profit_ratio :** trade profit ratio
- **profit_abs :** absolute profit return of the trade
### Filtering the trade output by date
To show only trades between dates within your backtested timerange, supply the usual `timerange` option in `YYYYMMDD-[YYYYMMDD]` format:


@ -305,7 +305,7 @@ A backtesting result will look like that:
| Sharpe | 2.97 |
| Calmar | 6.29 |
| Profit factor | 1.11 |
| Expectancy | -0.15 |
| Expectancy (Ratio) | -0.15 (-0.05) |
| Avg. stake amount | 0.001 BTC |
| Total trade volume | 0.429 BTC |
| | |
@ -324,6 +324,7 @@ A backtesting result will look like that:
| Days win/draw/lose | 12 / 82 / 25 |
| Avg. Duration Winners | 4:23:00 |
| Avg. Duration Loser | 6:55:00 |
| Max Consecutive Wins / Loss | 3 / 4 |
| Rejected Entry signals | 3089 |
| Entry/Exit Timeouts | 0 / 0 |
| Canceled Trade Entries | 34 |
@ -409,7 +410,7 @@ It contains some useful key metrics about performance of your strategy on backte
| Sharpe | 2.97 |
| Calmar | 6.29 |
| Profit factor | 1.11 |
| Expectancy | -0.15 |
| Expectancy (Ratio) | -0.15 (-0.05) |
| Avg. stake amount | 0.001 BTC |
| Total trade volume | 0.429 BTC |
| | |
@ -428,6 +429,7 @@ It contains some useful key metrics about performance of your strategy on backte
| Days win/draw/lose | 12 / 82 / 25 |
| Avg. Duration Winners | 4:23:00 |
| Avg. Duration Loser | 6:55:00 |
| Max Consecutive Wins / Loss | 3 / 4 |
| Rejected Entry signals | 3089 |
| Entry/Exit Timeouts | 0 / 0 |
| Canceled Trade Entries | 34 |
@ -467,6 +469,7 @@ It contains some useful key metrics about performance of your strategy on backte
- `Best day` / `Worst day`: Best and worst day based on daily profit.
- `Days win/draw/lose`: Winning / Losing days (draws are usually days without closed trade).
- `Avg. Duration Winners` / `Avg. Duration Loser`: Average durations for winning and losing trades.
- `Max Consecutive Wins / Loss`: Maximum number of consecutive wins and losses.
- `Rejected Entry signals`: Trade entry signals that could not be acted upon due to `max_open_trades` being reached.
- `Entry/Exit Timeouts`: Entry/exit orders which did not fill (only applicable if custom pricing is used).
- `Canceled Trade Entries`: Number of trades that have been canceled by user request via `adjust_entry_price`.
@ -534,6 +537,7 @@ Since backtesting lacks some detailed information about what happens within a ca
- ROI
- exits are compared to high - but the ROI value is used (e.g. ROI = 2%, high=5% - so the exit will be at 2%)
- exits are never "below the candle", so a ROI of 2% may result in an exit at 2.4% if low was at 2.4% profit
- ROI entries which came into effect on the triggering candle (e.g. `120: 0.02` for 1h candles, from `60: 0.05`) will use the candle's open as exit rate
- Force-exits caused by `<N>=-1` ROI entries use low as exit value, unless N falls on the candle open (e.g. `120: -1` for 1h candles)
- Stoploss exits happen exactly at stoploss price, even if low was lower, but the loss will be `2 * fees` higher than the stoploss price
- Stoploss is evaluated before ROI within one candle. So you can often see more trades with the `stoploss` exit reason compared to the results obtained with the same strategy in the Dry Run/Live Trade modes


@ -251,8 +251,8 @@ Mandatory parameters are marked as **Required**, which means that they are requi
| `db_url` | Declares database URL to use. NOTE: This defaults to `sqlite:///tradesv3.dryrun.sqlite` if `dry_run` is `true`, and to `sqlite:///tradesv3.sqlite` for production instances. <br> **Datatype:** String, SQLAlchemy connect string
| `logfile` | Specifies logfile name. Uses a rolling strategy for log file rotation for 10 files with the 1MB limit per file. <br> **Datatype:** String
| `add_config_files` | Additional config files. These files will be loaded and merged with the current config file. The files are resolved relative to the initial file.<br> *Defaults to `[]`*. <br> **Datatype:** List of strings
| `dataformat_ohlcv` | Data format to use to store historical candle (OHLCV) data. <br> *Defaults to `json`*. <br> **Datatype:** String
| `dataformat_trades` | Data format to use to store historical trades data. <br> *Defaults to `jsongz`*. <br> **Datatype:** String
| `dataformat_ohlcv` | Data format to use to store historical candle (OHLCV) data. <br> *Defaults to `feather`*. <br> **Datatype:** String
| `dataformat_trades` | Data format to use to store historical trades data. <br> *Defaults to `feather`*. <br> **Datatype:** String
| `reduce_df_footprint` | Recast all numeric columns to float32/int32, with the objective of reducing ram/disk usage (and decreasing train/inference timing in FreqAI). (Currently only affects FreqAI use-cases) <br> **Datatype:** Boolean. <br> Default: `False`.
### Parameters in the strategy


@ -27,11 +27,11 @@ usage: freqtrade download-data [-h] [-v] [--logfile FILE] [-V] [-c PATH]
[--exchange EXCHANGE]
[-t TIMEFRAMES [TIMEFRAMES ...]] [--erase]
[--data-format-ohlcv {json,jsongz,hdf5,feather,parquet}]
[--data-format-trades {json,jsongz,hdf5}]
[--data-format-trades {json,jsongz,hdf5,feather}]
[--trading-mode {spot,margin,futures}]
[--prepend]
optional arguments:
options:
-h, --help show this help message and exit
-p PAIRS [PAIRS ...], --pairs PAIRS [PAIRS ...]
Limit command to these pairs. Pairs are space-
@ -48,8 +48,7 @@ optional arguments:
--dl-trades Download trades instead of OHLCV data. The bot will
resample trades to the desired timeframe as specified
as --timeframes/-t.
--exchange EXCHANGE Exchange name (default: `bittrex`). Only valid if no
config is provided.
--exchange EXCHANGE Exchange name. Only valid if no config is provided.
-t TIMEFRAMES [TIMEFRAMES ...], --timeframes TIMEFRAMES [TIMEFRAMES ...]
Specify which tickers to download. Space-separated
list. Default: `1m 5m`.
@ -57,17 +56,18 @@ optional arguments:
exchange/pairs/timeframes.
--data-format-ohlcv {json,jsongz,hdf5,feather,parquet}
Storage format for downloaded candle (OHLCV) data.
(default: `json`).
--data-format-trades {json,jsongz,hdf5}
(default: `feather`).
--data-format-trades {json,jsongz,hdf5,feather}
Storage format for downloaded trades data. (default:
`jsongz`).
`feather`).
--trading-mode {spot,margin,futures}, --tradingmode {spot,margin,futures}
Select Trading mode
--prepend Allow data prepending. (Data-appending is disabled)
Common arguments:
-v, --verbose Verbose mode (-vv for more, -vvv to get all messages).
--logfile FILE Log to the file specified. Special values are:
--logfile FILE, --log-file FILE
Log to the file specified. Special values are:
'syslog', 'journald'. See the documentation for more
details.
-V, --version show program's version number and exit
@ -157,7 +157,7 @@ Freqtrade currently supports the following data-formats:
* `json` - plain "text" json files
* `jsongz` - a gzip-zipped version of json files
* `hdf5` - a high performance datastore
* `feather` - a dataformat based on Apache Arrow (OHLCV only)
* `feather` - a dataformat based on Apache Arrow
* `parquet` - columnar datastore (OHLCV only)
By default, OHLCV data is stored as `json` data, while trades data is stored as `jsongz` data.
@ -255,7 +255,7 @@ usage: freqtrade convert-data [-h] [-v] [--logfile FILE] [-V] [-c PATH]
[--trading-mode {spot,margin,futures}]
[--candle-types {spot,futures,mark,index,premiumIndex,funding_rate} [{spot,futures,mark,index,premiumIndex,funding_rate} ...]]
optional arguments:
options:
-h, --help show this help message and exit
-p PAIRS [PAIRS ...], --pairs PAIRS [PAIRS ...]
Limit command to these pairs. Pairs are space-
@ -266,19 +266,20 @@ optional arguments:
Destination format for data conversion.
--erase Clean all existing data for the selected
exchange/pairs/timeframes.
--exchange EXCHANGE Exchange name (default: `bittrex`). Only valid if no
config is provided.
--exchange EXCHANGE Exchange name. Only valid if no config is provided.
-t TIMEFRAMES [TIMEFRAMES ...], --timeframes TIMEFRAMES [TIMEFRAMES ...]
Specify which tickers to download. Space-separated
list. Default: `1m 5m`.
--trading-mode {spot,margin,futures}, --tradingmode {spot,margin,futures}
Select Trading mode
--candle-types {spot,futures,mark,index,premiumIndex,funding_rate} [{spot,futures,mark,index,premiumIndex,funding_rate} ...]
Select candle type to use
Select candle type to convert. Defaults to all
available types.
Common arguments:
-v, --verbose Verbose mode (-vv for more, -vvv to get all messages).
--logfile FILE Log to the file specified. Special values are:
--logfile FILE, --log-file FILE
Log to the file specified. Special values are:
'syslog', 'journald'. See the documentation for more
details.
-V, --version show program's version number and exit
@ -291,7 +292,6 @@ Common arguments:
Path to directory with historical backtesting data.
--userdir PATH, --user-data-dir PATH
Path to userdata directory.
```
### Example converting data
@ -314,7 +314,7 @@ usage: freqtrade convert-trade-data [-h] [-v] [--logfile FILE] [-V] [-c PATH]
{json,jsongz,hdf5,feather,parquet}
[--erase] [--exchange EXCHANGE]
optional arguments:
options:
-h, --help show this help message and exit
-p PAIRS [PAIRS ...], --pairs PAIRS [PAIRS ...]
Limit command to these pairs. Pairs are space-
@ -325,12 +325,12 @@ optional arguments:
Destination format for data conversion.
--erase Clean all existing data for the selected
exchange/pairs/timeframes.
--exchange EXCHANGE Exchange name (default: `bittrex`). Only valid if no
config is provided.
--exchange EXCHANGE Exchange name. Only valid if no config is provided.
Common arguments:
-v, --verbose Verbose mode (-vv for more, -vvv to get all messages).
--logfile FILE Log to the file specified. Special values are:
--logfile FILE, --log-file FILE
Log to the file specified. Special values are:
'syslog', 'journald'. See the documentation for more
details.
-V, --version show program's version number and exit
@ -367,9 +367,9 @@ usage: freqtrade trades-to-ohlcv [-h] [-v] [--logfile FILE] [-V] [-c PATH]
[-t TIMEFRAMES [TIMEFRAMES ...]]
[--exchange EXCHANGE]
[--data-format-ohlcv {json,jsongz,hdf5,feather,parquet}]
[--data-format-trades {json,jsongz,hdf5}]
[--data-format-trades {json,jsongz,hdf5,feather}]
optional arguments:
options:
-h, --help show this help message and exit
-p PAIRS [PAIRS ...], --pairs PAIRS [PAIRS ...]
Limit command to these pairs. Pairs are space-
@ -377,18 +377,18 @@ optional arguments:
-t TIMEFRAMES [TIMEFRAMES ...], --timeframes TIMEFRAMES [TIMEFRAMES ...]
Specify which tickers to download. Space-separated
list. Default: `1m 5m`.
--exchange EXCHANGE Exchange name (default: `bittrex`). Only valid if no
config is provided.
--exchange EXCHANGE Exchange name. Only valid if no config is provided.
--data-format-ohlcv {json,jsongz,hdf5,feather,parquet}
Storage format for downloaded candle (OHLCV) data.
(default: `json`).
--data-format-trades {json,jsongz,hdf5}
(default: `feather`).
--data-format-trades {json,jsongz,hdf5,feather}
Storage format for downloaded trades data. (default:
`jsongz`).
`feather`).
Common arguments:
-v, --verbose Verbose mode (-vv for more, -vvv to get all messages).
--logfile FILE Log to the file specified. Special values are:
--logfile FILE, --log-file FILE
Log to the file specified. Special values are:
'syslog', 'journald'. See the documentation for more
details.
-V, --version show program's version number and exit
@ -422,13 +422,12 @@ usage: freqtrade list-data [-h] [-v] [--logfile FILE] [-V] [-c PATH] [-d PATH]
[--trading-mode {spot,margin,futures}]
[--show-timerange]
optional arguments:
options:
-h, --help show this help message and exit
--exchange EXCHANGE Exchange name (default: `bittrex`). Only valid if no
config is provided.
--exchange EXCHANGE Exchange name. Only valid if no config is provided.
--data-format-ohlcv {json,jsongz,hdf5,feather,parquet}
Storage format for downloaded candle (OHLCV) data.
(default: `json`).
(default: `feather`).
-p PAIRS [PAIRS ...], --pairs PAIRS [PAIRS ...]
Limit command to these pairs. Pairs are space-
separated.
@ -439,7 +438,8 @@ optional arguments:
Common arguments:
-v, --verbose Verbose mode (-vv for more, -vvv to get all messages).
--logfile FILE Log to the file specified. Special values are:
--logfile FILE, --log-file FILE
Log to the file specified. Special values are:
'syslog', 'journald'. See the documentation for more
details.
-V, --version show program's version number and exit
@ -474,7 +474,7 @@ ETH/USDT 5m, 15m, 30m, 1h, 2h, 4h
By default, `download-data` sub-command downloads Candles (OHLCV) data. Some exchanges also provide historic trade-data via their API.
This data can be useful if you need many different timeframes, since it is only downloaded once, and then resampled locally to the desired timeframes.
Since this data is large by default, the files use gzip by default. They are stored in your data-directory with the naming convention of `<pair>-trades.json.gz` (`ETH_BTC-trades.json.gz`). Incremental mode is also supported, as for historic OHLCV data, so downloading the data once per week with `--days 8` will create an incremental data-repository.
Since this data is large, the files use the feather file format by default. They are stored in your data-directory with the naming convention of `<pair>-trades.feather` (`ETH_BTC-trades.feather`). Incremental mode is also supported, as for historic OHLCV data, so downloading the data once per week with `--days 8` will create an incremental data-repository.
To use this mode, simply add `--dl-trades` to your call. This will swap the download method to download trades, and resamples the data locally.


@ -259,10 +259,17 @@ The configuration parameter `exchange.unknown_fee_rate` can be used to specify t
Futures trading on bybit is currently supported for USDT markets, and will use isolated futures mode.
Users with unified accounts (there's no way back) can create a Sub-account which will start as "non-unified", and can therefore use isolated futures.
On startup, freqtrade will set the position mode to "One-way Mode" for the whole (sub)account. This avoids making this call over and over again (slowing down bot operations), but means that changes to this setting may result in exceptions and errors.
On startup, freqtrade will set the position mode to "One-way Mode" for the whole (sub)account. This avoids making this call over and over again (slowing down bot operations), but means that changes to this setting may result in exceptions and errors.
As bybit doesn't provide funding rate history, the dry-run calculation is used for live trades as well.
API Keys for live futures trading (Subaccount on non-unified) must have the following permissions:
* Read-write
* Contract - Orders
* Contract - Positions
We strongly recommend limiting all API keys to the IP address you're going to use them from.
!!! Tip "Stoploss on Exchange"
Bybit (futures only) supports `stoploss_on_exchange` and uses `stop-loss-limit` orders. It provides significant advantages, so we recommend enabling stoploss on exchange to benefit from it.
On futures, Bybit supports both `stop-limit` and `stop-market` orders. You can use either `"limit"` or `"market"` in the `order_types.stoploss` configuration setting to decide which type to use.


@ -261,7 +261,7 @@ class MyFreqaiModel(BaseRegressionModel):
"""
feature_pipeline = Pipeline([
('qt', SKLearnWrapper(QuantileTransformer(output_distribution='normal'))),
('di', ds.DissimilarityIndex(di_threshold=1)
('di', ds.DissimilarityIndex(di_threshold=1))
])
return feature_pipeline


@ -42,7 +42,6 @@ Mandatory parameters are marked as **Required** and have to be set in one of the
| `use_SVM_to_remove_outliers` | Train a support vector machine to detect and remove outliers from the training dataset, as well as from incoming data points. See details about how it works [here](freqai-feature-engineering.md#identifying-outliers-using-a-support-vector-machine-svm). <br> **Datatype:** Boolean.
| `svm_params` | All parameters available in Sklearn's `SGDOneClassSVM()`. See details about some select parameters [here](freqai-feature-engineering.md#identifying-outliers-using-a-support-vector-machine-svm). <br> **Datatype:** Dictionary.
| `use_DBSCAN_to_remove_outliers` | Cluster data using the DBSCAN algorithm to identify and remove outliers from training and prediction data. See details about how it works [here](freqai-feature-engineering.md#identifying-outliers-with-dbscan). <br> **Datatype:** Boolean.
| `inlier_metric_window` | If set, FreqAI adds an `inlier_metric` to the training feature set and sets the lookback to the `inlier_metric_window`, i.e., the number of previous time points to compare the current candle to. Details of how the `inlier_metric` is computed can be found [here](freqai-feature-engineering.md#inlier-metric). <br> **Datatype:** Integer. <br> Default: `0`.
| `noise_standard_deviation` | If set, FreqAI adds noise to the training features with the aim of preventing overfitting. FreqAI generates random deviates from a gaussian distribution with a standard deviation of `noise_standard_deviation` and adds them to all data points (see the sketch after this table). `noise_standard_deviation` should be kept relative to the normalized space, i.e., between -1 and 1. In other words, since data in FreqAI is always normalized to be between -1 and 1, `noise_standard_deviation: 0.05` would result in 32% of the data being randomly increased/decreased by more than 2.5% (i.e., the percent of data falling outside the first standard deviation). <br> **Datatype:** Float. <br> Default: `0`.
| `outlier_protection_percentage` | Enable to prevent outlier detection methods from discarding too much data. If more than `outlier_protection_percentage` % of points are detected as outliers by the SVM or DBSCAN, FreqAI will log a warning message and ignore outlier detection, i.e., the original dataset will be kept intact. If the outlier protection is triggered, no predictions will be made based on the training dataset. <br> **Datatype:** Float. <br> Default: `30`.
| `reverse_train_test_order` | Split the feature dataset (see below) and use the latest data split for training and test on historical split of the data. This allows the model to be trained up to the most recent data point, while avoiding overfitting. However, you should be careful to understand the unorthodox nature of this parameter before employing it. <br> **Datatype:** Boolean. <br> Default: `False` (no reversal).
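As an illustration of the `noise_standard_deviation` mechanic above (a minimal sketch, not FreqAI internals, with made-up feature data assumed to be normalized to [-1, 1]):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
# Toy feature set, already normalized to the [-1, 1] range as FreqAI does.
features = pd.DataFrame(rng.uniform(-1, 1, size=(100, 3)))
noise_standard_deviation = 0.05
# Add gaussian deviates to every data point to discourage overfitting.
noisy_features = features + rng.normal(0.0, noise_standard_deviation, size=features.shape)
```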


@ -433,9 +433,14 @@ While this strategy is most likely too simple to provide consistent profit, it s
`range` property may also be used with `DecimalParameter` and `CategoricalParameter`. `RealParameter` does not provide this property due to infinite search space.
??? Hint "Performance tip"
During normal hyperopting, indicators are calculated once and supplied to each epoch, linearly increasing RAM usage as a factor of increasing cores. As this also has performance implications, hyperopt provides `--analyze-per-epoch` which will move the execution of `populate_indicators()` to the epoch process, calculating a single value per parameter per epoch instead of using the `.range` functionality. In this case, `.range` functionality will only return the actually used value. This will reduce RAM usage, but increase CPU usage. However, your hyperopting run will be less likely to fail due to Out Of Memory (OOM) issues.
During normal hyperopting, indicators are calculated once and supplied to each epoch, linearly increasing RAM usage as a factor of increasing cores. As this also has performance implications, there are two alternatives to reduce RAM usage:
* Move the `ema_short` and `ema_long` calculations from `populate_indicators()` to `populate_entry_trend()`. Since `populate_entry_trend()` will be calculated every epoch, you don't need to use the `.range` functionality (see the sketch below).
* Use `--analyze-per-epoch`, which will move the execution of `populate_indicators()` to the epoch process, calculating a single value per parameter per epoch instead of using the `.range` functionality. In this case, `.range` functionality will only return the actually used value.
These alternatives will reduce RAM usage, but increase CPU usage. However, your hyperopting run will be less likely to fail due to Out Of Memory (OOM) issues.
Whether you are using `.range` functionality or the alternatives above, you should try to use space ranges as small as possible, since this will improve CPU/RAM usage in both scenarios.
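A minimal sketch of the first alternative, assuming the sample strategy's `buy_ema_short`/`buy_ema_long` parameters and the usual `talib.abstract as ta` / `qtpylib` imports (illustrative only, not the documented strategy):

```python
def populate_entry_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
    # Runs once per epoch, so only the currently tested parameter values are
    # computed - no `.range` pre-calculation in populate_indicators() needed.
    ema_short = ta.EMA(dataframe, timeperiod=int(self.buy_ema_short.value))
    ema_long = ta.EMA(dataframe, timeperiod=int(self.buy_ema_long.value))
    dataframe.loc[
        qtpylib.crossed_above(ema_short, ema_long),
        'enter_long'] = 1
    return dataframe
```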
## Optimizing protections


@ -211,7 +211,7 @@ The user is responsible for providing a server or local file that returns a JSON
```json
{
"pairs": ["XRP/USDT", "ETH/USDT", "LTC/USDT"],
"refresh_period": 1800,
"refresh_period": 1800
}
```

docs/includes/showcase.md (new file)

@ -0,0 +1,11 @@
This section will highlight a few projects from members of the community.
!!! Note
The projects below are for the most part not maintained by the freqtrade team, so use caution before using them.
- [Example freqtrade strategies](https://github.com/freqtrade/freqtrade-strategies/)
- [FrequentHippo - Grafana dashboard with dry/live runs and backtests](http://frequenthippo.ddns.net:3000/) (by hippocritical).
- [Online pairlist generator](https://remotepairlist.com/) (by Blood4rc).
- [Freqtrade Backtesting Project](https://bt.robot.co.network/) (by Blood4rc).
- [Freqtrade analysis notebook](https://github.com/froggleston/freqtrade_analysis_notebook) (by Froggleston).
- [TUI for freqtrade](https://github.com/froggleston/freqtrade-frogtrade9000) (by Froggleston).
- [Bot Academy](https://botacademy.ddns.net/) (by stash86) - Blog about crypto bot projects.


@ -63,6 +63,10 @@ Exchanges confirmed working by the community:
- [X] [Bitvavo](https://bitvavo.com/)
- [X] [Kucoin](https://www.kucoin.com/)
## Community showcase
--8<-- "includes/showcase.md"
## Requirements
### Hardware requirements


@ -1,6 +1,6 @@
markdown==3.3.7
mkdocs==1.4.3
mkdocs-material==9.1.18
markdown==3.4.4
mkdocs==1.5.2
mkdocs-material==9.1.21
mdx_truly_sane_lists==1.3
pymdown-extensions==10.0.1
pymdown-extensions==10.1
jinja2==3.1.2


@ -750,7 +750,7 @@ class DigDeeperStrategy(IStrategy):
# Hope you have a deep wallet!
try:
# This returns first order stake size
stake_amount = filled_entries[0].cost
stake_amount = filled_entries[0].stake_amount
# This then calculates current safety order size
stake_amount = stake_amount * (1 + (count_of_entries * 0.25))
return stake_amount
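As a worked example of the sizing math above, assuming the first entry order staked 100 USDT:

```python
first_stake = 100.0  # assumed stake_amount of the initial entry order
for count_of_entries in (1, 2, 3):
    # Safety orders 1-3 stake 125.0, 150.0 and 175.0 respectively.
    print(first_stake * (1 + count_of_entries * 0.25))
```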


@ -287,12 +287,17 @@ Return a summary of your profit/loss and performance.
> **Best Performing:** `PAY/BTC: 50.23%`
> **Trading volume:** `0.5 BTC`
> **Profit factor:** `1.04`
> **Win / Loss:** `102 / 36`
> **Winrate:** `73.91%`
> **Expectancy (Ratio):** `4.87 (1.66)`
> **Max Drawdown:** `9.23% (0.01255 BTC)`
The relative profit of `1.2%` is the average profit per trade.
The relative profit of `15.2 Σ%` is based on the starting capital - so in this case, the starting capital was `0.00485701 * 1.152 = 0.00738 BTC`.
Starting capital is either taken from the `available_capital` setting, or calculated by using current wallet size - profits.
Profit Factor is calculated as gross profits / gross losses - and should serve as an overall metric for the strategy.
Expectancy corresponds to the average return per currency unit at risk, combining the winrate with the risk-reward ratio (the average gain of winning trades compared to the average loss of losing trades).
Expectancy Ratio is the expected profit or loss of a subsequent trade based on the performance of all past trades.
Max drawdown corresponds to the backtesting metric `Absolute Drawdown (Account)` - calculated as `(Absolute Drawdown) / (DrawdownHigh + startingBalance)`.
Bot started date will refer to the date the bot was first started. For older bots, this will default to the first trade's open date.
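A short worked example of the expectancy metrics, using the `102 / 36` win/loss figures from the sample message and assumed average win/loss sizes:

```python
wins, losses = 102, 36
winrate = wins / (wins + losses)              # 0.7391 -> "Winrate: 73.91%"
avg_win, avg_loss = 20.0, 8.0                 # assumed per-trade averages
# Average return per trade, per currency unit at risk:
expectancy = winrate * avg_win - (1 - winrate) * avg_loss
# Expected outcome of a subsequent trade, relative to the average loss:
expectancy_ratio = (1 + avg_win / avg_loss) * winrate - 1
```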


@ -141,7 +141,8 @@ Most properties here can be None as they are dependant on the exchange response.
`amount` | float | Amount in base currency
`filled` | float | Filled amount (in base currency)
`remaining` | float | Remaining amount
`cost` | float | Cost of the order - usually average * filled
`cost` | float | Cost of the order - usually average * filled (*Exchange dependent on futures; may contain the cost with or without leverage and may be in contracts.*)
`stake_amount` | float | Stake amount used for this order. *Added in 2023.7.*
`order_date` | datetime | Order creation date **use `order_date_utc` instead**
`order_date_utc` | datetime | Order creation date (in UTC)
`order_fill_date` | datetime | Order fill date **use `order_fill_utc` instead**


@ -80,12 +80,18 @@ When using the Form-Encoded or JSON-Encoded configuration you can configure any
The result would be a POST request with e.g. `Status: running` body and `Content-Type: text/plain` header.
Optional parameters are available to enable automatic retries for webhook messages. The `webhook.retries` parameter can be set for the maximum number of retries the webhook request should attempt if it is unsuccessful (i.e. HTTP response status is not 200). By default this is set to `0` which is disabled. An additional `webhook.retry_delay` parameter can be set to specify the time in seconds between retry attempts. By default this is set to `0.1` (i.e. 100ms). Note that increasing the number of retries or retry delay may slow down the trader if there are connectivity issues with the webhook. Example configuration for retries:
## Additional configurations
The `webhook.retries` parameter can be set for the maximum number of retries the webhook request should attempt if it is unsuccessful (i.e. HTTP response status is not 200). By default this is set to `0` which is disabled. An additional `webhook.retry_delay` parameter can be set to specify the time in seconds between retry attempts. By default this is set to `0.1` (i.e. 100ms). Note that increasing the number of retries or retry delay may slow down the trader if there are connectivity issues with the webhook.
You can also specify `webhook.timeout` - which defines how long the bot will wait until it assumes the other host is unresponsive (defaults to 10s).
Example configuration for retries:
```json
"webhook": {
"enabled": true,
"url": "https://<YOURHOOKURL>",
"timeout": 10,
"retries": 3,
"retry_delay": 0.2,
"status": {
@ -109,6 +115,8 @@ Custom messages can be sent to Webhook endpoints via the `self.dp.send_msg()` fu
Different payloads can be configured for different events. Not all fields are necessary, but you should configure at least one of the dicts, otherwise the webhook will never be called.
## Webhook Message types
### Entry
The fields in `webhook.entry` are filled when the bot executes a long/short. Parameters are filled using string.format.


@ -1,5 +1,5 @@
""" Freqtrade bot """
__version__ = '2023.7.dev'
__version__ = '2023.8.dev'
if 'dev' in __version__:
from pathlib import Path


@ -5,6 +5,7 @@ from typing import Any, Dict, List
from questionary import Separator, prompt
from freqtrade.configuration.detect_environment import running_in_docker
from freqtrade.configuration.directory_operations import chown_user_directory
from freqtrade.constants import UNLIMITED_STAKE_AMOUNT
from freqtrade.exceptions import OperationalException
@ -179,7 +180,7 @@ def ask_user_config() -> Dict[str, Any]:
"name": "api_server_listen_addr",
"message": ("Insert Api server Listen Address (0.0.0.0 for docker, "
"otherwise best left untouched)"),
"default": "127.0.0.1",
"default": "127.0.0.1" if not running_in_docker() else "0.0.0.0",
"when": lambda x: x['api_server']
},
{


@ -435,12 +435,12 @@ AVAILABLE_CLI_OPTIONS = {
),
"dataformat_ohlcv": Arg(
'--data-format-ohlcv',
help='Storage format for downloaded candle (OHLCV) data. (default: `json`).',
help='Storage format for downloaded candle (OHLCV) data. (default: `feather`).',
choices=constants.AVAILABLE_DATAHANDLERS,
),
"dataformat_trades": Arg(
'--data-format-trades',
help='Storage format for downloaded trades data. (default: `jsongz`).',
help='Storage format for downloaded trades data. (default: `feather`).',
choices=constants.AVAILABLE_DATAHANDLERS_TRADES,
),
"show_timerange": Arg(


@ -3,4 +3,5 @@
from freqtrade.configuration.config_setup import setup_utils_configuration
from freqtrade.configuration.config_validation import validate_config_consistency
from freqtrade.configuration.configuration import Configuration
from freqtrade.configuration.detect_environment import running_in_docker
from freqtrade.configuration.timerange import TimeRange


@ -0,0 +1,8 @@
import os
def running_in_docker() -> bool:
"""
Check if we are running in a docker container
"""
return os.environ.get('FT_APP_ENV') == 'docker'
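A usage sketch: this helper now backs both the directory-permission logic in `directory_operations.py` and the new API listen-address default in the config builder above.

```python
from freqtrade.configuration import running_in_docker

# Inside docker, the API server must bind to 0.0.0.0 to be reachable;
# outside, the safer loopback default is kept.
default_listen_addr = "0.0.0.0" if running_in_docker() else "127.0.0.1"
```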


@ -3,6 +3,7 @@ import shutil
from pathlib import Path
from typing import Optional
from freqtrade.configuration.detect_environment import running_in_docker
from freqtrade.constants import (USER_DATA_FILES, USERPATH_FREQAIMODELS, USERPATH_HYPEROPTS,
USERPATH_NOTEBOOKS, USERPATH_STRATEGIES, Config)
from freqtrade.exceptions import OperationalException
@ -30,8 +31,7 @@ def chown_user_directory(directory: Path) -> None:
Use Sudo to change permissions of the home-directory if necessary
Only applies when running in docker!
"""
import os
if os.environ.get('FT_APP_ENV') == 'docker':
if running_in_docker():
try:
import subprocess
subprocess.check_output(


@ -153,7 +153,7 @@ CONF_SCHEMA = {
},
},
'amount_reserve_percent': {'type': 'number', 'minimum': 0.0, 'maximum': 0.5},
'stoploss': {'type': 'number', 'maximum': 0, 'exclusiveMaximum': True, 'minimum': -1},
'stoploss': {'type': 'number', 'maximum': 0, 'exclusiveMaximum': True},
'trailing_stop': {'type': 'boolean'},
'trailing_stop_positive': {'type': 'number', 'minimum': 0, 'maximum': 1},
'trailing_stop_positive_offset': {'type': 'number', 'minimum': 0, 'maximum': 1},
@ -446,12 +446,12 @@ CONF_SCHEMA = {
'dataformat_ohlcv': {
'type': 'string',
'enum': AVAILABLE_DATAHANDLERS,
'default': 'json'
'default': 'feather'
},
'dataformat_trades': {
'type': 'string',
'enum': AVAILABLE_DATAHANDLERS_TRADES,
'default': 'jsongz'
'default': 'feather'
},
'position_adjustment_enable': {'type': 'boolean'},
'max_entry_position_adjustment': {'type': ['integer', 'number'], 'minimum': -1},


@ -5,16 +5,17 @@ import logging
from copy import copy
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, List, Optional, Union
from typing import Any, Dict, List, Literal, Optional, Union
import numpy as np
import pandas as pd
from freqtrade.constants import LAST_BT_RESULT_FN, IntOrInf
from freqtrade.exceptions import OperationalException
from freqtrade.misc import json_load
from freqtrade.misc import file_dump_json, json_load
from freqtrade.optimize.backtest_caching import get_backtest_metadata_filename
from freqtrade.persistence import LocalTrade, Trade, init_db
from freqtrade.types import BacktestHistoryEntryType, BacktestResultType
logger = logging.getLogger(__name__)
@ -128,7 +129,7 @@ def load_backtest_metadata(filename: Union[Path, str]) -> Dict[str, Any]:
raise OperationalException('Unexpected error while loading backtest metadata.') from e
def load_backtest_stats(filename: Union[Path, str]) -> Dict[str, Any]:
def load_backtest_stats(filename: Union[Path, str]) -> BacktestResultType:
"""
Load backtest statistics file.
:param filename: pathlib.Path object, or string pointing to the file.
@ -147,21 +148,21 @@ def load_backtest_stats(filename: Union[Path, str]) -> Dict[str, Any]:
# Legacy list format does not contain metadata.
if isinstance(data, dict):
data['metadata'] = load_backtest_metadata(filename)
return data
def load_and_merge_backtest_result(strategy_name: str, filename: Path, results: Dict[str, Any]):
"""
Load one strategy from multi-strategy result
and merge it with results
Load one strategy from multi-strategy result and merge it with results
:param strategy_name: Name of the strategy contained in the result
:param filename: Backtest-result-filename to load
:param results: dict to merge the result to.
"""
bt_data = load_backtest_stats(filename)
for k in ('metadata', 'strategy'):
k: Literal['metadata', 'strategy']
for k in ('metadata', 'strategy'): # type: ignore
results[k][strategy_name] = bt_data[k][strategy_name]
results['metadata'][strategy_name]['filename'] = filename.stem
comparison = bt_data['strategy_comparison']
for i in range(len(comparison)):
if comparison[i]['key'] == strategy_name:
@ -170,27 +171,67 @@ def load_and_merge_backtest_result(strategy_name: str, filename: Path, results:
def _get_backtest_files(dirname: Path) -> List[Path]:
# Weird glob expression here avoids including .meta.json files.
return list(reversed(sorted(dirname.glob('backtest-result-*-[0-9][0-9].json'))))
def get_backtest_resultlist(dirname: Path):
def get_backtest_result(filename: Path) -> List[BacktestHistoryEntryType]:
"""
Get backtest result read from metadata file
"""
return [
{
'filename': filename.stem,
'strategy': s,
'notes': v.get('notes', ''),
'run_id': v['run_id'],
'backtest_start_time': v['backtest_start_time'],
} for s, v in load_backtest_metadata(filename).items()
]
def get_backtest_resultlist(dirname: Path) -> List[BacktestHistoryEntryType]:
"""
Get list of backtest results read from metadata files
"""
results = []
for filename in _get_backtest_files(dirname):
metadata = load_backtest_metadata(filename)
if not metadata:
continue
for s, v in metadata.items():
results.append({
'filename': filename.name,
'strategy': s,
'run_id': v['run_id'],
'backtest_start_time': v['backtest_start_time'],
return [
{
'filename': filename.stem,
'strategy': s,
'run_id': v['run_id'],
'notes': v.get('notes', ''),
'backtest_start_time': v['backtest_start_time'],
}
for filename in _get_backtest_files(dirname)
for s, v in load_backtest_metadata(filename).items()
if v
]
})
return results
def delete_backtest_result(file_abs: Path):
"""
Delete backtest result file and corresponding metadata file.
"""
# *.meta.json
logger.info(f"Deleting backtest result file: {file_abs.name}")
file_abs_meta = file_abs.with_suffix('.meta.json')
file_abs.unlink()
file_abs_meta.unlink()
def update_backtest_metadata(filename: Path, strategy: str, content: Dict[str, Any]):
"""
Updates backtest metadata file with new content.
:raises: ValueError if metadata file does not exist, or strategy is not in this file.
"""
metadata = load_backtest_metadata(filename)
if not metadata:
raise ValueError("File does not exist.")
if strategy not in metadata:
raise ValueError("Strategy not in metadata.")
metadata[strategy].update(content)
# Write data again.
file_dump_json(get_backtest_metadata_filename(filename), metadata)
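A hypothetical call (the filename is made up) attaching a note to one strategy's entry in a results file, assuming the module layout `freqtrade.data.btanalysis`:

```python
from pathlib import Path

from freqtrade.data.btanalysis import update_backtest_metadata

update_backtest_metadata(
    Path("user_data/backtest_results/backtest-result-2023-08-04_12-47-41.json"),
    strategy="SampleStrategy",
    content={"notes": "baseline before pytorch bugfixes"},
)
```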
def find_existing_backtest_stats(dirname: Union[Path, str], run_ids: Dict[str, str],
@ -211,7 +252,6 @@ def find_existing_backtest_stats(dirname: Union[Path, str], run_ids: Dict[str, s
'strategy_comparison': [],
}
# Weird glob expression here avoids including .meta.json files.
for filename in _get_backtest_files(dirname):
metadata = load_backtest_metadata(filename)
if not metadata:


@ -96,8 +96,14 @@ def ohlcv_fill_up_missing_data(dataframe: DataFrame, timeframe: str, pair: str)
'volume': 'sum'
}
timeframe_minutes = timeframe_to_minutes(timeframe)
resample_interval = f'{timeframe_minutes}min'
if timeframe_minutes >= 43200 and timeframe_minutes < 525600:
# Monthly candles need special treatment to stick to the 1st of the month
resample_interval = f'{timeframe}S'
elif timeframe_minutes > 43200:
resample_interval = timeframe
# Resample to create "NAN" values
df = dataframe.resample(f'{timeframe_minutes}min', on='date').agg(ohlcv_dict)
df = dataframe.resample(resample_interval, on='date').agg(ohlcv_dict)
# Forwardfill close for missing columns
df['close'] = df['close'].fillna(method='ffill')
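A standalone pandas illustration (toy data, not freqtrade code) of why the monthly case needs the `f'{timeframe}S'` ("month start") interval:

```python
import pandas as pd

idx = pd.date_range("2023-01-01", "2023-03-15", freq="D")
df = pd.DataFrame({"close": range(len(idx))}, index=idx)

# '1MS' anchors each monthly candle to the 1st of the month...
print(df.resample("1MS").agg({"close": "last"}).index.day.tolist())  # [1, 1, 1]
# ...while a plain '1M' frequency would anchor to month end instead.
print(df.resample("1M").agg({"close": "last"}).index.day.tolist())   # [31, 28, 31]
```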
@ -122,7 +128,7 @@ def ohlcv_fill_up_missing_data(dataframe: DataFrame, timeframe: str, pair: str)
return df
def trim_dataframe(df: DataFrame, timerange, df_date_col: str = 'date',
def trim_dataframe(df: DataFrame, timerange, *, df_date_col: str = 'date',
startup_candles: int = 0) -> DataFrame:
"""
Trim dataframe based on given timerange


@ -310,7 +310,7 @@ class DataProvider:
timeframe=timeframe or self._config['timeframe'],
datadir=self._config['datadir'],
timerange=timerange,
data_format=self._config.get('dataformat_ohlcv', 'json'),
data_format=self._config['dataformat_ohlcv'],
candle_type=_candle_type,
)


@ -69,7 +69,7 @@ def load_data(datadir: Path,
fill_up_missing: bool = True,
startup_candles: int = 0,
fail_without_data: bool = False,
data_format: str = 'json',
data_format: str = 'feather',
candle_type: CandleType = CandleType.SPOT,
user_futures_funding_rate: Optional[int] = None,
) -> Dict[str, DataFrame]:
@ -394,7 +394,7 @@ def _download_trades_history(exchange: Exchange,
def refresh_backtest_trades_data(exchange: Exchange, pairs: List[str], datadir: Path,
timerange: TimeRange, new_pairs_days: int = 30,
erase: bool = False, data_format: str = 'jsongz') -> List[str]:
erase: bool = False, data_format: str = 'feather') -> List[str]:
"""
Refresh stored trades data for backtesting and hyperopt operations.
Used by freqtrade download-data subcommand.
@ -427,8 +427,8 @@ def convert_trades_to_ohlcv(
datadir: Path,
timerange: TimeRange,
erase: bool = False,
data_format_ohlcv: str = 'json',
data_format_trades: str = 'jsongz',
data_format_ohlcv: str = 'feather',
data_format_trades: str = 'feather',
candle_type: CandleType = CandleType.SPOT
) -> None:
"""


@ -427,6 +427,6 @@ def get_datahandler(datadir: Path, data_format: Optional[str] = None,
"""
if not data_handler:
HandlerClass = get_datahandlerclass(data_format or 'json')
HandlerClass = get_datahandlerclass(data_format or 'feather')
data_handler = HandlerClass(datadir)
return data_handler


@ -194,32 +194,35 @@ def calculate_cagr(days_passed: int, starting_balance: float, final_balance: flo
return (final_balance / starting_balance) ** (1 / (days_passed / 365)) - 1
def calculate_expectancy(trades: pd.DataFrame) -> float:
def calculate_expectancy(trades: pd.DataFrame) -> Tuple[float, float]:
"""
Calculate expectancy
:param trades: DataFrame containing trades (requires columns close_date and profit_abs)
:return: expectancy
:return: expectancy, expectancy_ratio
"""
if len(trades) == 0:
return 0
expectancy = 1
expectancy = 0
expectancy_ratio = 100
profit_sum = trades.loc[trades['profit_abs'] > 0, 'profit_abs'].sum()
loss_sum = abs(trades.loc[trades['profit_abs'] < 0, 'profit_abs'].sum())
nb_win_trades = len(trades.loc[trades['profit_abs'] > 0])
nb_loss_trades = len(trades.loc[trades['profit_abs'] < 0])
if len(trades) > 0:
winning_trades = trades.loc[trades['profit_abs'] > 0]
losing_trades = trades.loc[trades['profit_abs'] < 0]
profit_sum = winning_trades['profit_abs'].sum()
loss_sum = abs(losing_trades['profit_abs'].sum())
nb_win_trades = len(winning_trades)
nb_loss_trades = len(losing_trades)
if (nb_win_trades > 0) and (nb_loss_trades > 0):
average_win = profit_sum / nb_win_trades
average_loss = loss_sum / nb_loss_trades
risk_reward_ratio = average_win / average_loss
winrate = nb_win_trades / len(trades)
expectancy = ((1 + risk_reward_ratio) * winrate) - 1
elif nb_win_trades == 0:
expectancy = 0
average_win = (profit_sum / nb_win_trades) if nb_win_trades > 0 else 0
average_loss = (loss_sum / nb_loss_trades) if nb_loss_trades > 0 else 0
winrate = (nb_win_trades / len(trades))
loserate = (nb_loss_trades / len(trades))
return expectancy
expectancy = (winrate * average_win) - (loserate * average_loss)
if (average_loss > 0):
risk_reward_ratio = average_win / average_loss
expectancy_ratio = ((1 + risk_reward_ratio) * winrate) - 1
return expectancy, expectancy_ratio
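A quick sanity check of the new tuple return with three made-up trades (two winners of +10, one loser of -5), assuming the module layout `freqtrade.data.metrics`:

```python
import pandas as pd

from freqtrade.data.metrics import calculate_expectancy

trades = pd.DataFrame({"profit_abs": [10.0, 10.0, -5.0]})
expectancy, expectancy_ratio = calculate_expectancy(trades)
# expectancy       = (2/3 * 10) - (1/3 * 5) = 5.0
# expectancy_ratio = ((1 + 10/5) * 2/3) - 1 = 1.0
```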
def calculate_sortino(trades: pd.DataFrame, min_date: datetime, max_date: datetime,


@ -115,7 +115,7 @@ class Edge:
exchange=self.exchange,
timeframe=self.strategy.timeframe,
timerange=timerange_startup,
data_format=self.config.get('dataformat_ohlcv', 'json'),
data_format=self.config['dataformat_ohlcv'],
candle_type=self.config.get('candle_type_def', CandleType.SPOT),
)
# Download informative pairs too
@ -132,7 +132,7 @@ class Edge:
exchange=self.exchange,
timeframe=timeframe,
timerange=timerange_startup,
data_format=self.config.get('dataformat_ohlcv', 'json'),
data_format=self.config['dataformat_ohlcv'],
candle_type=self.config.get('candle_type_def', CandleType.SPOT),
)
@ -142,7 +142,7 @@ class Edge:
timeframe=self.strategy.timeframe,
timerange=self._timerange,
startup_candles=self.strategy.startup_candle_count,
data_format=self.config.get('dataformat_ohlcv', 'json'),
data_format=self.config['dataformat_ohlcv'],
candle_type=self.config.get('candle_type_def', CandleType.SPOT),
)
@ -172,13 +172,7 @@ class Edge:
pair_data = pair_data.sort_values(by=['date'])
pair_data = pair_data.reset_index(drop=True)
df_analyzed = self.strategy.advise_exit(
dataframe=self.strategy.advise_entry(
dataframe=pair_data,
metadata={'pair': pair}
),
metadata={'pair': pair}
)[headers].copy()
df_analyzed = self.strategy.ft_advise_signals(pair_data, {'pair': pair})[headers].copy()
trades += self._find_trades_for_stoploss_range(df_analyzed, pair, self._stoploss_range)


@ -34,6 +34,7 @@ class Binance(Exchange):
"tickers_have_price": False,
"floor_leverage": True,
"stop_price_type_field": "workingType",
"order_props_in_contracts": ['amount', 'cost', 'filled', 'remaining'],
"stop_price_type_value_mapping": {
PriceType.LAST: "CONTRACT_PRICE",
PriceType.MARK: "MARK_PRICE",

File diff suppressed because it is too large.


@ -80,9 +80,8 @@ class Exchange:
"mark_ohlcv_price": "mark",
"mark_ohlcv_timeframe": "8h",
"ccxt_futures_name": "swap",
"fee_cost_in_contracts": False, # Fee cost needs contract conversion
"needs_trading_fees": False, # use fetch_trading_fees to cache fees
"order_props_in_contracts": ['amount', 'cost', 'filled', 'remaining'],
"order_props_in_contracts": ['amount', 'filled', 'remaining'],
# Override createMarketBuyOrderRequiresPrice where ccxt has it wrong
"marketOrderRequiresPrice": False,
}
@ -1859,9 +1858,6 @@ class Exchange:
if fee_curr is None:
return None
fee_cost = float(fee['cost'])
if self._ft_has['fee_cost_in_contracts']:
# Convert cost via "contracts" conversion
fee_cost = self._contracts_to_amount(symbol, fee['cost'])
# Calculate fee based on order details
if fee_curr == self.get_pair_base_currency(symbol):


@ -33,8 +33,6 @@ class Gate(Exchange):
_ft_has_futures: Dict = {
"needs_trading_fees": True,
"marketOrderRequiresPrice": False,
"fee_cost_in_contracts": False, # Set explicitly to false for clarity
"order_props_in_contracts": ['amount', 'filled', 'remaining'],
"stop_price_type_field": "price_type",
"stop_price_type_value_mapping": {
PriceType.LAST: 0,


@ -32,7 +32,6 @@ class Okx(Exchange):
}
_ft_has_futures: Dict = {
"tickers_have_quoteVolume": False,
"fee_cost_in_contracts": True,
"stop_price_type_field": "slTriggerPxType",
"stop_price_type_value_mapping": {
PriceType.LAST: "last",


@ -635,7 +635,7 @@ class FreqaiDataDrawer:
timeframe=tf,
pair=pair,
timerange=timerange,
data_format=self.config.get("dataformat_ohlcv", "json"),
data_format=self.config.get("dataformat_ohlcv", "feather"),
candle_type=self.config.get("candle_type_def", CandleType.SPOT),
)


@ -86,8 +86,6 @@ class IFreqaiModel(ABC):
logger.warning("DI threshold is not configured for Keras models yet. Deactivating.")
self.CONV_WIDTH = self.freqai_info.get('conv_width', 1)
if self.ft_params.get("inlier_metric_window", 0):
self.CONV_WIDTH = self.ft_params.get("inlier_metric_window", 0) * 2
self.class_names: List[str] = [] # used in classification subclasses
self.pair_it = 0
self.pair_it_train = 0
@ -676,15 +674,6 @@ class IFreqaiModel(ABC):
hist_preds_df['close_price'] = strat_df['close']
hist_preds_df['date_pred'] = strat_df['date']
# # for keras type models, the conv_window needs to be prepended so
# # viewing is correct in frequi
if self.ft_params.get('inlier_metric_window', 0):
n_lost_points = self.freqai_info.get('conv_width', 2)
zeros_df = DataFrame(np.zeros((n_lost_points, len(hist_preds_df.columns))),
columns=hist_preds_df.columns)
self.dd.historic_predictions[pair] = pd.concat(
[zeros_df, hist_preds_df], axis=0, ignore_index=True)
def fit_live_predictions(self, dk: FreqaiDataKitchen, pair: str) -> None:
"""
Fit the labels with a gaussian distribution


@ -32,8 +32,8 @@ class LightGBMClassifier(BaseClassifierModel):
eval_set = None
test_weights = None
else:
eval_set = (data_dictionary["test_features"].to_numpy(),
data_dictionary["test_labels"].to_numpy()[:, 0])
eval_set = [(data_dictionary["test_features"].to_numpy(),
data_dictionary["test_labels"].to_numpy()[:, 0])]
test_weights = data_dictionary["test_weights"]
X = data_dictionary["train_features"].to_numpy()
y = data_dictionary["train_labels"].to_numpy()[:, 0]
@ -42,7 +42,6 @@ class LightGBMClassifier(BaseClassifierModel):
init_model = self.get_init_model(dk.pair)
model = LGBMClassifier(**self.model_training_parameters)
model.fit(X=X, y=y, eval_set=eval_set, sample_weight=train_weights,
eval_sample_weight=[test_weights], init_model=init_model)


@ -32,7 +32,7 @@ class LightGBMRegressor(BaseRegressionModel):
eval_set = None
eval_weights = None
else:
eval_set = (data_dictionary["test_features"], data_dictionary["test_labels"])
eval_set = [(data_dictionary["test_features"], data_dictionary["test_labels"])]
eval_weights = data_dictionary["test_weights"]
X = data_dictionary["train_features"]
y = data_dictionary["train_labels"]
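The change here (and in the classifier above) matches LightGBM's sklearn-style `fit()` API, which expects `eval_set` as a list of `(X, y)` tuples rather than a bare tuple. A self-contained sketch with random data:

```python
import numpy as np
from lightgbm import LGBMRegressor

rng = np.random.default_rng(0)
X, y = rng.random((100, 4)), rng.random(100)
X_val, y_val = rng.random((20, 4)), rng.random(20)

model = LGBMRegressor(n_estimators=10)
# eval_set: one (features, labels) tuple per validation set.
model.fit(X, y, eval_set=[(X_val, y_val)])
```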


@ -42,10 +42,10 @@ class LightGBMRegressorMultiTarget(BaseRegressionModel):
eval_weights = [data_dictionary["test_weights"]]
eval_sets = [(None, None)] * data_dictionary['test_labels'].shape[1] # type: ignore
for i in range(data_dictionary['test_labels'].shape[1]):
eval_sets[i] = ( # type: ignore
eval_sets[i] = [( # type: ignore
data_dictionary["test_features"],
data_dictionary["test_labels"].iloc[:, i]
)
)]
init_model = self.get_init_model(dk.pair)
if init_model:


@ -50,7 +50,7 @@ def download_all_data_for_training(dp: DataProvider, config: Config) -> None:
timerange=timerange,
new_pairs_days=new_pairs_days,
erase=False,
data_format=config.get("dataformat_ohlcv", "json"),
data_format=config.get("dataformat_ohlcv", "feather"),
trading_mode=config.get("trading_mode", "spot"),
prepend=config.get("prepend_data", False),
)


@ -1383,7 +1383,10 @@ class FreqtradeBot(LoggingMixin):
latest_candle_close_date = timeframe_to_next_date(self.strategy.timeframe,
latest_candle_open_date)
# Check if new candle
if order_obj and latest_candle_close_date > order_obj.order_date_utc:
if (
order_obj and order_obj.side == trade.entry_side
and latest_candle_close_date > order_obj.order_date_utc
):
# New candle
proposed_rate = self.exchange.get_rate(
trade.pair, side='entry', is_short=trade.is_short, refresh=True)
@ -1939,6 +1942,7 @@ class FreqtradeBot(LoggingMixin):
"""
Applies the fee to amount (either from Order or from Trades).
Can eat into dust if more than the required asset is available.
In case of trade adjustment orders, trade.amount will not have been adjusted yet.
Can't happen in Futures mode - where Fees are always in settlement currency,
never in base currency.
"""
@ -1948,6 +1952,10 @@ class FreqtradeBot(LoggingMixin):
# check against remaining amount!
amount_ = trade.amount - amount
if trade.nr_of_successful_entries >= 1 and order_obj.ft_order_side == trade.entry_side:
# In case of rebuys, trade.amount doesn't contain the amount of the last entry.
amount_ = trade.amount + amount
if fee_abs != 0 and self.wallets.get_free(trade_base_currency) >= amount_:
# Eat into dust if we own more than base currency
logger.info(f"Fee amount for {trade} was in base currency - "
@ -1977,7 +1985,11 @@ class FreqtradeBot(LoggingMixin):
# Init variables
order_amount = safe_value_fallback(order, 'filled', 'amount')
# Only run for closed orders
if trade.fee_updated(order.get('side', '')) or order['status'] == 'open':
if (
trade.fee_updated(order.get('side', ''))
or order['status'] == 'open'
or order_obj.ft_fee_base
):
return None
trade_base_currency = self.exchange.get_pair_base_currency(trade.pair)


@ -116,6 +116,13 @@ def file_load_json(file: Path):
return pairdata
def is_file_in_dir(file: Path, directory: Path) -> bool:
"""
Helper function to check if file is in directory.
"""
return file.is_file() and file.parent.samefile(directory)
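A usage sketch (paths assumed): the helper guards that a candidate file really sits directly inside an expected directory, e.g. before deleting it:

```python
from pathlib import Path

from freqtrade.misc import is_file_in_dir

results_dir = Path("user_data/backtest_results")
candidate = results_dir / "backtest-result-2023-08-04_12-47-41.json"
if is_file_in_dir(candidate, results_dir):
    candidate.unlink()
```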
def pair_to_filename(pair: str) -> str:
for ch in ['/', ' ', '.', '@', '$', '+', ':']:
pair = pair.replace(ch, '_')


@ -39,6 +39,7 @@ from freqtrade.plugins.protectionmanager import ProtectionManager
from freqtrade.resolvers import ExchangeResolver, StrategyResolver
from freqtrade.strategy.interface import IStrategy
from freqtrade.strategy.strategy_wrapper import strategy_safe_wrapper
from freqtrade.types import BacktestResultType, get_BacktestResultType_default
from freqtrade.util.binance_mig import migrate_binance_futures_data
from freqtrade.wallets import Wallets
@ -77,7 +78,7 @@ class Backtesting:
LoggingMixin.show_output = False
self.config = config
self.results: Dict[str, Any] = {}
self.results: BacktestResultType = get_BacktestResultType_default()
self.trade_id_counter: int = 0
self.order_id_counter: int = 0
@ -239,7 +240,7 @@ class Backtesting:
timerange=self.timerange,
startup_candles=self.config['startup_candle_count'],
fail_without_data=True,
data_format=self.config.get('dataformat_ohlcv', 'json'),
data_format=self.config['dataformat_ohlcv'],
candle_type=self.config.get('candle_type_def', CandleType.SPOT)
)
@ -268,7 +269,7 @@ class Backtesting:
timerange=self.timerange,
startup_candles=0,
fail_without_data=True,
data_format=self.config.get('dataformat_ohlcv', 'json'),
data_format=self.config['dataformat_ohlcv'],
candle_type=self.config.get('candle_type_def', CandleType.SPOT)
)
else:
@ -282,7 +283,7 @@ class Backtesting:
timerange=self.timerange,
startup_candles=0,
fail_without_data=True,
data_format=self.config.get('dataformat_ohlcv', 'json'),
data_format=self.config['dataformat_ohlcv'],
candle_type=CandleType.FUNDING_RATE
)
@ -294,7 +295,7 @@ class Backtesting:
timerange=self.timerange,
startup_candles=0,
fail_without_data=True,
data_format=self.config.get('dataformat_ohlcv', 'json'),
data_format=self.config['dataformat_ohlcv'],
candle_type=CandleType.from_string(self.exchange.get_option("mark_ohlcv_price"))
)
# Combine data to avoid combining the data per trade.
@ -367,11 +368,7 @@ class Backtesting:
if not pair_data.empty:
# Cleanup from prior runs
pair_data.drop(HEADERS[5:] + ['buy', 'sell'], axis=1, errors='ignore')
df_analyzed = self.strategy.advise_exit(
self.strategy.advise_entry(pair_data, {'pair': pair}),
{'pair': pair}
).copy()
df_analyzed = self.strategy.ft_advise_signals(pair_data, {'pair': pair})
# Trim startup period from analyzed dataframe
df_analyzed = processed[pair] = pair_data = trim_dataframe(
df_analyzed, self.timerange, startup_candles=self.required_startup)
@ -679,6 +676,7 @@ class Backtesting:
remaining=amount,
cost=amount * close_rate,
)
order._trade_bt = trade
trade.orders.append(order)
return trade
@ -901,8 +899,9 @@ class Backtesting:
amount=amount,
filled=0,
remaining=amount,
cost=stake_amount + trade.fee_open,
cost=amount * propose_rate + trade.fee_open,
)
order._trade_bt = trade
trade.orders.append(order)
if pos_adjust and self._get_order_filled(order.ft_price, row):
order.close_bt_order(current_time, trade)
@ -1275,6 +1274,7 @@ class Backtesting:
preprocessed = self.strategy.advise_all_indicators(data)
# Trim startup period from analyzed dataframe
# This only used to determine if trimming would result in an empty dataframe
preprocessed_tmp = trim_dataframes(preprocessed, timerange, self.required_startup)
if not preprocessed_tmp:


@ -446,6 +446,8 @@ class Hyperopt:
preprocessed = self.backtesting.strategy.advise_all_indicators(data)
# Trim startup period from analyzed dataframe to get correct dates for output.
# This is only used to keep track of min/max date after trimming.
# The result is NOT returned from this method, actual trimming happens in backtesting.
trimmed = trim_dataframes(preprocessed, self.timerange, self.backtesting.required_startup)
self.min_date, self.max_date = get_timerange(trimmed)
if not self.market_change:


@ -432,12 +432,10 @@ class HyperoptTools:
for i in range(len(trials)):
if trials.loc[i]['is_profit']:
for j in range(len(trials.loc[i]) - 3):
trials.iat[i, j] = "{}{}{}".format(Fore.GREEN,
str(trials.loc[i][j]), Fore.RESET)
trials.iat[i, j] = f"{Fore.GREEN}{str(trials.loc[i][j])}{Fore.RESET}"
if trials.loc[i]['is_best'] and highlight_best:
for j in range(len(trials.loc[i]) - 3):
trials.iat[i, j] = "{}{}{}".format(Style.BRIGHT,
str(trials.loc[i][j]), Style.RESET_ALL)
trials.iat[i, j] = f"{Style.BRIGHT}{str(trials.loc[i][j])}{Style.RESET_ALL}"
trials = trials.drop(columns=['is_initial_point', 'is_best', 'is_profit', 'is_random'])
if remove_header > 0:


@ -1,5 +1,6 @@
# flake8: noqa: F401
from freqtrade.optimize.optimize_reports.bt_output import (generate_edge_table,
generate_wins_draws_losses,
show_backtest_result,
show_backtest_results,
show_sorted_pairlist,
@ -14,5 +15,4 @@ from freqtrade.optimize.optimize_reports.optimize_reports import (
generate_all_periodic_breakdown_stats, generate_backtest_stats, generate_daily_stats,
generate_exit_reason_stats, generate_pair_metrics, generate_periodic_breakdown_stats,
generate_rejected_signals, generate_strategy_comparison, generate_strategy_stats,
generate_tag_metrics, generate_trade_signal_candles, generate_trading_stats,
generate_wins_draws_losses)
generate_tag_metrics, generate_trade_signal_candles, generate_trading_stats)

View File

@ -5,8 +5,8 @@ from tabulate import tabulate
from freqtrade.constants import UNLIMITED_STAKE_AMOUNT, Config
from freqtrade.misc import decimals_per_coin, round_coin_value
from freqtrade.optimize.optimize_reports.optimize_reports import (generate_periodic_breakdown_stats,
generate_wins_draws_losses)
from freqtrade.optimize.optimize_reports.optimize_reports import generate_periodic_breakdown_stats
from freqtrade.types import BacktestResultType
logger = logging.getLogger(__name__)
@ -30,6 +30,16 @@ def _get_line_header(first_column: str, stake_currency: str,
'Win Draw Loss Win%']
def generate_wins_draws_losses(wins, draws, losses):
if wins > 0 and losses == 0:
wl_ratio = '100'
elif wins == 0:
wl_ratio = '0'
else:
wl_ratio = f'{100.0 / (wins + draws + losses) * wins:.1f}' if losses > 0 else '100'
return f'{wins:>4} {draws:>4} {losses:>4} {wl_ratio:>4}'
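Note: the helper above was moved here verbatim from optimize_reports.py (its removal is shown further below). A quick illustrative check of its fixed-width output, using made-up values:

# 7 wins, 1 draw, 2 losses -> win rate 100 / (7 + 1 + 2) * 7 = 70.0
assert generate_wins_draws_losses(7, 1, 2) == '   7    1    2 70.0'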
def text_table_bt_results(pair_results: List[Dict[str, Any]], stake_currency: str) -> str:
"""
Generates and returns a text table for the given backtest data and the results dataframe
@ -233,8 +243,9 @@ def text_table_add_metrics(strat_results: Dict) -> str:
('Calmar', f"{strat_results['calmar']:.2f}" if 'calmar' in strat_results else 'N/A'),
('Profit factor', f'{strat_results["profit_factor"]:.2f}' if 'profit_factor'
in strat_results else 'N/A'),
('Expectancy', f"{strat_results['expectancy']:.2f}" if 'expectancy'
in strat_results else 'N/A'),
('Expectancy (Ratio)', (
f"{strat_results['expectancy']:.2f} ({strat_results['expectancy_ratio']:.2f})" if
'expectancy_ratio' in strat_results else 'N/A')),
('Trades per day', strat_results['trades_per_day']),
('Avg. daily profit %',
f"{(strat_results['profit_total'] / strat_results['backtest_days']):.2%}"),
@ -260,6 +271,9 @@ def text_table_add_metrics(strat_results: Dict) -> str:
f"{strat_results['draw_days']} / {strat_results['losing_days']}"),
('Avg. Duration Winners', f"{strat_results['winner_holding_avg']}"),
('Avg. Duration Loser', f"{strat_results['loser_holding_avg']}"),
('Max Consecutive Wins / Loss',
f"{strat_results['max_consecutive_wins']} / {strat_results['max_consecutive_losses']}"
if 'max_consecutive_losses' in strat_results else 'N/A'),
('Rejected Entry signals', strat_results.get('rejected_signals', 'N/A')),
('Entry/Exit Timeouts',
f"{strat_results.get('timedout_entry_orders', 'N/A')} / "
@ -350,7 +364,7 @@ def show_backtest_result(strategy: str, results: Dict[str, Any], stake_currency:
print()
def show_backtest_results(config: Config, backtest_stats: Dict):
def show_backtest_results(config: Config, backtest_stats: BacktestResultType):
stake_currency = config['stake_currency']
for strategy, results in backtest_stats['strategy'].items():
@ -370,7 +384,7 @@ def show_backtest_results(config: Config, backtest_stats: Dict):
print('\nFor more details, please look at the detail tables above')
def show_sorted_pairlist(config: Config, backtest_stats: Dict):
def show_sorted_pairlist(config: Config, backtest_stats: BacktestResultType):
if config.get('backtest_show_pair_list', False):
for strategy, results in backtest_stats['strategy'].items():
print(f"Pairs for Strategy {strategy}: \n[")

View File

@ -2,18 +2,17 @@ import logging
from pathlib import Path
from typing import Dict
from pandas import DataFrame
from freqtrade.constants import LAST_BT_RESULT_FN
from freqtrade.misc import file_dump_joblib, file_dump_json
from freqtrade.optimize.backtest_caching import get_backtest_metadata_filename
from freqtrade.types import BacktestResultType
logger = logging.getLogger(__name__)
def store_backtest_stats(
recordfilename: Path, stats: Dict[str, DataFrame], dtappendix: str) -> None:
recordfilename: Path, stats: BacktestResultType, dtappendix: str) -> Path:
"""
Stores backtest results
:param recordfilename: Path object, which can either be a filename or a directory.
@ -31,13 +30,19 @@ def store_backtest_stats(
# Store metadata separately.
file_dump_json(get_backtest_metadata_filename(filename), stats['metadata'])
del stats['metadata']
# Don't mutate the original stats dict.
stats_copy = {
'strategy': stats['strategy'],
'strategy_comparison': stats['strategy_comparison'],
}
file_dump_json(filename, stats)
file_dump_json(filename, stats_copy)
latest_filename = Path.joinpath(filename.parent, LAST_BT_RESULT_FN)
file_dump_json(latest_filename, {'latest_backtest': str(filename.name)})
return filename
def _store_backtest_analysis_data(
recordfilename: Path, data: Dict[str, Dict],

View File

@ -1,15 +1,17 @@
import logging
from copy import deepcopy
from datetime import datetime, timedelta, timezone
from typing import Any, Dict, List, Union
from typing import Any, Dict, List, Tuple, Union
from pandas import DataFrame, concat, to_datetime
import numpy as np
from pandas import DataFrame, Series, concat, to_datetime
from freqtrade.constants import BACKTEST_BREAKDOWNS, DATETIME_PRINT_FORMAT, IntOrInf
from freqtrade.data.metrics import (calculate_cagr, calculate_calmar, calculate_csum,
calculate_expectancy, calculate_market_change,
calculate_max_drawdown, calculate_sharpe, calculate_sortino)
from freqtrade.misc import decimals_per_coin, round_coin_value
from freqtrade.types import BacktestResultType
logger = logging.getLogger(__name__)
@ -57,16 +59,6 @@ def generate_rejected_signals(preprocessed_df: Dict[str, DataFrame],
return rejected_candles_only
def generate_wins_draws_losses(wins, draws, losses):
if wins > 0 and losses == 0:
wl_ratio = '100'
elif wins == 0:
wl_ratio = '0'
else:
wl_ratio = f'{100.0 / (wins + draws + losses) * wins:.1f}' if losses > 0 else '100'
return f'{wins:>4} {draws:>4} {losses:>4} {wl_ratio:>4}'
def _generate_result_line(result: DataFrame, starting_balance: int, first_column: str) -> Dict:
"""
Generate one result dict, with "first_column" as key.
@ -97,6 +89,7 @@ def _generate_result_line(result: DataFrame, starting_balance: int, first_column
'wins': len(result[result['profit_abs'] > 0]),
'draws': len(result[result['profit_abs'] == 0]),
'losses': len(result[result['profit_abs'] < 0]),
'winrate': len(result[result['profit_abs'] > 0]) / len(result) if len(result) else 0.0,
}
@ -184,6 +177,7 @@ def generate_exit_reason_stats(max_open_trades: IntOrInf, results: DataFrame) ->
'wins': len(result[result['profit_abs'] > 0]),
'draws': len(result[result['profit_abs'] == 0]),
'losses': len(result[result['profit_abs'] < 0]),
'winrate': len(result[result['profit_abs'] > 0]) / count if count else 0.0,
'profit_mean': profit_mean,
'profit_mean_pct': round(profit_mean * 100, 2),
'profit_sum': profit_sum,
@ -238,6 +232,7 @@ def generate_periodic_breakdown_stats(trade_list: List, period: str) -> List[Dic
wins = sum(day['profit_abs'] > 0)
draws = sum(day['profit_abs'] == 0)
loses = sum(day['profit_abs'] < 0)
trades = (wins + draws + loses)
stats.append(
{
'date': name.strftime('%d/%m/%Y'),
@ -245,7 +240,8 @@ def generate_periodic_breakdown_stats(trade_list: List, period: str) -> List[Dic
'profit_abs': profit_abs,
'wins': wins,
'draws': draws,
'loses': loses
'loses': loses,
'winrate': wins / trades if trades else 0.0,
}
)
return stats
@ -258,6 +254,23 @@ def generate_all_periodic_breakdown_stats(trade_list: List) -> Dict[str, List]:
return result
def calc_streak(dataframe: DataFrame) -> Tuple[int, int]:
"""
Calculate consecutive win and loss streaks
:param dataframe: Dataframe of trades, which must contain a profit_ratio column
:return: Tuple containing consecutive wins and losses
"""
df = Series(np.where(dataframe['profit_ratio'] > 0, 'win', 'loss')).to_frame('result')
df['streaks'] = df['result'].ne(df['result'].shift()).cumsum().rename('streaks')
df['counter'] = df['streaks'].groupby(df['streaks']).cumcount() + 1
res = df.groupby(df['result']).max()
# Longest consecutive run per result type ('win' / 'loss')
cons_wins = int(res.loc['win', 'counter']) if 'win' in res.index else 0
cons_losses = int(res.loc['loss', 'counter']) if 'loss' in res.index else 0
return cons_wins, cons_losses
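A minimal usage sketch for the new helper, with calc_streak from above in scope (the same values appear in the unit test added further below):

import pandas as pd
# win, loss, loss, loss, win, win, win, win, loss, loss -> longest streaks: 4 wins, 3 losses
df = pd.DataFrame({'profit_ratio': [0.05, -0.02, -0.03, -0.05, 0.01, 0.02, 0.03, 0.04, -0.02, -0.03]})
assert calc_streak(df) == (4, 3)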
def generate_trading_stats(results: DataFrame) -> Dict[str, Any]:
""" Generate overall trade statistics """
if len(results) == 0:
@ -265,9 +278,12 @@ def generate_trading_stats(results: DataFrame) -> Dict[str, Any]:
'wins': 0,
'losses': 0,
'draws': 0,
'winrate': 0,
'holding_avg': timedelta(),
'winner_holding_avg': timedelta(),
'loser_holding_avg': timedelta(),
'max_consecutive_wins': 0,
'max_consecutive_losses': 0,
}
winning_trades = results.loc[results['profit_ratio'] > 0]
@ -280,17 +296,21 @@ def generate_trading_stats(results: DataFrame) -> Dict[str, Any]:
if not winning_trades.empty else timedelta())
loser_holding_avg = (timedelta(minutes=round(losing_trades['trade_duration'].mean()))
if not losing_trades.empty else timedelta())
winstreak, loss_streak = calc_streak(results)
return {
'wins': len(winning_trades),
'losses': len(losing_trades),
'draws': len(draw_trades),
'winrate': len(winning_trades) / len(results) if len(results) else 0.0,
'holding_avg': holding_avg,
'holding_avg_s': holding_avg.total_seconds(),
'winner_holding_avg': winner_holding_avg,
'winner_holding_avg_s': winner_holding_avg.total_seconds(),
'loser_holding_avg': loser_holding_avg,
'loser_holding_avg_s': loser_holding_avg.total_seconds(),
'max_consecutive_wins': winstreak,
'max_consecutive_losses': loss_streak,
}
@ -383,6 +403,7 @@ def generate_strategy_stats(pairlist: List[str],
losing_profit = results.loc[results['profit_abs'] < 0, 'profit_abs'].sum()
profit_factor = winning_profit / abs(losing_profit) if losing_profit else 0.0
expectancy, expectancy_ratio = calculate_expectancy(results)
backtest_days = (max_date - min_date).days or 1
strat_stats = {
'trades': results.to_dict(orient='records'),
@ -408,7 +429,8 @@ def generate_strategy_stats(pairlist: List[str],
'profit_total_long_abs': results.loc[~results['is_short'], 'profit_abs'].sum(),
'profit_total_short_abs': results.loc[results['is_short'], 'profit_abs'].sum(),
'cagr': calculate_cagr(backtest_days, start_balance, content['final_balance']),
'expectancy': calculate_expectancy(results),
'expectancy': expectancy,
'expectancy_ratio': expectancy_ratio,
'sortino': calculate_sortino(results, min_date, max_date, start_balance),
'sharpe': calculate_sharpe(results, min_date, max_date, start_balance),
'calmar': calculate_calmar(results, min_date, max_date, start_balance),
@ -514,7 +536,7 @@ def generate_strategy_stats(pairlist: List[str],
def generate_backtest_stats(btdata: Dict[str, DataFrame],
all_results: Dict[str, Dict[str, Union[DataFrame, Dict]]],
min_date: datetime, max_date: datetime
) -> Dict[str, Any]:
) -> BacktestResultType:
"""
:param btdata: Backtest data
:param all_results: backtest result - dictionary in the form:
@ -523,7 +545,7 @@ def generate_backtest_stats(btdata: Dict[str, DataFrame],
:param max_date: Backtest end date
:return: Dictionary containing results per strategy and a strategy summary.
"""
result: Dict[str, Any] = {
result: BacktestResultType = {
'metadata': {},
'strategy': {},
'strategy_comparison': [],

View File

@ -38,6 +38,7 @@ class Order(ModelBase):
Mirrors CCXT Order structure
"""
__tablename__ = 'orders'
__allow_unmapped__ = True
session: ClassVar[SessionType]
# Uniqueness should be ensured over pair, order_id
@ -47,7 +48,8 @@ class Order(ModelBase):
id: Mapped[int] = mapped_column(Integer, primary_key=True)
ft_trade_id: Mapped[int] = mapped_column(Integer, ForeignKey('trades.id'), index=True)
trade: Mapped["Trade"] = relationship("Trade", back_populates="orders")
_trade_live: Mapped["Trade"] = relationship("Trade", back_populates="orders")
_trade_bt: "LocalTrade" = None # type: ignore
# order_side can only be 'buy', 'sell' or 'stoploss'
ft_order_side: Mapped[str] = mapped_column(String(25), nullable=False)
@ -119,6 +121,15 @@ class Order(ModelBase):
def safe_amount_after_fee(self) -> float:
return self.safe_filled - self.safe_fee_base
@property
def trade(self) -> "LocalTrade":
return self._trade_bt or self._trade_live
@property
def stake_amount(self) -> float:
""" Amount in stake currency used for this order"""
return self.safe_amount * self.safe_price / self.trade.leverage
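Worked example with illustrative numbers: for an order with safe_amount=100 and safe_price=0.5 on a trade at 5x leverage, the new property yields stake_amount = 100 * 0.5 / 5 = 10.0 in stake currency.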
def __repr__(self):
return (f"Order(id={self.id}, trade={self.ft_trade_id}, order_id={self.order_id}, "
@ -1299,9 +1310,12 @@ class Trade(ModelBase, LocalTrade):
Float(), nullable=True, default=None) # type: ignore
def __init__(self, **kwargs):
from_json = kwargs.pop('__FROM_JSON', None)
super().__init__(**kwargs)
self.realized_profit = 0
self.recalc_open_trade_value()
if not from_json:
# Skip recalculation when loading from json
self.realized_profit = 0
self.recalc_open_trade_value()
@validates('enter_tag', 'exit_reason')
def validate_string_len(self, key, value):
@ -1655,6 +1669,7 @@ class Trade(ModelBase, LocalTrade):
import rapidjson
data = rapidjson.loads(json_str)
trade = cls(
__FROM_JSON=True,
id=data["trade_id"],
pair=data["pair"],
base_currency=data["base_currency"],

View File

@ -55,7 +55,7 @@ def init_plotscript(config, markets: List, startup_candles: int = 0):
timeframe=config['timeframe'],
timerange=timerange,
startup_candles=startup_candles,
data_format=config.get('dataformat_ohlcv', 'json'),
data_format=config['dataformat_ohlcv'],
candle_type=config.get('candle_type_def', CandleType.SPOT)
)
@ -84,7 +84,7 @@ def init_plotscript(config, markets: List, startup_candles: int = 0):
except ValueError as e:
raise OperationalException(e) from e
if not trades.empty:
trades = trim_dataframe(trades, timerange, 'open_date')
trades = trim_dataframe(trades, timerange, df_date_col='open_date')
return {"ohlcv": data,
"trades": trades,

View File

@ -3,15 +3,16 @@ Remote PairList provider
Provides pair list fetched from a remote source
"""
import json
import logging
from pathlib import Path
from typing import Any, Dict, List, Tuple
import rapidjson
import requests
from cachetools import TTLCache
from freqtrade import __version__
from freqtrade.configuration.load_config import CONFIG_PARSE_MODE
from freqtrade.constants import Config
from freqtrade.exceptions import OperationalException
from freqtrade.exchange.types import Tickers
@ -236,7 +237,7 @@ class RemotePairList(IPairList):
if file_path.exists():
with file_path.open() as json_file:
# Load the JSON data into a dictionary
jsonparse = json.load(json_file)
jsonparse = rapidjson.load(json_file, parse_mode=CONFIG_PARSE_MODE)
try:
pairlist = self.process_json(jsonparse)

View File

@ -2,6 +2,7 @@ import asyncio
import logging
from copy import deepcopy
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, List
from fastapi import APIRouter, BackgroundTasks, Depends
@ -9,16 +10,19 @@ from fastapi.exceptions import HTTPException
from freqtrade.configuration.config_validation import validate_config_consistency
from freqtrade.constants import Config
from freqtrade.data.btanalysis import get_backtest_resultlist, load_and_merge_backtest_result
from freqtrade.data.btanalysis import (delete_backtest_result, get_backtest_result,
get_backtest_resultlist, load_and_merge_backtest_result,
update_backtest_metadata)
from freqtrade.enums import BacktestState
from freqtrade.exceptions import DependencyException, OperationalException
from freqtrade.exchange.common import remove_exchange_credentials
from freqtrade.misc import deep_merge_dicts
from freqtrade.rpc.api_server.api_schemas import (BacktestHistoryEntry, BacktestRequest,
BacktestResponse)
from freqtrade.misc import deep_merge_dicts, is_file_in_dir
from freqtrade.rpc.api_server.api_schemas import (BacktestHistoryEntry, BacktestMetadataUpdate,
BacktestRequest, BacktestResponse)
from freqtrade.rpc.api_server.deps import get_config
from freqtrade.rpc.api_server.webserver_bgwork import ApiBG
from freqtrade.rpc.rpc import RPCException
from freqtrade.types import get_BacktestResultType_default
logger = logging.getLogger(__name__)
@ -67,14 +71,15 @@ def __run_backtest_bg(btconfig: Config):
ApiBG.bt['bt'].enable_protections = btconfig.get('enable_protections', False)
ApiBG.bt['bt'].strategylist = [strat]
ApiBG.bt['bt'].results = {}
ApiBG.bt['bt'].results = get_BacktestResultType_default()
ApiBG.bt['bt'].load_prior_backtest()
ApiBG.bt['bt'].abort = False
strategy_name = strat.get_strategy_name()
if (ApiBG.bt['bt'].results and
strat.get_strategy_name() in ApiBG.bt['bt'].results['strategy']):
strategy_name in ApiBG.bt['bt'].results['strategy']):
# When previous result hash matches - reuse that result and skip backtesting.
logger.info(f'Reusing result of previous backtest for {strat.get_strategy_name()}')
logger.info(f'Reusing result of previous backtest for {strategy_name}')
else:
min_date, max_date = ApiBG.bt['bt'].backtest_one_strategy(
strat, ApiBG.bt['data'], ApiBG.bt['timerange'])
@ -84,10 +89,12 @@ def __run_backtest_bg(btconfig: Config):
min_date=min_date, max_date=max_date)
if btconfig.get('export', 'none') == 'trades':
store_backtest_stats(
fn = store_backtest_stats(
btconfig['exportfilename'], ApiBG.bt['bt'].results,
datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
)
ApiBG.bt['bt'].results['metadata'][strategy_name]['filename'] = str(fn.name)
ApiBG.bt['bt'].results['metadata'][strategy_name]['strategy'] = strategy_name
logger.info("Backtest finished.")
@ -245,13 +252,16 @@ def api_backtest_history(config=Depends(get_config)):
tags=['webserver', 'backtest'])
def api_backtest_history_result(filename: str, strategy: str, config=Depends(get_config)):
# Get backtest result history, read from metadata files
fn = config['user_data_dir'] / 'backtest_results' / filename
bt_results_base: Path = config['user_data_dir'] / 'backtest_results'
fn = (bt_results_base / filename).with_suffix('.json')
results: Dict[str, Any] = {
'metadata': {},
'strategy': {},
'strategy_comparison': [],
}
if not is_file_in_dir(fn, bt_results_base):
raise HTTPException(status_code=404, detail="File not found.")
load_and_merge_backtest_result(strategy, fn, results)
return {
"status": "ended",
@ -261,3 +271,38 @@ def api_backtest_history_result(filename: str, strategy: str, config=Depends(get
"status_msg": "Historic result",
"backtest_result": results,
}
@router.delete('/backtest/history/{file}', response_model=List[BacktestHistoryEntry],
tags=['webserver', 'backtest'])
def api_delete_backtest_history_entry(file: str, config=Depends(get_config)):
# Get backtest result history, read from metadata files
bt_results_base: Path = config['user_data_dir'] / 'backtest_results'
file_abs = (bt_results_base / file).with_suffix('.json')
# Ensure file is in backtest_results directory
if not is_file_in_dir(file_abs, bt_results_base):
raise HTTPException(status_code=404, detail="File not found.")
delete_backtest_result(file_abs)
return get_backtest_resultlist(config['user_data_dir'] / 'backtest_results')
@router.patch('/backtest/history/{file}', response_model=List[BacktestHistoryEntry],
tags=['webserver', 'backtest'])
def api_update_backtest_history_entry(file: str, body: BacktestMetadataUpdate,
config=Depends(get_config)):
# Get backtest result history, read from metadata files
bt_results_base: Path = config['user_data_dir'] / 'backtest_results'
file_abs = (bt_results_base / file).with_suffix('.json')
# Ensure file is in backtest_results directory
if not is_file_in_dir(file_abs, bt_results_base):
raise HTTPException(status_code=404, detail="File not found.")
content = {
'notes': body.notes
}
try:
update_backtest_metadata(file_abs, body.strategy, content)
except ValueError as e:
raise HTTPException(status_code=400, detail=str(e))
return get_backtest_result(file_abs)
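A rough client-side sketch for the two new endpoints (assuming a locally running webserver with the usual /api/v1 prefix and basic-auth credentials; host, credentials and filenames are illustrative):

import requests

auth = ('freqtrader', 'password')
base = 'http://127.0.0.1:8080/api/v1/backtest/history'
# Remove a stored result by its filename stem (the .json suffix is appended server-side)
requests.delete(f'{base}/backtest-result-2023-08-04', auth=auth).raise_for_status()
# Attach notes to one strategy's result in another stored file
requests.patch(f'{base}/backtest-result-2023-08-03',
               json={'strategy': 'MyStrategy', 'notes': 'baseline run'}, auth=auth)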

View File

@ -136,6 +136,9 @@ class Profit(BaseModel):
winning_trades: int
losing_trades: int
profit_factor: float
winrate: float
expectancy: float
expectancy_ratio: float
max_drawdown: float
max_drawdown_abs: float
trading_volume: Optional[float]
@ -517,11 +520,18 @@ class BacktestResponse(BaseModel):
backtest_result: Optional[Dict[str, Any]]
# TODO: This is a copy of BacktestHistoryEntryType
class BacktestHistoryEntry(BaseModel):
filename: str
strategy: str
run_id: str
backtest_start_time: int
notes: Optional[str] = ''
class BacktestMetadataUpdate(BaseModel):
strategy: str
notes: str = ''
class SysInfo(BaseModel):

View File

@ -49,7 +49,9 @@ logger = logging.getLogger(__name__)
# 2.28: Switch reload endpoint to Post
# 2.29: Add /exchanges endpoint
# 2.30: new /pairlists endpoint
API_VERSION = 2.30
# 2.31: new /backtest/history/ delete endpoint
# 2.32: new /backtest/history/ patch endpoint
API_VERSION = 2.32
# Public API, requires no auth.
router_public = APIRouter()
@ -268,7 +270,10 @@ def pair_history(pair: str, timeframe: str, timerange: str, strategy: str,
'timerange': timerange,
'freqaimodel': freqaimodel if freqaimodel else config.get('freqaimodel'),
})
return RPC._rpc_analysed_history_full(config, pair, timeframe, exchange)
try:
return RPC._rpc_analysed_history_full(config, pair, timeframe, exchange)
except Exception as e:
raise HTTPException(status_code=502, detail=str(e))
@router.get('/plot_config', response_model=PlotConfig, tags=['candle data'])
@ -283,7 +288,10 @@ def plot_config(strategy: Optional[str] = None, config=Depends(get_config),
config1.update({
'strategy': strategy
})
try:
return PlotConfig.parse_obj(RPC._rpc_plot_config_with_strategy(config1))
except Exception as e:
raise HTTPException(status_code=502, detail=str(e))
@router.get('/strategies', response_model=StrategyListResponse, tags=['strategy'])
@ -308,7 +316,8 @@ def get_strategy(strategy: str, config=Depends(get_config)):
extra_dir=config_.get('strategy_path'))
except OperationalException:
raise HTTPException(status_code=404, detail='Strategy not found')
except Exception as e:
raise HTTPException(status_code=502, detail=str(e))
return {
'strategy': strategy_obj.get_strategy_name(),
'code': strategy_obj.__source__,

View File

@ -30,7 +30,7 @@ async def ui_version():
}
def is_relative_to(path, base) -> bool:
def is_relative_to(path: Path, base: Path) -> bool:
# Helper function simulating behaviour of is_relative_to, which was only added in Python 3.9
try:
path.relative_to(base)

View File

@ -8,6 +8,7 @@ from fastapi import Depends, FastAPI
from fastapi.middleware.cors import CORSMiddleware
from starlette.responses import JSONResponse
from freqtrade.configuration import running_in_docker
from freqtrade.constants import Config
from freqtrade.exceptions import OperationalException
from freqtrade.rpc.api_server.uvicorn_threaded import UvicornServer
@ -182,7 +183,7 @@ class ApiServer(RPCHandler):
rest_port = self._config['api_server']['listen_port']
logger.info(f'Starting HTTP Server at {rest_ip}:{rest_port}')
if not IPv4Address(rest_ip).is_loopback:
if not IPv4Address(rest_ip).is_loopback and not running_in_docker():
logger.warning("SECURITY WARNING - Local Rest Server listening to external connections")
logger.warning("SECURITY WARNING - This is insecure please set to your loopback,"
"e.g 127.0.0.1 in config.json")

View File

@ -25,6 +25,7 @@ coingecko_mapping = {
'bnb': 'binancecoin',
'sol': 'solana',
'usdt': 'tether',
'busd': 'binance-usd',
}

View File

@ -18,7 +18,7 @@ from freqtrade import __version__
from freqtrade.configuration.timerange import TimeRange
from freqtrade.constants import CANCEL_REASON, DATETIME_PRINT_FORMAT, Config
from freqtrade.data.history import load_data
from freqtrade.data.metrics import calculate_max_drawdown
from freqtrade.data.metrics import calculate_expectancy, calculate_max_drawdown
from freqtrade.enums import (CandleType, ExitCheckTuple, ExitType, MarketDirection, SignalDirection,
State, TradingMode)
from freqtrade.exceptions import ExchangeError, PricingError
@ -494,6 +494,8 @@ class RPC:
profit_all_coin.append(profit_abs)
profit_all_ratio.append(profit_ratio)
closed_trade_count = len([t for t in trades if not t.is_open])
best_pair = Trade.get_best_pair(start_date)
trading_volume = Trade.get_trading_volume(start_date)
@ -521,9 +523,14 @@ class RPC:
profit_factor = winning_profit / abs(losing_profit) if losing_profit else float('inf')
winrate = (winning_trades / closed_trade_count) if closed_trade_count > 0 else 0
trades_df = DataFrame([{'close_date': trade.close_date.strftime(DATETIME_PRINT_FORMAT),
'profit_abs': trade.close_profit_abs}
for trade in trades if not trade.is_open and trade.close_date])
expectancy, expectancy_ratio = calculate_expectancy(trades_df)
max_drawdown_abs = 0.0
max_drawdown = 0.0
if len(trades_df) > 0:
@ -562,7 +569,7 @@ class RPC:
'profit_all_percent': round(profit_all_ratio_fromstart * 100, 2),
'profit_all_fiat': profit_all_fiat,
'trade_count': len(trades),
'closed_trade_count': len([t for t in trades if not t.is_open]),
'closed_trade_count': closed_trade_count,
'first_trade_date': first_date.strftime(DATETIME_PRINT_FORMAT) if first_date else '',
'first_trade_humanized': dt_humanize(first_date) if first_date else '',
'first_trade_timestamp': int(first_date.timestamp() * 1000) if first_date else 0,
@ -576,6 +583,9 @@ class RPC:
'winning_trades': winning_trades,
'losing_trades': losing_trades,
'profit_factor': profit_factor,
'winrate': winrate,
'expectancy': expectancy,
'expectancy_ratio': expectancy_ratio,
'max_drawdown': max_drawdown,
'max_drawdown_abs': max_drawdown_abs,
'trading_volume': trading_volume,
@ -1169,8 +1179,8 @@ class RPC:
""" Analyzed dataframe in Dict form """
_data, last_analyzed = self.__rpc_analysed_dataframe_raw(pair, timeframe, limit)
return self._convert_dataframe_to_dict(self._freqtrade.config['strategy'],
pair, timeframe, _data, last_analyzed)
return RPC._convert_dataframe_to_dict(self._freqtrade.config['strategy'],
pair, timeframe, _data, last_analyzed)
def __rpc_analysed_dataframe_raw(
self,
@ -1240,27 +1250,34 @@ class RPC:
exchange) -> Dict[str, Any]:
timerange_parsed = TimeRange.parse_timerange(config.get('timerange'))
from freqtrade.data.converter import trim_dataframe
from freqtrade.data.dataprovider import DataProvider
from freqtrade.resolvers.strategy_resolver import StrategyResolver
strategy = StrategyResolver.load_strategy(config)
startup_candles = strategy.startup_candle_count
_data = load_data(
datadir=config["datadir"],
pairs=[pair],
timeframe=timeframe,
timerange=timerange_parsed,
data_format=config.get('dataformat_ohlcv', 'json'),
candle_type=config.get('candle_type_def', CandleType.SPOT)
data_format=config['dataformat_ohlcv'],
candle_type=config.get('candle_type_def', CandleType.SPOT),
startup_candles=startup_candles,
)
if pair not in _data:
raise RPCException(
f"No data for {pair}, {timeframe} in {config.get('timerange')} found.")
from freqtrade.data.dataprovider import DataProvider
from freqtrade.resolvers.strategy_resolver import StrategyResolver
strategy = StrategyResolver.load_strategy(config)
strategy.dp = DataProvider(config, exchange=exchange, pairlists=None)
strategy.ft_bot_start()
df_analyzed = strategy.analyze_ticker(_data[pair], {'pair': pair})
df_analyzed = trim_dataframe(df_analyzed, timerange_parsed, startup_candles=startup_candles)
return RPC._convert_dataframe_to_dict(strategy.get_strategy_name(), pair, timeframe,
df_analyzed, dt_now())
df_analyzed.copy(), dt_now())
def _rpc_plot_config(self) -> Dict[str, Any]:
if (self._freqtrade.strategy.plot_config and

View File

@ -849,6 +849,10 @@ class Telegram(RPCHandler):
avg_duration = stats['avg_duration']
best_pair = stats['best_pair']
best_pair_profit_ratio = stats['best_pair_profit_ratio']
winrate = stats['winrate']
expectancy = stats['expectancy']
expectancy_ratio = stats['expectancy_ratio']
if stats['trade_count'] == 0:
markdown_msg = f"No trades yet.\n*Bot started:* `{stats['bot_start_date']}`"
else:
@ -873,7 +877,9 @@ class Telegram(RPCHandler):
f"*{'First Trade opened' if not timescale else 'Showing Profit since'}:* "
f"`{first_trade_date}`\n"
f"*Latest Trade opened:* `{latest_trade_date}`\n"
f"*Win / Loss:* `{stats['winning_trades']} / {stats['losing_trades']}`"
f"*Win / Loss:* `{stats['winning_trades']} / {stats['losing_trades']}`\n"
f"*Winrate:* `{winrate:.2%}`\n"
f"*Expectancy (Ratio):* `{expectancy:.2f} ({expectancy_ratio:.2f})`"
)
if stats['closed_trade_count'] > 0:
markdown_msg += (

View File

@ -34,6 +34,7 @@ class Webhook(RPCHandler):
self._format = self._config['webhook'].get('format', 'form')
self._retries = self._config['webhook'].get('retries', 0)
self._retry_delay = self._config['webhook'].get('retry_delay', 0.1)
self._timeout = self._config['webhook'].get('timeout', 10)
def cleanup(self) -> None:
"""
@ -107,12 +108,13 @@ class Webhook(RPCHandler):
try:
if self._format == 'form':
response = post(self._url, data=payload)
response = post(self._url, data=payload, timeout=self._timeout)
elif self._format == 'json':
response = post(self._url, json=payload)
response = post(self._url, json=payload, timeout=self._timeout)
elif self._format == 'raw':
response = post(self._url, data=payload['data'],
headers={'Content-Type': 'text/plain'})
headers={'Content-Type': 'text/plain'},
timeout=self._timeout)
else:
raise NotImplementedError(f'Unknown format: {self._format}')
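The new timeout is read from the webhook config block; a minimal sketch of the relevant settings as a Python dict (values illustrative, 'enabled' and 'url' are the usual webhook keys, the rest appear in the hunk above):

config['webhook'] = {
    'enabled': True,
    'url': 'https://example.com/hook',
    'format': 'json',      # 'form', 'json' or 'raw'
    'retries': 0,
    'retry_delay': 0.1,
    'timeout': 10,         # new: seconds before the POST gives up
}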

View File

@ -825,6 +825,7 @@ class IStrategy(ABC, HyperStrategyMixin):
"""
Parses the given candle (OHLCV) data and returns a populated DataFrame
add several TA indicators and entry order signal to it
Should only be used in live mode.
:param dataframe: Dataframe containing data from exchange
:param metadata: Metadata dictionary with additional data (e.g. 'pair')
:return: DataFrame of candle (OHLCV) data with indicator data and signals added
@ -1321,6 +1322,20 @@ class IStrategy(ABC, HyperStrategyMixin):
return {pair: self.advise_indicators(pair_data.copy(), {'pair': pair}).copy()
for pair, pair_data in data.items()}
def ft_advise_signals(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
"""
Call advise_entry and advise_exit and return the resulting dataframe.
:param dataframe: Dataframe containing data from exchange, as well as pre-calculated
indicators
:param metadata: Metadata dictionary with additional data (e.g. 'pair')
:return: DataFrame of candle (OHLCV) data with indicator data and signals added
"""
dataframe = self.advise_entry(dataframe, metadata)
dataframe = self.advise_exit(dataframe, metadata)
return dataframe
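Backtesting now routes signal generation through this wrapper (see the Backtesting hunk above); for a strategy instance the call is equivalent to chaining the two advise methods:

df = strategy.ft_advise_signals(pair_data, {'pair': pair})
# ... equivalent to:
df = strategy.advise_exit(strategy.advise_entry(pair_data, {'pair': pair}), {'pair': pair})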
def advise_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
"""
Populate indicators that will be used in the Buy, Sell, short, exit_short strategy

View File

@ -1 +1,5 @@
from freqtrade.types.valid_exchanges_type import ValidExchangesType # noqa: F401
# flake8: noqa: F401
from freqtrade.types.backtest_result_type import (BacktestHistoryEntryType, BacktestMetadataType,
BacktestResultType,
get_BacktestResultType_default)
from freqtrade.types.valid_exchanges_type import ValidExchangesType

View File

@ -0,0 +1,28 @@
from typing import Any, Dict, List
from typing_extensions import TypedDict
class BacktestMetadataType(TypedDict):
run_id: str
backtest_start_time: int
class BacktestResultType(TypedDict):
metadata: Dict[str, Any] # BacktestMetadataType
strategy: Dict[str, Any]
strategy_comparison: List[Any]
def get_BacktestResultType_default() -> BacktestResultType:
return {
'metadata': {},
'strategy': {},
'strategy_comparison': [],
}
class BacktestHistoryEntryType(BacktestMetadataType):
filename: str
strategy: str
notes: str
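A short usage sketch for the new typed container (field values illustrative):

from freqtrade.types import BacktestResultType, get_BacktestResultType_default

results: BacktestResultType = get_BacktestResultType_default()
results['metadata']['MyStrategy'] = {'run_id': 'abc123', 'backtest_start_time': 1690000000}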

View File

@ -64,7 +64,7 @@ def migrate_binance_futures_data(config: Config):
return
from freqtrade.data.history.idatahandler import get_datahandler
dhc = get_datahandler(config['datadir'], config.get('dataformat_ohlcv', 'json'))
dhc = get_datahandler(config['datadir'], config['dataformat_ohlcv'])
paircombs = dhc.ohlcv_get_available_data(
config['datadir'],

View File

@ -84,6 +84,7 @@ class Wallets:
tot_profit = Trade.get_total_closed_profit()
else:
tot_profit = LocalTrade.total_profit
tot_profit += sum(trade.realized_profit for trade in open_trades)
tot_in_trades = sum(trade.stake_amount for trade in open_trades)
used_stake = 0.0

View File

@ -7,11 +7,11 @@
-r docs/requirements-docs.txt
coveralls==3.3.1
ruff==0.0.277
ruff==0.0.282
mypy==1.4.1
pre-commit==3.3.3
pytest==7.4.0
pytest-asyncio==0.21.0
pytest-asyncio==0.21.1
pytest-cov==4.1.0
pytest-mock==3.11.1
pytest-random-order==1.1.0
@ -20,11 +20,11 @@ isort==5.12.0
time-machine==2.11.0
# Convert jupyter notebooks to markdown documents
nbconvert==7.6.0
nbconvert==7.7.3
# mypy types
types-cachetools==5.3.0.5
types-cachetools==5.3.0.6
types-filelock==3.2.7
types-requests==2.31.0.1
types-tabulate==0.9.0.2
types-python-dateutil==2.8.19.13
types-requests==2.31.0.2
types-tabulate==0.9.0.3
types-python-dateutil==2.8.19.14

View File

@ -6,7 +6,7 @@
scikit-learn==1.1.3
joblib==1.3.1
catboost==1.2; 'arm' not in platform_machine
lightgbm==3.3.5
lightgbm==4.0.0
xgboost==1.7.6
tensorboard==2.13.0
datasieve==0.1.7

View File

@ -1,22 +1,22 @@
numpy==1.25.1; python_version > '3.8'
numpy==1.25.2; python_version > '3.8'
numpy==1.24.3; python_version <= '3.8'
pandas==2.0.3
pandas-ta==0.3.14b
ccxt==4.0.17
cryptography==41.0.1; platform_machine != 'armv7l'
ccxt==4.0.48
cryptography==41.0.3; platform_machine != 'armv7l'
cryptography==40.0.1; platform_machine == 'armv7l'
aiohttp==3.8.4
SQLAlchemy==2.0.18
aiohttp==3.8.5
SQLAlchemy==2.0.19
python-telegram-bot==20.4
# can't be hard-pinned due to telegram-bot pinning httpx with ~
httpx>=0.24.1
arrow==1.2.3
cachetools==5.3.1
requests==2.31.0
urllib3==2.0.3
jsonschema==4.18.0
TA-Lib==0.4.26
urllib3==2.0.4
jsonschema==4.18.5
TA-Lib==0.4.27
technical==1.4.0
tabulate==0.9.0
pycoingecko==3.1.0
@ -24,8 +24,8 @@ jinja2==3.1.2
tables==3.8.0
blosc==1.11.1
joblib==1.3.1
rich==13.4.2
pyarrow==12.0.0; platform_machine != 'armv7l'
rich==13.5.2
pyarrow==12.0.1; platform_machine != 'armv7l'
# find first, C search in arrays
py_find_1st==1.1.5
@ -39,10 +39,10 @@ orjson==3.9.2
sdnotify==0.3.2
# API Server
fastapi==0.100.0
pydantic==1.10.9
uvicorn==0.22.0
pyjwt==2.7.0
fastapi==0.100.1
pydantic==1.10.11
uvicorn==0.23.2
pyjwt==2.8.0
aiofiles==23.1.0
psutil==5.9.5

View File

@ -97,7 +97,7 @@ setup(
'rich',
'pyarrow; platform_machine != "armv7l"',
'fastapi',
'pydantic>=1.8.0',
'pydantic>=1.8.0,<2.0',
'uvicorn',
'psutil',
'pyjwt',

View File

@ -1389,8 +1389,6 @@ def test_convert_data_trades(mocker, testdatadir):
def test_start_list_data(testdatadir, capsys):
args = [
"list-data",
"--data-format-ohlcv",
"json",
"--datadir",
str(testdatadir),
]
@ -1398,14 +1396,14 @@ def test_start_list_data(testdatadir, capsys):
pargs['config'] = None
start_list_data(pargs)
captured = capsys.readouterr()
assert "Found 17 pair / timeframe combinations." in captured.out
assert "Found 16 pair / timeframe combinations." in captured.out
assert "\n| Pair | Timeframe | Type |\n" in captured.out
assert "\n| UNITTEST/BTC | 1m, 5m, 8m, 30m | spot |\n" in captured.out
args = [
"list-data",
"--data-format-ohlcv",
"json",
"feather",
"--pairs", "XRP/ETH",
"--datadir",
str(testdatadir),
@ -1421,8 +1419,6 @@ def test_start_list_data(testdatadir, capsys):
args = [
"list-data",
"--data-format-ohlcv",
"json",
"--trading-mode", "futures",
"--datadir",
str(testdatadir),
@ -1439,8 +1435,6 @@ def test_start_list_data(testdatadir, capsys):
args = [
"list-data",
"--data-format-ohlcv",
"json",
"--pairs", "XRP/ETH",
"--datadir",
str(testdatadir),

View File

@ -526,6 +526,7 @@ def get_default_conf(testdatadir):
"disableparamexport": True,
"internals": {},
"export": "none",
"dataformat_ohlcv": "feather",
"candle_type_def": CandleType.SPOT,
}
return configuration
@ -3001,85 +3002,85 @@ def mark_ohlcv():
def funding_rate_history_hourly():
return [
{
"symbol": "ADA/USDT",
"symbol": "ADA/USDT:USDT",
"fundingRate": -0.000008,
"timestamp": 1630454400000,
"datetime": "2021-09-01T00:00:00.000Z"
},
{
"symbol": "ADA/USDT",
"symbol": "ADA/USDT:USDT",
"fundingRate": -0.000004,
"timestamp": 1630458000000,
"datetime": "2021-09-01T01:00:00.000Z"
},
{
"symbol": "ADA/USDT",
"symbol": "ADA/USDT:USDT",
"fundingRate": 0.000012,
"timestamp": 1630461600000,
"datetime": "2021-09-01T02:00:00.000Z"
},
{
"symbol": "ADA/USDT",
"symbol": "ADA/USDT:USDT",
"fundingRate": -0.000003,
"timestamp": 1630465200000,
"datetime": "2021-09-01T03:00:00.000Z"
},
{
"symbol": "ADA/USDT",
"symbol": "ADA/USDT:USDT",
"fundingRate": -0.000007,
"timestamp": 1630468800000,
"datetime": "2021-09-01T04:00:00.000Z"
},
{
"symbol": "ADA/USDT",
"symbol": "ADA/USDT:USDT",
"fundingRate": 0.000003,
"timestamp": 1630472400000,
"datetime": "2021-09-01T05:00:00.000Z"
},
{
"symbol": "ADA/USDT",
"symbol": "ADA/USDT:USDT",
"fundingRate": 0.000019,
"timestamp": 1630476000000,
"datetime": "2021-09-01T06:00:00.000Z"
},
{
"symbol": "ADA/USDT",
"symbol": "ADA/USDT:USDT",
"fundingRate": 0.000003,
"timestamp": 1630479600000,
"datetime": "2021-09-01T07:00:00.000Z"
},
{
"symbol": "ADA/USDT",
"symbol": "ADA/USDT:USDT",
"fundingRate": -0.000003,
"timestamp": 1630483200000,
"datetime": "2021-09-01T08:00:00.000Z"
},
{
"symbol": "ADA/USDT",
"symbol": "ADA/USDT:USDT",
"fundingRate": 0,
"timestamp": 1630486800000,
"datetime": "2021-09-01T09:00:00.000Z"
},
{
"symbol": "ADA/USDT",
"symbol": "ADA/USDT:USDT",
"fundingRate": 0.000013,
"timestamp": 1630490400000,
"datetime": "2021-09-01T10:00:00.000Z"
},
{
"symbol": "ADA/USDT",
"symbol": "ADA/USDT:USDT",
"fundingRate": 0.000077,
"timestamp": 1630494000000,
"datetime": "2021-09-01T11:00:00.000Z"
},
{
"symbol": "ADA/USDT",
"symbol": "ADA/USDT:USDT",
"fundingRate": 0.000072,
"timestamp": 1630497600000,
"datetime": "2021-09-01T12:00:00.000Z"
},
{
"symbol": "ADA/USDT",
"symbol": "ADA/USDT:USDT",
"fundingRate": 0.000097,
"timestamp": 1630501200000,
"datetime": "2021-09-01T13:00:00.000Z"
@ -3091,13 +3092,13 @@ def funding_rate_history_hourly():
def funding_rate_history_octohourly():
return [
{
"symbol": "ADA/USDT",
"symbol": "ADA/USDT:USDT",
"fundingRate": -0.000008,
"timestamp": 1630454400000,
"datetime": "2021-09-01T00:00:00.000Z"
},
{
"symbol": "ADA/USDT",
"symbol": "ADA/USDT:USDT",
"fundingRate": -0.000003,
"timestamp": 1630483200000,
"datetime": "2021-09-01T08:00:00.000Z"

View File

@ -343,12 +343,24 @@ def test_calculate_expectancy(testdatadir):
filename = testdatadir / "backtest_results/backtest-result.json"
bt_data = load_backtest_data(filename)
expectancy = calculate_expectancy(DataFrame())
expectancy, expectancy_ratio = calculate_expectancy(DataFrame())
assert expectancy == 0.0
assert expectancy_ratio == 100
expectancy = calculate_expectancy(bt_data)
expectancy, expectancy_ratio = calculate_expectancy(bt_data)
assert isinstance(expectancy, float)
assert pytest.approx(expectancy) == 0.07151374226574791
assert isinstance(expectancy_ratio, float)
assert pytest.approx(expectancy) == 5.820687070932315e-06
assert pytest.approx(expectancy_ratio) == 0.07151374226574791
data = {
'profit_abs': [100, 200, 50, -150, 300, -100, 80, -30]
}
df = DataFrame(data)
expectancy, expectancy_ratio = calculate_expectancy(df)
assert pytest.approx(expectancy) == 56.25
assert pytest.approx(expectancy_ratio) == 0.60267857
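A back-of-the-envelope check of the asserted values, assuming the usual expectancy definitions:

winrate = 5 / 8                                               # 0.625
avg_win, avg_loss = 730 / 5, 280 / 3                          # 146.0, ~93.33
expectancy = winrate * avg_win - (1 - winrate) * avg_loss     # 56.25
expectancy_ratio = (1 + avg_win / avg_loss) * winrate - 1     # ~0.602679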
def test_calculate_sortino(testdatadir):

View File

@ -135,6 +135,73 @@ def test_ohlcv_fill_up_missing_data2(caplog):
f"{len(data)} - after: {len(data2)}.*", caplog)
def test_ohlcv_to_dataframe_1M():
# Monthly ticks from 2019-09-01 to 2023-07-01
ticks = [
[1567296000000, 8042.08, 10475.54, 7700.67, 8041.96, 608742.1109999999],
[1569888000000, 8285.31, 10408.48, 7172.76, 9150.0, 2439561.887],
[1572566400000, 9149.88, 9550.0, 6510.19, 7542.93, 4042674.725],
[1575158400000, 7541.08, 7800.0, 6427.0, 7189.0, 4063882.296],
[1577836800000, 7189.43, 9599.0, 6863.44, 9364.51, 5165281.358],
[1580515200000, 9364.5, 10540.0, 8450.0, 8531.98, 4581788.124],
[1583020800000, 8532.5, 9204.0, 3621.81, 6407.1, 10859497.479],
[1585699200000, 6407.1, 9479.77, 6140.0, 8624.76, 11276526.968],
[1588291200000, 8623.61, 10080.0, 7940.0, 9446.43, 12469561.02],
[1590969600000, 9446.49, 10497.25, 8816.4, 9138.87, 6684044.201],
[1593561600000, 9138.88, 11488.0, 8900.0, 11343.68, 5709327.926],
[1596240000000, 11343.67, 12499.42, 10490.0, 11658.11, 6746487.129],
[1598918400000, 11658.11, 12061.07, 9808.58, 10773.0, 6442697.051],
[1601510400000, 10773.0, 14140.0, 10371.03, 13783.73, 7404103.004],
[1604188800000, 13783.73, 19944.0, 13195.0, 19720.0, 12328272.549],
[1606780800000, 19722.09, 29376.7, 17555.0, 28951.68, 10067314.24],
[1609459200000, 28948.19, 42125.51, 27800.0, 33126.21, 12408873.079],
[1612137600000, 33125.11, 58472.14, 32322.47, 45163.36, 8784474.482],
[1614556800000, 45162.64, 61950.0, 44972.49, 58807.24, 9459821.267],
[1617235200000, 58810.99, 64986.11, 46930.43, 57684.16, 7895051.389],
[1619827200000, 57688.29, 59654.0, 28688.0, 37243.38, 16790964.443],
[1622505600000, 37244.36, 41413.0, 28780.01, 35031.39, 23474519.886],
[1625097600000, 35031.39, 48168.6, 29242.24, 41448.11, 16932491.175],
[1627776000000, 41448.1, 50600.0, 37291.0, 47150.32, 13645800.254],
[1630454400000, 47150.32, 52950.0, 39503.58, 43796.57, 10734742.869],
[1633046400000, 43799.49, 67150.0, 43260.01, 61348.61, 9111112.847],
[1635724800000, 61347.14, 69198.7, 53245.0, 56975.0, 7111424.463],
[1638316800000, 56978.06, 59100.0, 40888.89, 46210.56, 8404449.024],
[1640995200000, 46210.57, 48000.0, 32853.83, 38439.04, 11047479.277],
[1643673600000, 38439.04, 45847.5, 34303.7, 43155.0, 10910339.91],
[1646092800000, 43155.0, 48200.0, 37134.0, 45506.0, 10459721.586],
[1648771200000, 45505.9, 47448.0, 37550.0, 37614.5, 8463568.862],
[1651363200000, 37614.4, 40071.7, 26631.0, 31797.8, 14463715.774],
[1654041600000, 31797.9, 31986.1, 17593.2, 19923.5, 20710810.306],
[1656633600000, 19923.3, 24700.0, 18780.1, 23290.1, 20582518.513],
[1659312000000, 23290.1, 25200.0, 19508.0, 20041.5, 17221921.557],
[1661990400000, 20041.4, 22850.0, 18084.3, 19411.7, 21935261.414],
[1664582400000, 19411.6, 21088.0, 17917.8, 20482.0, 16625843.584],
[1667260800000, 20482.1, 21473.7, 15443.2, 17153.3, 18460614.013],
[1669852800000, 17153.4, 18400.0, 16210.0, 16537.6, 9702408.711],
[1672531200000, 16537.5, 23962.7, 16488.0, 23119.4, 14732180.645],
[1675209600000, 23119.5, 25347.6, 21338.0, 23129.6, 15025197.415],
[1677628800000, 23129.7, 29184.8, 19521.6, 28454.9, 23317458.541],
[1680307200000, 28454.8, 31059.0, 26919.3, 29223.0, 14654208.219],
[1682899200000, 29223.0, 29840.0, 25751.0, 27201.1, 13328157.284],
[1685577600000, 27201.1, 31500.0, 24777.0, 30460.2, 14099299.273],
[1688169600000, 30460.2, 31850.0, 28830.0, 29338.8, 8760361.377]
]
data = ohlcv_to_dataframe(ticks, '1M', pair="UNITTEST/USDT",
fill_missing=False, drop_incomplete=False)
assert len(data) == len(ticks)
assert data.iloc[0]['date'].strftime('%Y-%m-%d') == '2019-09-01'
assert data.iloc[-1]['date'].strftime('%Y-%m-%d') == '2023-07-01'
# Test with filling missing data
data = ohlcv_to_dataframe(ticks, '1M', pair="UNITTEST/USDT",
fill_missing=True, drop_incomplete=False)
assert len(data) == len(ticks)
assert data.iloc[0]['date'].strftime('%Y-%m-%d') == '2019-09-01'
assert data.iloc[-1]['date'].strftime('%Y-%m-%d') == '2023-07-01'
def test_ohlcv_drop_incomplete(caplog):
timeframe = '1d'
ticks = [
@ -304,8 +371,8 @@ def test_convert_ohlcv_format(default_conf, testdatadir, tmpdir, file_base, cand
files_temp = []
files_new = []
for file in file_base:
file_orig = testdatadir / f"{prependix}{file}.json"
file_temp = tmpdir1 / f"{prependix}{file}.json"
file_orig = testdatadir / f"{prependix}{file}.feather"
file_temp = tmpdir1 / f"{prependix}{file}.feather"
file_new = tmpdir1 / f"{prependix}{file}.json.gz"
IDataHandler.create_dir_if_needed(file_temp)
copyfile(file_orig, file_temp)
@ -327,7 +394,7 @@ def test_convert_ohlcv_format(default_conf, testdatadir, tmpdir, file_base, cand
convert_ohlcv_format(
default_conf,
convert_from='json',
convert_from='feather',
convert_to='jsongz',
erase=False,
)
@ -341,7 +408,7 @@ def test_convert_ohlcv_format(default_conf, testdatadir, tmpdir, file_base, cand
convert_ohlcv_format(
default_conf,
convert_from='jsongz',
convert_to='json',
convert_to='feather',
erase=True,
)
for file in (files_temp):

View File

@ -20,7 +20,7 @@ from tests.conftest import log_has, log_has_re
def test_datahandler_ohlcv_get_pairs(testdatadir):
pairs = JsonDataHandler.ohlcv_get_pairs(testdatadir, '5m', candle_type=CandleType.SPOT)
pairs = FeatherDataHandler.ohlcv_get_pairs(testdatadir, '5m', candle_type=CandleType.SPOT)
# Convert to set to avoid failures due to sorting
assert set(pairs) == {'UNITTEST/BTC', 'XLM/BTC', 'ETH/BTC', 'TRX/BTC', 'LTC/BTC',
'XMR/BTC', 'ZEC/BTC', 'ADA/BTC', 'ETC/BTC', 'NXT/BTC',
@ -32,7 +32,7 @@ def test_datahandler_ohlcv_get_pairs(testdatadir):
pairs = HDF5DataHandler.ohlcv_get_pairs(testdatadir, '5m', candle_type=CandleType.SPOT)
assert set(pairs) == {'UNITTEST/BTC'}
pairs = JsonDataHandler.ohlcv_get_pairs(testdatadir, '1h', candle_type=CandleType.MARK)
pairs = FeatherDataHandler.ohlcv_get_pairs(testdatadir, '1h', candle_type=CandleType.MARK)
assert set(pairs) == {'UNITTEST/USDT:USDT', 'XRP/USDT:USDT'}
pairs = JsonGzDataHandler.ohlcv_get_pairs(testdatadir, '1h', candle_type=CandleType.FUTURES)
@ -79,7 +79,7 @@ def test_rebuild_pair_from_filename(input, expected):
def test_datahandler_ohlcv_get_available_data(testdatadir):
paircombs = JsonDataHandler.ohlcv_get_available_data(testdatadir, TradingMode.SPOT)
paircombs = FeatherDataHandler.ohlcv_get_available_data(testdatadir, TradingMode.SPOT)
# Convert to set to avoid failures due to sorting
assert set(paircombs) == {
('UNITTEST/BTC', '5m', CandleType.SPOT),
@ -98,10 +98,9 @@ def test_datahandler_ohlcv_get_available_data(testdatadir):
('XRP/ETH', '5m', CandleType.SPOT),
('UNITTEST/BTC', '30m', CandleType.SPOT),
('UNITTEST/BTC', '8m', CandleType.SPOT),
('NOPAIR/XXX', '4m', CandleType.SPOT),
}
paircombs = JsonDataHandler.ohlcv_get_available_data(testdatadir, TradingMode.FUTURES)
paircombs = FeatherDataHandler.ohlcv_get_available_data(testdatadir, TradingMode.FUTURES)
# Convert to set to avoid failures due to sorting
assert set(paircombs) == {
('UNITTEST/USDT:USDT', '1h', 'mark'),
@ -140,16 +139,11 @@ def test_jsondatahandler_ohlcv_purge(mocker, testdatadir):
def test_jsondatahandler_ohlcv_load(testdatadir, caplog):
dh = JsonDataHandler(testdatadir)
df = dh.ohlcv_load('XRP/ETH', '5m', 'spot')
assert len(df) == 712
df_mark = dh.ohlcv_load('UNITTEST/USDT:USDT', '1h', candle_type="mark")
assert len(df_mark) == 100
df = dh.ohlcv_load('UNITTEST/BTC', '1m', 'spot')
assert len(df) > 0
df_no_mark = dh.ohlcv_load('UNITTEST/USDT', '1h', 'spot')
assert len(df_no_mark) == 0
# Failure case (empty array)
# # Failure case (empty array)
df1 = dh.ohlcv_load('NOPAIR/XXX', '4m', 'spot')
assert len(df1) == 0
assert log_has("Could not load data for NOPAIR/XXX.", caplog)
@ -444,7 +438,7 @@ def test_generic_datahandler_ohlcv_load_and_resave(
tmpdir2 = tmpdir1 / 'futures'
tmpdir2.mkdir()
# Load data from one common file
dhbase = get_datahandler(testdatadir, 'json')
dhbase = get_datahandler(testdatadir, 'feather')
ohlcv = dhbase._ohlcv_load(pair, timeframe, None, candle_type=candle_type)
assert isinstance(ohlcv, DataFrame)
assert len(ohlcv) > 0

View File

@ -63,9 +63,10 @@ def test_historic_ohlcv(mocker, default_conf, ohlcv_history):
def test_historic_ohlcv_dataformat(mocker, default_conf, ohlcv_history):
hdf5loadmock = MagicMock(return_value=ohlcv_history)
jsonloadmock = MagicMock(return_value=ohlcv_history)
featherloadmock = MagicMock(return_value=ohlcv_history)
mocker.patch("freqtrade.data.history.hdf5datahandler.HDF5DataHandler._ohlcv_load", hdf5loadmock)
mocker.patch("freqtrade.data.history.jsondatahandler.JsonDataHandler._ohlcv_load", jsonloadmock)
mocker.patch("freqtrade.data.history.featherdatahandler.FeatherDataHandler._ohlcv_load",
featherloadmock)
default_conf["runmode"] = RunMode.BACKTEST
exchange = get_patched_exchange(mocker, default_conf)
@ -73,17 +74,17 @@ def test_historic_ohlcv_dataformat(mocker, default_conf, ohlcv_history):
data = dp.historic_ohlcv("UNITTEST/BTC", "5m")
assert isinstance(data, DataFrame)
hdf5loadmock.assert_not_called()
jsonloadmock.assert_called_once()
featherloadmock.assert_called_once()
# Switching to dataformat hdf5
hdf5loadmock.reset_mock()
jsonloadmock.reset_mock()
featherloadmock.reset_mock()
default_conf["dataformat_ohlcv"] = "hdf5"
dp = DataProvider(default_conf, exchange)
data = dp.historic_ohlcv("UNITTEST/BTC", "5m")
assert isinstance(data, DataFrame)
hdf5loadmock.assert_called_once()
jsonloadmock.assert_not_called()
featherloadmock.assert_not_called()
@pytest.mark.parametrize('candle_type', [

View File

@ -68,7 +68,7 @@ def test_load_data_7min_timeframe(caplog, testdatadir) -> None:
def test_load_data_1min_timeframe(ohlcv_history, mocker, caplog, testdatadir) -> None:
mocker.patch(f'{EXMS}.get_historic_ohlcv', return_value=ohlcv_history)
file = testdatadir / 'UNITTEST_BTC-1m.json'
file = testdatadir / 'UNITTEST_BTC-1m.feather'
load_data(datadir=testdatadir, timeframe='1m', pairs=['UNITTEST/BTC'])
assert file.is_file()
assert not log_has(
@ -79,7 +79,7 @@ def test_load_data_1min_timeframe(ohlcv_history, mocker, caplog, testdatadir) ->
def test_load_data_mark(ohlcv_history, mocker, caplog, testdatadir) -> None:
mocker.patch(f'{EXMS}.get_historic_ohlcv', return_value=ohlcv_history)
file = testdatadir / 'futures/UNITTEST_USDT_USDT-1h-mark.json'
file = testdatadir / 'futures/UNITTEST_USDT_USDT-1h-mark.feather'
load_data(datadir=testdatadir, timeframe='1h', pairs=['UNITTEST/BTC'], candle_type='mark')
assert file.is_file()
assert not log_has(
@ -90,7 +90,7 @@ def test_load_data_mark(ohlcv_history, mocker, caplog, testdatadir) -> None:
def test_load_data_startup_candles(mocker, testdatadir) -> None:
ltfmock = mocker.patch(
'freqtrade.data.history.jsondatahandler.JsonDataHandler._ohlcv_load',
'freqtrade.data.history.featherdatahandler.FeatherDataHandler._ohlcv_load',
MagicMock(return_value=DataFrame()))
timerange = TimeRange('date', None, 1510639620, 0)
load_pair_history(pair='UNITTEST/BTC', timeframe='1m',
@ -112,7 +112,7 @@ def test_load_data_with_new_pair_1min(ohlcv_history_list, mocker, caplog,
tmpdir1 = Path(tmpdir)
mocker.patch(f'{EXMS}.get_historic_ohlcv', return_value=ohlcv_history_list)
exchange = get_patched_exchange(mocker, default_conf)
file = tmpdir1 / 'MEME_BTC-1m.json'
file = tmpdir1 / 'MEME_BTC-1m.feather'
# do not download a new pair if refresh_pairs isn't set
load_pair_history(datadir=tmpdir1, timeframe='1m', pair='MEME/BTC', candle_type=candle_type)
@ -280,10 +280,10 @@ def test_download_pair_history(
mocker.patch(f'{EXMS}.get_historic_ohlcv', return_value=ohlcv_history_list)
exchange = get_patched_exchange(mocker, default_conf)
tmpdir1 = Path(tmpdir)
file1_1 = tmpdir1 / f'{subdir}MEME_BTC-1m{file_tail}.json'
file1_5 = tmpdir1 / f'{subdir}MEME_BTC-5m{file_tail}.json'
file2_1 = tmpdir1 / f'{subdir}CFI_BTC-1m{file_tail}.json'
file2_5 = tmpdir1 / f'{subdir}CFI_BTC-5m{file_tail}.json'
file1_1 = tmpdir1 / f'{subdir}MEME_BTC-1m{file_tail}.feather'
file1_5 = tmpdir1 / f'{subdir}MEME_BTC-5m{file_tail}.feather'
file2_1 = tmpdir1 / f'{subdir}CFI_BTC-1m{file_tail}.feather'
file2_5 = tmpdir1 / f'{subdir}CFI_BTC-5m{file_tail}.feather'
assert not file1_1.is_file()
assert not file2_1.is_file()
@ -326,7 +326,7 @@ def test_download_pair_history2(mocker, default_conf, testdatadir) -> None:
[1509836580000, 0.00161, 0.00161, 0.00161, 0.00161, 82.390199]
]
json_dump_mock = mocker.patch(
'freqtrade.data.history.jsondatahandler.JsonDataHandler.ohlcv_store',
'freqtrade.data.history.featherdatahandler.FeatherDataHandler.ohlcv_store',
return_value=None)
mocker.patch(f'{EXMS}.get_historic_ohlcv', return_value=tick)
exchange = get_patched_exchange(mocker, default_conf)
@ -386,7 +386,7 @@ def test_load_partial_missing(testdatadir, caplog) -> None:
def test_init(default_conf) -> None:
assert {} == load_data(
datadir=Path(''),
datadir=Path(),
pairs=[],
timeframe=default_conf['timeframe']
)
@ -395,14 +395,14 @@ def test_init(default_conf) -> None:
def test_init_with_refresh(default_conf, mocker) -> None:
exchange = get_patched_exchange(mocker, default_conf)
refresh_data(
datadir=Path(''),
datadir=Path(),
pairs=[],
timeframe=default_conf['timeframe'],
exchange=exchange,
candle_type=CandleType.SPOT
)
assert {} == load_data(
datadir=Path(''),
datadir=Path(),
pairs=[],
timeframe=default_conf['timeframe']
)
@ -627,8 +627,8 @@ def test_download_trades_history(trades_history, mocker, default_conf, testdatad
def test_convert_trades_to_ohlcv(testdatadir, tmpdir, caplog):
tmpdir1 = Path(tmpdir)
pair = 'XRP/ETH'
file1 = tmpdir1 / 'XRP_ETH-1m.json'
file5 = tmpdir1 / 'XRP_ETH-5m.json'
file1 = tmpdir1 / 'XRP_ETH-1m.feather'
file5 = tmpdir1 / 'XRP_ETH-5m.feather'
filetrades = tmpdir1 / 'XRP_ETH-trades.json.gz'
copyfile(testdatadir / file1.name, file1)
copyfile(testdatadir / file5.name, file5)
@ -641,6 +641,7 @@ def test_convert_trades_to_ohlcv(testdatadir, tmpdir, caplog):
tr = TimeRange.parse_timerange('20191011-20191012')
convert_trades_to_ohlcv([pair], timeframes=['1m', '5m'],
data_format_trades='jsongz',
datadir=tmpdir1, timerange=tr, erase=True)
assert log_has("Deleting existing data for pair XRP/ETH, interval 1m.", caplog)
@ -648,11 +649,12 @@ def test_convert_trades_to_ohlcv(testdatadir, tmpdir, caplog):
df_1m = load_pair_history(datadir=tmpdir1, timeframe="1m", pair=pair)
df_5m = load_pair_history(datadir=tmpdir1, timeframe="5m", pair=pair)
assert df_1m.equals(dfbak_1m)
assert df_5m.equals(dfbak_5m)
assert_frame_equal(dfbak_1m, df_1m, check_exact=True)
assert_frame_equal(dfbak_5m, df_5m, check_exact=True)
assert not log_has('Could not convert NoDatapair to OHLCV.', caplog)
convert_trades_to_ohlcv(['NoDatapair'], timeframes=['1m', '5m'],
data_format_trades='jsongz',
datadir=tmpdir1, timerange=tr, erase=True)
assert log_has('Could not convert NoDatapair to OHLCV.', caplog)

View File

@ -391,7 +391,7 @@ class TestCCXTExchange:
assert po['id'] is not None
if len(order.keys()) < 5:
# Kucoin case
assert po['status'] == 'closed'
assert po['status'] is None
continue
assert po['timestamp'] == 1674493798550
assert isinstance(po['datetime'], str)

View File

@ -4343,11 +4343,11 @@ def test__fetch_and_calculate_funding_fees(
ex = get_patched_exchange(mocker, default_conf, api_mock, id=exchange)
mocker.patch(f'{EXMS}.timeframes', PropertyMock(return_value=['1h', '4h', '8h']))
funding_fees = ex._fetch_and_calculate_funding_fees(
pair='ADA/USDT', amount=amount, is_short=True, open_date=d1, close_date=d2)
pair='ADA/USDT:USDT', amount=amount, is_short=True, open_date=d1, close_date=d2)
assert pytest.approx(funding_fees) == expected_fees
# Fees for Longs are inverted
funding_fees = ex._fetch_and_calculate_funding_fees(
pair='ADA/USDT', amount=amount, is_short=False, open_date=d1, close_date=d2)
pair='ADA/USDT:USDT', amount=amount, is_short=False, open_date=d1, close_date=d2)
assert pytest.approx(funding_fees) == -expected_fees
# Return empty "refresh_latest"
@ -4355,7 +4355,7 @@ def test__fetch_and_calculate_funding_fees(
ex = get_patched_exchange(mocker, default_conf, api_mock, id=exchange)
with pytest.raises(ExchangeError, match="Could not find funding rates."):
ex._fetch_and_calculate_funding_fees(
pair='ADA/USDT', amount=amount, is_short=False, open_date=d1, close_date=d2)
pair='ADA/USDT:USDT', amount=amount, is_short=False, open_date=d1, close_date=d2)
@pytest.mark.parametrize('exchange,expected_fees', [
@ -5424,7 +5424,7 @@ def test_stoploss_contract_size(mocker, default_conf, contract_size, order_amoun
assert api_mock.create_order.call_args_list[0][1]['amount'] == order_amount
assert order['amount'] == 100
assert order['cost'] == 100
assert order['cost'] == order_amount
assert order['filled'] == 100
assert order['remaining'] == 100

View File

@ -21,7 +21,7 @@ from freqtrade.data.history import get_timerange
from freqtrade.enums import CandleType, ExitType, RunMode
from freqtrade.exceptions import DependencyException, OperationalException
from freqtrade.exchange.exchange import timeframe_to_next_date
from freqtrade.optimize.backtest_caching import get_strategy_run_id
from freqtrade.optimize.backtest_caching import get_backtest_metadata_filename, get_strategy_run_id
from freqtrade.optimize.backtesting import Backtesting
from freqtrade.persistence import LocalTrade, Trade
from freqtrade.resolvers import StrategyResolver
@ -601,6 +601,9 @@ def test_backtest__enter_trade_futures(default_conf_usdt, fee, mocker) -> None:
trade = backtesting._enter_trade(pair, row=row, direction='short')
assert pytest.approx(trade.liquidation_price) == 0.11787191
assert pytest.approx(trade.orders[0].cost) == (
trade.stake_amount * trade.leverage + trade.fee_open)
assert pytest.approx(trade.orders[-1].stake_amount) == trade.stake_amount
# Stake-amount too high!
mocker.patch(f"{EXMS}.get_min_pair_stake_amount", return_value=600.0)
@ -1995,3 +1998,40 @@ def test_get_strategy_run_id(default_conf_usdt):
strategy = StrategyResolver.load_strategy(default_conf_usdt)
x = get_strategy_run_id(strategy)
assert isinstance(x, str)
def test_get_backtest_metadata_filename():
    # Test with a file path
    filename = Path('backtest_results.json')
    expected = Path('backtest_results.meta.json')
    assert get_backtest_metadata_filename(filename) == expected

    # Test with a file path with multiple dots in the name
    filename = Path('/path/to/backtest.results.json')
    expected = Path('/path/to/backtest.results.meta.json')
    assert get_backtest_metadata_filename(filename) == expected

    # Test with a file path with no parent directory
    filename = Path('backtest_results.json')
    expected = Path('backtest_results.meta.json')
    assert get_backtest_metadata_filename(filename) == expected

    # Test with a string file path
    filename = '/path/to/backtest_results.json'
    expected = Path('/path/to/backtest_results.meta.json')
    assert get_backtest_metadata_filename(filename) == expected

    # Test with a string file path with no extension
    filename = '/path/to/backtest_results'
    expected = Path('/path/to/backtest_results.meta')
    assert get_backtest_metadata_filename(filename) == expected

    # Test with a string file path with multiple dots in the name
    filename = '/path/to/backtest.results.json'
    expected = Path('/path/to/backtest.results.meta.json')
    assert get_backtest_metadata_filename(filename) == expected

    # Test with a string file path with no parent directory
    filename = 'backtest_results.json'
    expected = Path('backtest_results.meta.json')
    assert get_backtest_metadata_filename(filename) == expected
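
Every case above reduces to one rule: insert `.meta` between the stem and the final suffix, accepting both str and Path input. A minimal sketch consistent with these assertions (the shipped implementation may differ in detail):

from pathlib import Path
from typing import Union

def get_backtest_metadata_filename(filename: Union[str, Path]) -> Path:
    # Sketch only: derive '<stem>.meta<suffix>' next to the results file.
    filename = Path(filename)
    return filename.parent / f'{filename.stem}.meta{filename.suffix}'
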

View File

@ -23,7 +23,8 @@ from freqtrade.optimize.optimize_reports import (generate_backtest_stats, genera
                                                 store_backtest_analysis_results,
                                                 store_backtest_stats, text_table_bt_results,
                                                 text_table_exit_reason, text_table_strategy)
from freqtrade.optimize.optimize_reports.optimize_reports import _get_resample_from_period
from freqtrade.optimize.optimize_reports.optimize_reports import (_get_resample_from_period,
                                                                  calc_streak)
from freqtrade.resolvers.strategy_resolver import StrategyResolver
from freqtrade.util import dt_ts
from freqtrade.util.datetime_helpers import dt_from_ts, dt_utc
@ -212,7 +213,8 @@ def test_store_backtest_stats(testdatadir, mocker):
    dump_mock = mocker.patch('freqtrade.optimize.optimize_reports.bt_storage.file_dump_json')
    store_backtest_stats(testdatadir, {'metadata': {}}, '2022_01_01_15_05_13')
    data = {'metadata': {}, 'strategy': {}, 'strategy_comparison': []}
    store_backtest_stats(testdatadir, data, '2022_01_01_15_05_13')
    assert dump_mock.call_count == 3
    assert isinstance(dump_mock.call_args_list[0][0][0], Path)
@ -220,7 +222,7 @@ def test_store_backtest_stats(testdatadir, mocker):
    dump_mock.reset_mock()
    filename = testdatadir / 'testresult.json'
    store_backtest_stats(filename, {'metadata': {}}, '2022_01_01_15_05_13')
    store_backtest_stats(filename, data, '2022_01_01_15_05_13')
    assert dump_mock.call_count == 3
    assert isinstance(dump_mock.call_args_list[0][0][0], Path)
    # result will be testdatadir / testresult-<timestamp>.json
@ -348,6 +350,32 @@ def test_generate_trading_stats(testdatadir):
    assert res['losses'] == 0


def test_calc_streak(testdatadir):
    df = pd.DataFrame({
        'profit_ratio': [0.05, -0.02, -0.03, -0.05, 0.01, 0.02, 0.03, 0.04, -0.02, -0.03],
    })
    # 4 consecutive wins, 3 consecutive losses
    res = calc_streak(df)
    assert res == (4, 3)
    assert isinstance(res[0], int)
    assert isinstance(res[1], int)

    # invert situation
    df1 = df.copy()
    df1['profit_ratio'] = df1['profit_ratio'] * -1
    assert calc_streak(df1) == (3, 4)

    df_empty = pd.DataFrame({
        'profit_ratio': [],
    })
    assert df_empty.empty
    assert calc_streak(df_empty) == (0, 0)

    filename = testdatadir / "backtest_results/backtest-result.json"
    bt_data = load_backtest_data(filename)
    assert calc_streak(bt_data) == (7, 18)
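
As exercised above, calc_streak returns the longest winning and losing runs over profit_ratio as plain ints, with (0, 0) for an empty frame. One way to compute it with pandas, sketched under the assumption that profit_ratio <= 0 counts as a loss (the shipped implementation may differ):

from typing import Tuple

import pandas as pd

def calc_streak(df: pd.DataFrame) -> Tuple[int, int]:
    # Label wins, number each run of equal labels, then take the
    # longest run per label.
    if df.empty:
        return 0, 0
    wins = pd.Series(df['profit_ratio'].gt(0).values)
    run_id = wins.ne(wins.shift()).cumsum()
    run_len = wins.groupby(run_id).cumcount() + 1
    best_win = int(run_len[wins].max()) if wins.any() else 0
    best_loss = int(run_len[~wins].max()) if (~wins).any() else 0
    return best_win, best_loss
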
def test_text_table_exit_reason():
    results = pd.DataFrame(

View File

@ -563,14 +563,14 @@ def test_calc_open_close_trade_price(
    trade.open_order_id = f'something-{is_short}-{lev}-{exchange}'
    oobj = Order.parse_from_ccxt_object(entry_order, 'ADA/USDT', trade.entry_side)
    oobj.trade = trade
    oobj._trade_live = trade
    oobj.update_from_ccxt_object(entry_order)
    trade.update_trade(oobj)
    trade.funding_fees = funding_fees

    oobj = Order.parse_from_ccxt_object(exit_order, 'ADA/USDT', trade.exit_side)
    oobj.trade = trade
    oobj._trade_live = trade
    oobj.update_from_ccxt_object(exit_order)
    trade.update_trade(oobj)

View File

@ -179,6 +179,7 @@ def test_trade_fromjson():
    assert trade.open_date_utc == datetime(2022, 10, 18, 9, 12, 42, tzinfo=timezone.utc)
    assert isinstance(trade.open_date, datetime)
    assert trade.exit_reason == 'no longer good'
    assert trade.realized_profit == 2.76315361

    assert len(trade.orders) == 5
    last_o = trade.orders[-1]

View File

@ -35,7 +35,7 @@ def test_gen_pairlist_with_local_file(mocker, rpl_config):
    mock_file_path.exists.return_value = True

    jsonparse = json.loads(mock_file.read.return_value)
    mocker.patch('freqtrade.plugins.pairlist.RemotePairList.json.load', return_value=jsonparse)
    mocker.patch('freqtrade.plugins.pairlist.RemotePairList.rapidjson.load', return_value=jsonparse)

    rpl_config['pairlists'] = [
        {
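
The mock target changes because RemotePairList now deserializes with rapidjson (the python-rapidjson package) instead of the stdlib json module; for this call the two APIs are drop-in compatible, so only the patch path moves. A quick sketch of the equivalence:

import io
import json

import rapidjson  # python-rapidjson package

payload = '["BTC/USDT", "ETH/USDT"]'
# rapidjson.load reads from a stream just like json.load.
assert rapidjson.load(io.StringIO(payload)) == json.loads(payload)
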

View File

@ -402,6 +402,8 @@ def test_rpc_trade_statistics(default_conf_usdt, ticker, fee, mocker) -> None:
    assert res['first_trade_timestamp'] == 0
    assert res['latest_trade_date'] == ''
    assert res['latest_trade_timestamp'] == 0
    assert res['expectancy'] == 0
    assert res['expectancy_ratio'] == 100

    # Create some test data
    create_mock_trades_usdt(fee)
@ -413,6 +415,9 @@ def test_rpc_trade_statistics(default_conf_usdt, ticker, fee, mocker) -> None:
    assert pytest.approx(stats['profit_all_coin']) == -77.45964918
    assert pytest.approx(stats['profit_all_percent_mean']) == -57.86
    assert pytest.approx(stats['profit_all_fiat']) == -85.205614098
    assert pytest.approx(stats['winrate']) == 0.666666667
    assert pytest.approx(stats['expectancy']) == 0.913333333
    assert pytest.approx(stats['expectancy_ratio']) == 0.223308883
    assert stats['trade_count'] == 7
    assert stats['first_trade_humanized'] == '2 days ago'
    assert stats['latest_trade_humanized'] == '17 minutes ago'
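
The new statistics follow the usual trading definitions: with winrate W, average win A_w and average absolute loss A_l, expectancy = W * A_w - (1 - W) * A_l (in profit units), and expectancy_ratio = (A_w / A_l) * W - (1 - W), with 100 as the sentinel when there are no losing trades (matching the empty-statistics assertion above). A hedged sketch of those formulas, not freqtrade's exact code:

from typing import List, Tuple

def expectancy_metrics(profits: List[float]) -> Tuple[float, float]:
    # profits are per-trade results; trades with profit <= 0 count as losses.
    wins = [p for p in profits if p > 0]
    losses = [abs(p) for p in profits if p <= 0]
    winrate = len(wins) / len(profits)
    avg_win = sum(wins) / len(wins) if wins else 0.0
    avg_loss = sum(losses) / len(losses) if losses else 0.0
    expectancy = winrate * avg_win - (1 - winrate) * avg_loss
    # With no losses the risk/reward ratio is undefined; use the sentinel 100.
    expectancy_ratio = ((avg_win / avg_loss) * winrate - (1 - winrate)
                        if avg_loss else 100.0)
    return expectancy, expectancy_ratio
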

Some files were not shown because too many files have changed in this diff.