Merge branch 'develop' into feat/stoploss_adjust

commit 1a8b793c0a
Author: Matthias
Date:   2023-08-29 07:04:08 +02:00

96 changed files with 1953 additions and 948 deletions


@ -10,7 +10,7 @@ updates:
directory: "/"
schedule:
interval: weekly
open-pull-requests-limit: 10
open-pull-requests-limit: 15
target-branch: develop
- package-ecosystem: "github-actions"


@ -8,7 +8,7 @@ repos:
# stages: [push]
- repo: https://github.com/pre-commit/mirrors-mypy
rev: "v1.5.0"
rev: "v1.5.1"
hooks:
- id: mypy
exclude: build_helpers
@ -18,7 +18,7 @@ repos:
- types-requests==2.31.0.2
- types-tabulate==0.9.0.3
- types-python-dateutil==2.8.19.14
- SQLAlchemy==2.0.19
- SQLAlchemy==2.0.20
# stages: [push]
- repo: https://github.com/pycqa/isort


@ -1,8 +1,14 @@
# .readthedocs.yml
version: 2
build:
image: latest
os: "ubuntu-22.04"
tools:
python: "3.11"
python:
version: 3.8
setup_py_install: false
install:
- requirements: docs/requirements-docs.txt
mkdocs:
configuration: mkdocs.yml


@ -1,4 +1,4 @@
FROM python:3.11.4-slim-bullseye as base
FROM python:3.11.5-slim-bullseye as base
# Setup env
ENV LANG C.UTF-8

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.


@ -8,8 +8,9 @@ if [ -n "$2" ] || [ ! -f "${INSTALL_LOC}/lib/libta_lib.a" ]; then
tar zxvf ta-lib-0.4.0-src.tar.gz
cd ta-lib \
&& sed -i.bak "s|0.00000001|0.000000000000000001 |g" src/ta_func/ta_utility.h \
&& curl 'https://raw.githubusercontent.com/gcc-mirror/gcc/master/config.guess' -o config.guess \
&& curl 'https://raw.githubusercontent.com/gcc-mirror/gcc/master/config.sub' -o config.sub \
&& echo "Downloading gcc config.guess and config.sub" \
&& curl -s 'https://raw.githubusercontent.com/gcc-mirror/gcc/master/config.guess' -o config.guess \
&& curl -s 'https://raw.githubusercontent.com/gcc-mirror/gcc/master/config.sub' -o config.sub \
&& ./configure --prefix=${INSTALL_LOC}/ \
&& make
if [ $? -ne 0 ]; then


@ -5,7 +5,7 @@ python -m pip install --upgrade pip wheel
$pyv = python -c "import sys; print(f'{sys.version_info.major}.{sys.version_info.minor}')"
pip install --find-links=build_helpers\ TA-Lib
pip install --find-links=build_helpers\ --prefer-binary TA-Lib
pip install -r requirements-dev.txt
pip install -e .

Binary file not shown.

Binary image changed (48 KiB, preview not shown).


@ -7,7 +7,7 @@ This page provides you some basic concepts on how Freqtrade works and operates.
* **Strategy**: Your trading strategy, telling the bot what to do.
* **Trade**: Open position.
* **Open Order**: Order which is currently placed on the exchange, and is not yet complete.
* **Pair**: Tradable pair, usually in the format of Base/Quote (e.g. XRP/USDT).
* **Pair**: Tradable pair, usually in the format of Base/Quote (e.g. `XRP/USDT` for spot, `XRP/USDT:USDT` for futures).
* **Timeframe**: Candle length to use (e.g. `"5m"`, `"1h"`, ...).
* **Indicators**: Technical indicators (SMA, EMA, RSI, ...).
* **Limit order**: Limit orders which execute at the defined limit price or better.
@ -20,6 +20,20 @@ This page provides you some basic concepts on how Freqtrade works and operates.
All profit calculations of Freqtrade include fees. For Backtesting / Hyperopt / Dry-run modes, the exchange default fee is used (lowest tier on the exchange). For live operations, fees are used as applied by the exchange (this includes BNB rebates etc.).
## Pair naming
Freqtrade follows the [ccxt naming convention](https://docs.ccxt.com/#/README?id=consistency-of-base-and-quote-currencies) for currencies.
Using the wrong naming convention for the market you are trading in will usually result in the bot not recognizing the pair, typically failing with errors like "this pair is not available".
### Spot pair naming
For spot pairs, naming will be `base/quote` (e.g. `ETH/USDT`).
### Futures pair naming
For futures pairs, naming will be `base/quote:settle` (e.g. `ETH/USDT:USDT`).
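As a quick illustration (a sketch, not a complete configuration; `trading_mode`, `margin_mode` and `exchange.pair_whitelist` are standard freqtrade configuration keys), a futures whitelist would look like the following, while a spot configuration would omit the mode keys and list plain `ETH/USDT` style pairs:

```json
{
    "trading_mode": "futures",
    "margin_mode": "isolated",
    "exchange": {
        "pair_whitelist": [
            "ETH/USDT:USDT",
            "XRP/USDT:USDT"
        ]
    }
}
```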
## Bot execution logic
Starting freqtrade in dry-run or live mode (using `freqtrade trade`) will start the bot and begin the bot iteration loop.


@ -3,7 +3,7 @@
This page explains the different parameters of the bot and how to run it.
!!! Note
If you've used `setup.sh`, don't forget to activate your virtual environment (`source .env/bin/activate`) before running freqtrade commands.
If you've used `setup.sh`, don't forget to activate your virtual environment (`source .venv/bin/activate`) before running freqtrade commands.
!!! Warning "Up-to-date clock"
The clock on the system running the bot must be accurate, synchronized to an NTP server frequently enough to avoid problems with communication to the exchanges.


@ -27,7 +27,7 @@ For this to work, first activate your virtual environment and run the following
``` bash
# Activate virtual environment
source .env/bin/activate
source .venv/bin/activate
pip install ipykernel
ipython kernel install --user --name=freqtrade


@ -77,7 +77,7 @@ def test_method_to_test(caplog):
### Debug configuration
To debug freqtrade, we recommend VSCode with the following launch configuration (located in `.vscode/launch.json`).
To debug freqtrade, we recommend VSCode (with the Python extension), using the following launch configuration (located in `.vscode/launch.json`).
Details will obviously vary between setups - but this should work to get you started.
``` json
@ -102,6 +102,19 @@ This method can also be used to debug a strategy, by setting the breakpoints wit
A similar setup can also be used for PyCharm - using `freqtrade` as the module name, and setting the command line arguments as "parameters".
??? Tip "Correct venv usage"
When using a virtual environment (which you should), make sure that your editor uses the correct virtual environment to avoid problems or "unknown import" errors.
#### VSCode
You can select the correct environment in VSCode with the command "Python: Select Interpreter" - which will show you the environments the extension has detected.
If your environment has not been detected, you can also pick a path manually.
#### PyCharm
In PyCharm, you can select the appropriate environment in the "Run/Debug Configurations" window.
![Pycharm debug configuration](assets/pycharm_debug.png)
!!! Note "Startup directory"
This assumes that you have the repository checked out, and the editor is started at the repository root level (so setup.py is at the top level of your repository).


@ -2,6 +2,10 @@
The `Edge Positioning` module uses probability to calculate your win rate and risk reward ratio. It will use these statistics to control your strategy trade entry points, position size and stoploss.
!!! Danger "Deprecated functionality"
`Edge positioning` (or short "Edge") is currently in maintenance mode only (we keep existing functionality alive) and should be considered deprecated.
It will not receive new features until either someone steps forward to take ownership of that module, or we decide to remove it from freqtrade.
!!! Warning
When using `Edge positioning` with a dynamic whitelist (VolumePairList), make sure to also use `AgeFilter` and set it to at least `calculate_since_number_of_days` to avoid problems with missing data.
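As a hedged sketch of a pairlist setup matching this warning - assuming `edge.calculate_since_number_of_days` is set to 30 in your configuration - the `AgeFilter` would be configured like this:

```json
"pairlists": [
    {"method": "VolumePairList", "number_assets": 20, "sort_key": "quoteVolume"},
    {"method": "AgeFilter", "min_days_listed": 30}
]
```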


@ -36,7 +36,7 @@ Running the bot with `freqtrade trade --config config.json` shows the output `fr
This could be caused by the following reasons:
* The virtual environment is not active.
* Run `source .env/bin/activate` to activate the virtual environment.
* Run `source .venv/bin/activate` to activate the virtual environment.
* The installation did not complete successfully.
* Please check the [Installation documentation](installation.md).


@ -101,11 +101,11 @@ Mandatory parameters are marked as **Required** and have to be set in one of the
#### trainer_kwargs
| Parameter | Description |
|------------|-------------|
|--------------|-------------|
| | **Model training parameters within the `freqai.model_training_parameters.trainer_kwargs` sub dictionary**
| `max_iters` | The number of training iterations to run. iteration here refers to the number of times we call self.optimizer.step(). used to calculate n_epochs. <br> **Datatype:** int. <br> Default: `100`.
| `batch_size` | The size of the batches to use during training.. <br> **Datatype:** int. <br> Default: `64`.
| `max_n_eval_batches` | The maximum number batches to use for evaluation.. <br> **Datatype:** int, optional. <br> Default: `None`.
| `n_epochs` | The number of times the entire training dataset is used to update the model's parameters (an epoch is one full pass through the training dataset). Overrides `n_steps`. Either `n_epochs` or `n_steps` must be set. <br><br> **Datatype:** int, optional. <br> Default: `10`.
| `n_steps` | An alternative way of setting `n_epochs` - the number of training iterations to run. Iteration here refers to the number of times we call `optimizer.step()`. Ignored if `n_epochs` is set. A simplified version of the formula: <br><br> n_epochs = n_steps / (n_obs / batch_size) <br><br> The motivation here is that `n_steps` is easier to optimize and keep stable across different `n_obs` - the number of data points. <br><br> **Datatype:** int, optional. <br> Default: `None`.
| `batch_size` | The size of the batches to use during training. <br><br> **Datatype:** int. <br> Default: `64`.
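To make the `n_steps` / `n_epochs` relationship concrete, a small worked example with assumed numbers (not defaults of any particular model):

```python
# 64,000 training samples at batch_size 64 give 1,000 optimizer steps
# per epoch, so n_steps=5000 corresponds to roughly 5 epochs.
n_obs, batch_size, n_steps = 64_000, 64, 5_000
n_epochs = n_steps // (n_obs // batch_size)
assert n_epochs == 5
```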
### Additional parameters


@ -20,7 +20,7 @@ With the current framework, we aim to expose the training environment via the co
We envision the majority of users focusing their effort on creative design of the `calculate_reward()` function [details here](#creating-a-custom-reward-function), while leaving the rest of the environment untouched. Other users may not touch the environment at all, and they will only play with the configuration settings and the powerful feature engineering that already exists in FreqAI. Meanwhile, we enable advanced users to create their own model classes entirely.
The framework is built on stable_baselines3 (torch) and OpenAI gym for the base environment class. But generally speaking, the model class is well isolated. Thus, the addition of competing libraries can be easily integrated into the existing framework. For the environment, it is inheriting from `gym.env` which means that it is necessary to write an entirely new environment in order to switch to a different library.
The framework is built on stable_baselines3 (torch) and OpenAI gym for the base environment class. But generally speaking, the model class is well isolated. Thus, the addition of competing libraries can be easily integrated into the existing framework. For the environment, it is inheriting from `gym.Env` which means that it is necessary to write an entirely new environment in order to switch to a different library.
### Important considerations
@ -173,7 +173,7 @@ class MyCoolRLModel(ReinforcementLearner):
"""
class MyRLEnv(Base5ActionRLEnv):
"""
User made custom environment. This class inherits from BaseEnvironment and gym.env.
User made custom environment. This class inherits from BaseEnvironment and gym.Env.
Users can override any functions from those parent classes. Here is an example
of a user customized `calculate_reward()` function.
@ -254,7 +254,7 @@ FreqAI also provides a built in episodic summary logger called `self.tensorboard
```python
class MyRLEnv(Base5ActionRLEnv):
"""
User made custom environment. This class inherits from BaseEnvironment and gym.env.
User made custom environment. This class inherits from BaseEnvironment and gym.Env.
Users can override any functions from those parent classes. Here is an example
of a user customized `calculate_reward()` function.
"""


@ -31,7 +31,7 @@ The docker-image includes hyperopt dependencies, no further action needed.
### Easy installation script (setup.sh) / Manual installation
```bash
source .env/bin/activate
source .venv/bin/activate
pip install -r requirements-hyperopt.txt
```


@ -143,11 +143,11 @@ If you are on Debian, Ubuntu or MacOS, freqtrade provides the script to install
### Activate your virtual environment
Each time you open a new terminal, you must run `source .env/bin/activate` to activate your virtual environment.
Each time you open a new terminal, you must run `source .venv/bin/activate` to activate your virtual environment.
```bash
# then activate your .env
source ./.env/bin/activate
# activate virtual environment
source ./.venv/bin/activate
```
### Congratulations
@ -172,7 +172,7 @@ With this option, the script will install the bot and most dependencies:
You will need to have git and python3.8+ installed beforehand for this to work.
* Mandatory software as: `ta-lib`
* Setup your virtualenv under `.env/`
* Setup your virtualenv under `.venv/`
This option is a combination of installation tasks and `--reset`
@ -225,11 +225,11 @@ rm -rf ./ta-lib*
You will run freqtrade in a separate `virtual environment`
```bash
# create virtualenv in directory /freqtrade/.env
python3 -m venv .env
# create virtualenv in directory /freqtrade/.venv
python3 -m venv .venv
# run virtualenv
source .env/bin/activate
source .venv/bin/activate
```
#### Install python dependencies
@ -286,7 +286,7 @@ cd freqtrade
#### Freqtrade install: Conda Environment
```bash
conda create --name freqtrade python=3.10
conda create --name freqtrade python=3.11
```
!!! Note "Creating Conda Environment"
@ -383,7 +383,7 @@ You've made it this far, so you have successfully installed freqtrade.
freqtrade create-userdir --userdir user_data
# Step 2 - Create a new configuration file
freqtrade new-config --config config.json
freqtrade new-config --config user_data/config.json
```
You are ready to run, read [Bot Configuration](configuration.md), remember to start with `dry_run: True` and verify that everything is working.
@ -393,7 +393,7 @@ To learn how to setup your configuration, please refer to the [Bot Configuration
### Start the Bot
```bash
freqtrade trade --config config.json --strategy SampleStrategy
freqtrade trade --config user_data/config.json --strategy SampleStrategy
```
!!! Warning
@ -411,8 +411,8 @@ If you used (1)`Script` or (2)`Manual` installation, you need to run the bot in
# if:
bash: freqtrade: command not found
# then activate your .env
source ./.env/bin/activate
# then activate your virtual environment
source ./.venv/bin/activate
```
### MacOS installation error


@ -64,7 +64,7 @@ You will also have to pick a "margin mode" (explanation below) - with freqtrade
##### Pair namings
Freqtrade follows the [ccxt naming conventions for futures](https://docs.ccxt.com/en/latest/manual.html?#perpetual-swap-perpetual-future).
Freqtrade follows the [ccxt naming conventions for futures](https://docs.ccxt.com/#/README?id=perpetual-swap-perpetual-future).
A futures pair will therefore have the naming of `base/quote:settle` (e.g. `ETH/USDT:USDT`).
### Margin mode


@ -1,6 +1,6 @@
markdown==3.4.4
mkdocs==1.5.2
mkdocs-material==9.1.21
mkdocs-material==9.2.5
mdx_truly_sane_lists==1.3
pymdown-extensions==10.1
jinja2==3.1.2


@ -31,8 +31,8 @@ Other versions must be downloaded from the above link.
``` powershell
cd \path\freqtrade
python -m venv .env
.env\Scripts\activate.ps1
python -m venv .venv
.venv\Scripts\activate.ps1
# optionally install ta-lib from wheel
# Eventually adjust the below filename to match the downloaded wheel
pip install --find-links build_helpers\ TA-Lib -U


@ -1,5 +1,5 @@
""" Freqtrade bot """
__version__ = '2023.8.dev'
__version__ = '2023.9-dev'
if 'dev' in __version__:
from pathlib import Path


@ -10,7 +10,7 @@ from freqtrade.configuration.directory_operations import chown_user_directory
from freqtrade.constants import UNLIMITED_STAKE_AMOUNT
from freqtrade.exceptions import OperationalException
from freqtrade.exchange import MAP_EXCHANGE_CHILDCLASS, available_exchanges
from freqtrade.misc import render_template
from freqtrade.util import render_template
logger = logging.getLogger(__name__)
@ -105,7 +105,7 @@ def ask_user_config() -> Dict[str, Any]:
"type": "select",
"name": "exchange_name",
"message": "Select exchange",
"choices": lambda x: [
"choices": [
"binance",
"binanceus",
"bittrex",


@ -441,7 +441,7 @@ AVAILABLE_CLI_OPTIONS = {
"dataformat_trades": Arg(
'--data-format-trades',
help='Storage format for downloaded trades data. (default: `feather`).',
choices=constants.AVAILABLE_DATAHANDLERS_TRADES,
choices=constants.AVAILABLE_DATAHANDLERS,
),
"show_timerange": Arg(
'--show-timerange',


@ -10,7 +10,7 @@ from freqtrade.configuration.directory_operations import copy_sample_files, crea
from freqtrade.constants import USERPATH_STRATEGIES
from freqtrade.enums import RunMode
from freqtrade.exceptions import OperationalException
from freqtrade.misc import render_template, render_template_with_fallback
from freqtrade.util import render_template, render_template_with_fallback
logger = logging.getLogger(__name__)
@ -35,6 +35,10 @@ def deploy_new_strategy(strategy_name: str, strategy_path: Path, subtemplate: st
Deploy new strategy from template to strategy_path
"""
fallback = 'full'
attributes = render_template_with_fallback(
templatefile=f"strategy_subtemplates/strategy_attributes_{subtemplate}.j2",
templatefallbackfile=f"strategy_subtemplates/strategy_attributes_{fallback}.j2",
)
indicators = render_template_with_fallback(
templatefile=f"strategy_subtemplates/indicators_{subtemplate}.j2",
templatefallbackfile=f"strategy_subtemplates/indicators_{fallback}.j2",
@ -58,6 +62,7 @@ def deploy_new_strategy(strategy_name: str, strategy_path: Path, subtemplate: st
strategy_text = render_template(templatefile='base_strategy.py.j2',
arguments={"strategy": strategy_name,
"attributes": attributes,
"indicators": indicators,
"buy_trend": buy_trend,
"sell_trend": sell_trend,


@ -7,9 +7,10 @@ def start_webserver(args: Dict[str, Any]) -> None:
"""
Main entry point for webserver mode
"""
from freqtrade.configuration import Configuration
from freqtrade.configuration import setup_utils_configuration
from freqtrade.rpc.api_server import ApiServer
# Initialize configuration
config = Configuration(args, RunMode.WEBSERVER).get_config()
config = setup_utils_configuration(args, RunMode.WEBSERVER)
ApiServer(config, standalone=True)


@ -51,6 +51,8 @@ def validate_config_schema(conf: Dict[str, Any], preliminary: bool = False) -> D
conf_schema['required'] = constants.SCHEMA_BACKTEST_REQUIRED
else:
conf_schema['required'] = constants.SCHEMA_BACKTEST_REQUIRED_FINAL
elif conf.get('runmode', RunMode.OTHER) == RunMode.WEBSERVER:
conf_schema['required'] = constants.SCHEMA_MINIMAL_WEBSERVER
else:
conf_schema['required'] = constants.SCHEMA_MINIMAL_REQUIRED
try:


@ -41,7 +41,7 @@ def flat_vars_to_nested_dict(env_dict: Dict[str, Any], prefix: str) -> Dict[str,
key = env_var.replace(prefix, '')
for k in reversed(key.split('__')):
val = {k.lower(): get_var_typed(val)
if type(val) != dict and k not in no_convert else val}
if not isinstance(val, dict) and k not in no_convert else val}
relevant_vars = deep_merge_dicts(val, relevant_vars)
return relevant_vars
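A short sketch of how this nesting behaves (hypothetical values; the `FREQTRADE__` prefix and double-underscore nesting are the documented environment-variable convention):

```python
# Two underscores split a key into nested dictionary levels; values are
# type-converted via get_var_typed() unless the key is exempted.
env = {
    "FREQTRADE__EXCHANGE__NAME": "binance",
    "FREQTRADE__STAKE_AMOUNT": "100",
}
# flat_vars_to_nested_dict(env, "FREQTRADE__") would yield:
# {"exchange": {"name": "binance"}, "stake_amount": 100}
```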


@ -38,8 +38,7 @@ AVAILABLE_PAIRLISTS = ['StaticPairList', 'VolumePairList', 'ProducerPairList', '
'ShuffleFilter', 'SpreadFilter', 'VolatilityFilter']
AVAILABLE_PROTECTIONS = ['CooldownPeriod',
'LowProfitPairs', 'MaxDrawdown', 'StoplossGuard']
AVAILABLE_DATAHANDLERS_TRADES = ['json', 'jsongz', 'hdf5', 'feather']
AVAILABLE_DATAHANDLERS = AVAILABLE_DATAHANDLERS_TRADES + ['parquet']
AVAILABLE_DATAHANDLERS = ['json', 'jsongz', 'hdf5', 'feather', 'parquet']
BACKTEST_BREAKDOWNS = ['day', 'week', 'month']
BACKTEST_CACHE_AGE = ['none', 'day', 'week', 'month']
BACKTEST_CACHE_DEFAULT = 'day'
@ -50,6 +49,15 @@ DEFAULT_DATAFRAME_COLUMNS = ['date', 'open', 'high', 'low', 'close', 'volume']
# Don't modify sequence of DEFAULT_TRADES_COLUMNS
# it has wide consequences for stored trades files
DEFAULT_TRADES_COLUMNS = ['timestamp', 'id', 'type', 'side', 'price', 'amount', 'cost']
TRADES_DTYPES = {
'timestamp': 'int64',
'id': 'str',
'type': 'str',
'side': 'str',
'price': 'float64',
'amount': 'float64',
'cost': 'float64',
}
TRADING_MODES = ['spot', 'margin', 'futures']
MARGIN_MODES = ['cross', 'isolated', '']
@ -450,7 +458,7 @@ CONF_SCHEMA = {
},
'dataformat_trades': {
'type': 'string',
'enum': AVAILABLE_DATAHANDLERS_TRADES,
'enum': AVAILABLE_DATAHANDLERS,
'default': 'feather'
},
'position_adjustment_enable': {'type': 'boolean'},
@ -667,6 +675,9 @@ SCHEMA_MINIMAL_REQUIRED = [
'dataformat_ohlcv',
'dataformat_trades',
]
SCHEMA_MINIMAL_WEBSERVER = SCHEMA_MINIMAL_REQUIRED + [
'api_server',
]
CANCEL_REASON = {
"TIMEOUT": "cancelled due to timeout",


@ -1,16 +1,15 @@
"""
Functions to convert data from one format to another
"""
import itertools
import logging
from operator import itemgetter
from typing import Dict, List
import numpy as np
import pandas as pd
from pandas import DataFrame, to_datetime
from freqtrade.constants import DEFAULT_DATAFRAME_COLUMNS, DEFAULT_TRADES_COLUMNS, Config, TradeList
from freqtrade.constants import (DEFAULT_DATAFRAME_COLUMNS, DEFAULT_TRADES_COLUMNS, TRADES_DTYPES,
Config, TradeList)
from freqtrade.enums import CandleType, TradingMode
@ -195,15 +194,14 @@ def order_book_to_dataframe(bids: list, asks: list) -> DataFrame:
return frame
def trades_remove_duplicates(trades: List[List]) -> List[List]:
def trades_df_remove_duplicates(trades: pd.DataFrame) -> pd.DataFrame:
"""
Removes duplicates from the trades list.
Uses itertools.groupby to avoid converting to pandas.
Tests show it as being pretty efficient on lists of 4M Lists.
:param trades: List of Lists with constants.DEFAULT_TRADES_COLUMNS as columns
:return: same format as above, but with duplicates removed
Removes duplicates from the trades DataFrame.
Uses pandas.DataFrame.drop_duplicates to remove duplicates based on the 'timestamp' and 'id' columns.
:param trades: DataFrame with the columns constants.DEFAULT_TRADES_COLUMNS
:return: DataFrame with duplicates removed based on the 'timestamp' and 'id' columns
"""
return [i for i, _ in itertools.groupby(sorted(trades, key=itemgetter(0)))]
return trades.drop_duplicates(subset=['timestamp', 'id'])
def trades_dict_to_list(trades: List[Dict]) -> TradeList:
@ -215,7 +213,32 @@ def trades_dict_to_list(trades: List[Dict]) -> TradeList:
return [[t[col] for col in DEFAULT_TRADES_COLUMNS] for t in trades]
def trades_to_ohlcv(trades: TradeList, timeframe: str) -> DataFrame:
def trades_convert_types(trades: DataFrame) -> DataFrame:
"""
Convert Trades dtypes and add 'date' column
"""
trades = trades.astype(TRADES_DTYPES)
trades['date'] = to_datetime(trades['timestamp'], unit='ms', utc=True)
return trades
def trades_list_to_df(trades: TradeList, convert: bool = True):
"""
convert trades list to dataframe
:param trades: List of Lists with constants.DEFAULT_TRADES_COLUMNS as columns
"""
if not trades:
df = DataFrame(columns=DEFAULT_TRADES_COLUMNS)
else:
df = DataFrame(trades, columns=DEFAULT_TRADES_COLUMNS)
if convert:
df = trades_convert_types(df)
return df
def trades_to_ohlcv(trades: DataFrame, timeframe: str) -> DataFrame:
"""
Converts trades DataFrame to OHLCV DataFrame
:param trades: DataFrame of trades, as created by trades_list_to_df().
@ -225,12 +248,9 @@ def trades_to_ohlcv(trades: TradeList, timeframe: str) -> DataFrame:
"""
from freqtrade.exchange import timeframe_to_minutes
timeframe_minutes = timeframe_to_minutes(timeframe)
if not trades:
if trades.empty:
raise ValueError('Trade-list empty.')
df = pd.DataFrame(trades, columns=DEFAULT_TRADES_COLUMNS)
df['timestamp'] = pd.to_datetime(df['timestamp'], unit='ms',
utc=True,)
df = df.set_index('timestamp')
df = trades.set_index('date', drop=True)
df_new = df['price'].resample(f'{timeframe_minutes}min').ohlc()
df_new['volume'] = df['amount'].resample(f'{timeframe_minutes}min').sum()
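A minimal sketch of the new DataFrame-based trades flow, assuming the `trades_list_to_df` and `trades_to_ohlcv` signatures introduced above (values are illustrative only):

```python
from freqtrade.data.converter import trades_list_to_df, trades_to_ohlcv

# Raw trades in DEFAULT_TRADES_COLUMNS order:
# (timestamp, id, type, side, price, amount, cost)
raw = [
    [1693000000000, "1", "market", "buy", 25900.0, 0.010, 259.00],
    [1693000030000, "2", "market", "sell", 25905.0, 0.020, 518.10],
]
trades = trades_list_to_df(raw)        # typed DataFrame with a 'date' column
ohlcv = trades_to_ohlcv(trades, "1m")  # resamples on the 'date' index
```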


@ -17,7 +17,7 @@ from freqtrade.constants import (FULL_DATAFRAME_THRESHOLD, Config, ListPairsWith
from freqtrade.data.history import load_pair_history
from freqtrade.enums import CandleType, RPCMessageType, RunMode
from freqtrade.exceptions import ExchangeError, OperationalException
from freqtrade.exchange import Exchange, timeframe_to_seconds
from freqtrade.exchange import Exchange, timeframe_to_prev_date, timeframe_to_seconds
from freqtrade.exchange.types import OrderBook
from freqtrade.misc import append_candles_to_dataframe
from freqtrade.rpc import RPCManager
@ -46,6 +46,8 @@ class DataProvider:
self.__rpc = rpc
self.__cached_pairs: Dict[PairWithTimeframe, Tuple[DataFrame, datetime]] = {}
self.__slice_index: Optional[int] = None
self.__slice_date: Optional[datetime] = None
self.__cached_pairs_backtesting: Dict[PairWithTimeframe, DataFrame] = {}
self.__producer_pairs_df: Dict[str,
Dict[PairWithTimeframe, Tuple[DataFrame, datetime]]] = {}
@ -64,10 +66,19 @@ class DataProvider:
def _set_dataframe_max_index(self, limit_index: int):
"""
Limit analyzed dataframe to max specified index.
Only relevant in backtesting.
:param limit_index: dataframe index.
"""
self.__slice_index = limit_index
def _set_dataframe_max_date(self, limit_date: datetime):
"""
Limit informative dataframe to max specified date.
Only relevant in backtesting.
:param limit_date: "current date"
"""
self.__slice_date = limit_date
def _set_cached_df(
self,
pair: str,
@ -284,7 +295,7 @@ class DataProvider:
def historic_ohlcv(
self,
pair: str,
timeframe: Optional[str] = None,
timeframe: str,
candle_type: str = ''
) -> DataFrame:
"""
@ -307,7 +318,7 @@ class DataProvider:
timerange.subtract_start(tf_seconds * startup_candles)
self.__cached_pairs_backtesting[saved_pair] = load_pair_history(
pair=pair,
timeframe=timeframe or self._config['timeframe'],
timeframe=timeframe,
datadir=self._config['datadir'],
timerange=timerange,
data_format=self._config['dataformat_ohlcv'],
@ -354,7 +365,13 @@ class DataProvider:
data = self.ohlcv(pair=pair, timeframe=timeframe, candle_type=candle_type)
else:
# Get historical OHLCV data (cached on disk).
timeframe = timeframe or self._config['timeframe']
data = self.historic_ohlcv(pair=pair, timeframe=timeframe, candle_type=candle_type)
# Cut date to timeframe-specific date.
# This is necessary to prevent lookahead bias in callbacks through informative pairs.
if self.__slice_date:
cutoff_date = timeframe_to_prev_date(timeframe, self.__slice_date)
data = data.loc[data['date'] < cutoff_date]
if len(data) == 0:
logger.warning(f"No data found for ({pair}, {timeframe}, {candle_type}).")
return data
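A small sketch of the cutoff logic above, using `timeframe_to_prev_date` from `freqtrade.exchange` (times are illustrative):

```python
from datetime import datetime, timezone

from freqtrade.exchange import timeframe_to_prev_date

# With the backtest clock at 12:34 UTC and a 1h informative timeframe,
# only candles dated strictly before 12:00 stay visible - the candle
# dated 12:00 is still forming and must not leak into callbacks.
now = datetime(2023, 8, 29, 12, 34, tzinfo=timezone.utc)
cutoff = timeframe_to_prev_date("1h", now)
assert cutoff == datetime(2023, 8, 29, 12, 0, tzinfo=timezone.utc)
```

Note the strict `<` comparison: combined with the cutoff, it also excludes the candle that opened at the cutoff time itself.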


@ -4,7 +4,7 @@ from typing import Optional
from pandas import DataFrame, read_feather, to_datetime
from freqtrade.configuration import TimeRange
from freqtrade.constants import DEFAULT_DATAFRAME_COLUMNS, DEFAULT_TRADES_COLUMNS, TradeList
from freqtrade.constants import DEFAULT_DATAFRAME_COLUMNS, DEFAULT_TRADES_COLUMNS
from freqtrade.enums import CandleType
from .idatahandler import IDataHandler
@ -82,43 +82,41 @@ class FeatherDataHandler(IDataHandler):
"""
raise NotImplementedError()
def trades_store(self, pair: str, data: TradeList) -> None:
def _trades_store(self, pair: str, data: DataFrame) -> None:
"""
Store trades data (DataFrame) to file
:param pair: Pair - used for filename
:param data: List of Lists containing trade data,
:param data: Dataframe containing trades
column sequence as in DEFAULT_TRADES_COLUMNS
"""
filename = self._pair_trades_filename(self._datadir, pair)
self.create_dir_if_needed(filename)
data.reset_index(drop=True).to_feather(filename, compression_level=9, compression='lz4')
tradesdata = DataFrame(data, columns=DEFAULT_TRADES_COLUMNS)
tradesdata.to_feather(filename, compression_level=9, compression='lz4')
def trades_append(self, pair: str, data: TradeList):
def trades_append(self, pair: str, data: DataFrame):
"""
Append data to existing files
:param pair: Pair - used for filename
:param data: List of Lists containing trade data,
:param data: Dataframe containing trades
column sequence as in DEFAULT_TRADES_COLUMNS
"""
raise NotImplementedError()
def _trades_load(self, pair: str, timerange: Optional[TimeRange] = None) -> TradeList:
def _trades_load(self, pair: str, timerange: Optional[TimeRange] = None) -> DataFrame:
"""
Load a pair from file
# TODO: respect timerange ...
:param pair: Load trades for this pair
:param timerange: Timerange to load trades for - currently not implemented
:return: List of trades
:return: Dataframe containing trades
"""
filename = self._pair_trades_filename(self._datadir, pair)
if not filename.exists():
return []
return DataFrame(columns=DEFAULT_TRADES_COLUMNS)
tradesdata = read_feather(filename)
return tradesdata.values.tolist()
return tradesdata
@classmethod
def _get_file_extension(cls):


@ -5,7 +5,7 @@ import numpy as np
import pandas as pd
from freqtrade.configuration import TimeRange
from freqtrade.constants import DEFAULT_DATAFRAME_COLUMNS, DEFAULT_TRADES_COLUMNS, TradeList
from freqtrade.constants import DEFAULT_DATAFRAME_COLUMNS, DEFAULT_TRADES_COLUMNS
from freqtrade.enums import CandleType
from .idatahandler import IDataHandler
@ -100,42 +100,42 @@ class HDF5DataHandler(IDataHandler):
"""
raise NotImplementedError()
def trades_store(self, pair: str, data: TradeList) -> None:
def _trades_store(self, pair: str, data: pd.DataFrame) -> None:
"""
Store trades data (DataFrame) to file
:param pair: Pair - used for filename
:param data: List of Lists containing trade data,
:param data: Dataframe containing trades
column sequence as in DEFAULT_TRADES_COLUMNS
"""
key = self._pair_trades_key(pair)
pd.DataFrame(data, columns=DEFAULT_TRADES_COLUMNS).to_hdf(
data.to_hdf(
self._pair_trades_filename(self._datadir, pair), key,
mode='a', complevel=9, complib='blosc',
format='table', data_columns=['timestamp']
)
def trades_append(self, pair: str, data: TradeList):
def trades_append(self, pair: str, data: pd.DataFrame):
"""
Append data to existing files
:param pair: Pair - used for filename
:param data: List of Lists containing trade data,
:param data: Dataframe containing trades
column sequence as in DEFAULT_TRADES_COLUMNS
"""
raise NotImplementedError()
def _trades_load(self, pair: str, timerange: Optional[TimeRange] = None) -> TradeList:
def _trades_load(self, pair: str, timerange: Optional[TimeRange] = None) -> pd.DataFrame:
"""
Load a pair from h5 file.
:param pair: Load trades for this pair
:param timerange: Timerange to load trades for - currently not implemented
:return: List of trades
:return: Dataframe containing trades
"""
key = self._pair_trades_key(pair)
filename = self._pair_trades_filename(self._datadir, pair)
if not filename.exists():
return []
return pd.DataFrame(columns=DEFAULT_TRADES_COLUMNS)
where = []
if timerange:
if timerange.starttype == 'date':
@ -145,7 +145,7 @@ class HDF5DataHandler(IDataHandler):
trades: pd.DataFrame = pd.read_hdf(filename, key=key, mode="r", where=where)
trades[['id', 'type']] = trades[['id', 'type']].replace({np.nan: None})
return trades.values.tolist()
return trades
@classmethod
def _get_file_extension(cls):


@ -10,14 +10,16 @@ from freqtrade.configuration import TimeRange
from freqtrade.constants import (DATETIME_PRINT_FORMAT, DEFAULT_DATAFRAME_COLUMNS,
DL_DATA_TIMEFRAMES, Config)
from freqtrade.data.converter import (clean_ohlcv_dataframe, ohlcv_to_dataframe,
trades_remove_duplicates, trades_to_ohlcv)
trades_df_remove_duplicates, trades_list_to_df,
trades_to_ohlcv)
from freqtrade.data.history.idatahandler import IDataHandler, get_datahandler
from freqtrade.enums import CandleType
from freqtrade.exceptions import OperationalException
from freqtrade.exchange import Exchange
from freqtrade.plugins.pairlist.pairlist_helpers import dynamic_expand_pairlist
from freqtrade.util import format_ms_time
from freqtrade.util import dt_ts, format_ms_time
from freqtrade.util.binance_mig import migrate_binance_futures_data
from freqtrade.util.datetime_helpers import dt_now
logger = logging.getLogger(__name__)
@ -349,24 +351,27 @@ def _download_trades_history(exchange: Exchange,
# DEFAULT_TRADES_COLUMNS: 0 -> timestamp
# DEFAULT_TRADES_COLUMNS: 1 -> id
if trades and since < trades[0][0]:
if not trades.empty and since > 0 and since < trades.iloc[0]['timestamp']:
# since is before the first trade
logger.info(f"Start earlier than available data. Redownloading trades for {pair}...")
trades = []
logger.info(f"Start ({trades.iloc[0]['date']:{DATETIME_PRINT_FORMAT}}) earlier than "
f"available data. Redownloading trades for {pair}...")
trades = trades_list_to_df([])
if not since:
since = int((datetime.now() - timedelta(days=new_pairs_days)).timestamp()) * 1000
from_id = trades[-1][1] if trades else None
if trades and since < trades[-1][0]:
from_id = trades.iloc[-1]['id'] if not trades.empty else None
if not trades.empty and since < trades.iloc[-1]['timestamp']:
# Reset since to the last available point
# - 5 seconds (to ensure we're getting all trades)
since = trades[-1][0] - (5 * 1000)
since = trades.iloc[-1]['timestamp'] - (5 * 1000)
logger.info(f"Using last trade date -5s - Downloading trades for {pair} "
f"since: {format_ms_time(since)}.")
logger.debug(f"Current Start: {format_ms_time(trades[0][0]) if trades else 'None'}")
logger.debug(f"Current End: {format_ms_time(trades[-1][0]) if trades else 'None'}")
if not since:
since = dt_ts(dt_now() - timedelta(days=new_pairs_days))
logger.debug("Current Start: %s", 'None' if trades.empty else
f"{trades.iloc[0]['date']:{DATETIME_PRINT_FORMAT}}")
logger.debug("Current End: %s", 'None' if trades.empty else
f"{trades.iloc[-1]['date']:{DATETIME_PRINT_FORMAT}}")
logger.info(f"Current Amount of trades: {len(trades)}")
# Default since_ms to 30 days if nothing is given
@ -375,13 +380,16 @@ def _download_trades_history(exchange: Exchange,
until=until,
from_id=from_id,
)
trades.extend(new_trades[1])
new_trades_df = trades_list_to_df(new_trades[1])
trades = concat([trades, new_trades_df], axis=0)
# Remove duplicates to make sure we're not storing data we don't need
trades = trades_remove_duplicates(trades)
trades = trades_df_remove_duplicates(trades)
data_handler.trades_store(pair, data=trades)
logger.debug(f"New Start: {format_ms_time(trades[0][0])}")
logger.debug(f"New End: {format_ms_time(trades[-1][0])}")
logger.debug("New Start: %s", 'None' if trades.empty else
f"{trades.iloc[0]['date']:{DATETIME_PRINT_FORMAT}}")
logger.debug("New End: %s", 'None' if trades.empty else
f"{trades.iloc[-1]['date']:{DATETIME_PRINT_FORMAT}}")
logger.info(f"New Amount of trades: {len(trades)}")
return True


@ -15,8 +15,9 @@ from pandas import DataFrame
from freqtrade import misc
from freqtrade.configuration import TimeRange
from freqtrade.constants import ListPairsWithTimeframes, TradeList
from freqtrade.data.converter import clean_ohlcv_dataframe, trades_remove_duplicates, trim_dataframe
from freqtrade.constants import DEFAULT_TRADES_COLUMNS, ListPairsWithTimeframes
from freqtrade.data.converter import (clean_ohlcv_dataframe, trades_convert_types,
trades_df_remove_duplicates, trim_dataframe)
from freqtrade.enums import CandleType, TradingMode
from freqtrade.exchange import timeframe_to_seconds
@ -170,32 +171,42 @@ class IDataHandler(ABC):
return [cls.rebuild_pair_from_filename(match[0]) for match in _tmp if match]
@abstractmethod
def trades_store(self, pair: str, data: TradeList) -> None:
def _trades_store(self, pair: str, data: DataFrame) -> None:
"""
Store trades data (DataFrame) to file
:param pair: Pair - used for filename
:param data: List of Lists containing trade data,
:param data: Dataframe containing trades
column sequence as in DEFAULT_TRADES_COLUMNS
"""
@abstractmethod
def trades_append(self, pair: str, data: TradeList):
def trades_append(self, pair: str, data: DataFrame):
"""
Append data to existing files
:param pair: Pair - used for filename
:param data: List of Lists containing trade data,
:param data: Dataframe containing trades
column sequence as in DEFAULT_TRADES_COLUMNS
"""
@abstractmethod
def _trades_load(self, pair: str, timerange: Optional[TimeRange] = None) -> TradeList:
def _trades_load(self, pair: str, timerange: Optional[TimeRange] = None) -> DataFrame:
"""
Load a pair from file, either .json.gz or .json
:param pair: Load trades for this pair
:param timerange: Timerange to load trades for - currently not implemented
:return: List of trades
:return: Dataframe containing trades
"""
def trades_store(self, pair: str, data: DataFrame) -> None:
"""
Store trades data (DataFrame) to file
:param pair: Pair - used for filename
:param data: Dataframe containing trades
column sequence as in DEFAULT_TRADES_COLUMNS
"""
# Filter on expected columns (will remove the actual date column).
self._trades_store(pair, data[DEFAULT_TRADES_COLUMNS])
def trades_purge(self, pair: str) -> bool:
"""
Remove data for this pair
@ -208,7 +219,7 @@ class IDataHandler(ABC):
return True
return False
def trades_load(self, pair: str, timerange: Optional[TimeRange] = None) -> TradeList:
def trades_load(self, pair: str, timerange: Optional[TimeRange] = None) -> DataFrame:
"""
Load a pair from file, either .json.gz or .json
Removes duplicates in the process.
@ -216,7 +227,10 @@ class IDataHandler(ABC):
:param timerange: Timerange to load trades for - currently not implemented
:return: Dataframe containing trades
"""
return trades_remove_duplicates(self._trades_load(pair, timerange=timerange))
trades = trades_df_remove_duplicates(self._trades_load(pair, timerange=timerange))
trades = trades_convert_types(trades)
return trades
@classmethod
def create_dir_if_needed(cls, datadir: Path):


@ -6,8 +6,8 @@ from pandas import DataFrame, read_json, to_datetime
from freqtrade import misc
from freqtrade.configuration import TimeRange
from freqtrade.constants import DEFAULT_DATAFRAME_COLUMNS, TradeList
from freqtrade.data.converter import trades_dict_to_list
from freqtrade.constants import DEFAULT_DATAFRAME_COLUMNS, DEFAULT_TRADES_COLUMNS
from freqtrade.data.converter import trades_dict_to_list, trades_list_to_df
from freqtrade.enums import CandleType
from .idatahandler import IDataHandler
@ -94,45 +94,46 @@ class JsonDataHandler(IDataHandler):
"""
raise NotImplementedError()
def trades_store(self, pair: str, data: TradeList) -> None:
def _trades_store(self, pair: str, data: DataFrame) -> None:
"""
Store trades data (DataFrame) to file
:param pair: Pair - used for filename
:param data: List of Lists containing trade data,
:param data: Dataframe containing trades
column sequence as in DEFAULT_TRADES_COLUMNS
"""
filename = self._pair_trades_filename(self._datadir, pair)
misc.file_dump_json(filename, data, is_zip=self._use_zip)
trades = data.values.tolist()
misc.file_dump_json(filename, trades, is_zip=self._use_zip)
def trades_append(self, pair: str, data: TradeList):
def trades_append(self, pair: str, data: DataFrame):
"""
Append data to existing files
:param pair: Pair - used for filename
:param data: List of Lists containing trade data,
:param data: Dataframe containing trades
column sequence as in DEFAULT_TRADES_COLUMNS
"""
raise NotImplementedError()
def _trades_load(self, pair: str, timerange: Optional[TimeRange] = None) -> TradeList:
def _trades_load(self, pair: str, timerange: Optional[TimeRange] = None) -> DataFrame:
"""
Load a pair from file, either .json.gz or .json
# TODO: respect timerange ...
:param pair: Load trades for this pair
:param timerange: Timerange to load trades for - currently not implemented
:return: List of trades
:return: Dataframe containing trades
"""
filename = self._pair_trades_filename(self._datadir, pair)
tradesdata = misc.file_load_json(filename)
if not tradesdata:
return []
return DataFrame(columns=DEFAULT_TRADES_COLUMNS)
if isinstance(tradesdata[0], dict):
# Convert trades dict to list
logger.info("Old trades format detected - converting")
tradesdata = trades_dict_to_list(tradesdata)
return tradesdata
return trades_list_to_df(tradesdata, convert=False)
@classmethod
def _get_file_extension(cls):


@ -4,7 +4,7 @@ from typing import Optional
from pandas import DataFrame, read_parquet, to_datetime
from freqtrade.configuration import TimeRange
from freqtrade.constants import DEFAULT_DATAFRAME_COLUMNS, TradeList
from freqtrade.constants import DEFAULT_DATAFRAME_COLUMNS, DEFAULT_TRADES_COLUMNS, TradeList
from freqtrade.enums import CandleType
from .idatahandler import IDataHandler
@ -81,25 +81,22 @@ class ParquetDataHandler(IDataHandler):
"""
raise NotImplementedError()
def trades_store(self, pair: str, data: TradeList) -> None:
def _trades_store(self, pair: str, data: DataFrame) -> None:
"""
Store trades data (DataFrame) to file
:param pair: Pair - used for filename
:param data: List of Lists containing trade data,
:param data: Dataframe containing trades
column sequence as in DEFAULT_TRADES_COLUMNS
"""
# filename = self._pair_trades_filename(self._datadir, pair)
filename = self._pair_trades_filename(self._datadir, pair)
self.create_dir_if_needed(filename)
data.reset_index(drop=True).to_parquet(filename)
raise NotImplementedError()
# array = pa.array(data)
# array
# feather.write_feather(data, filename)
def trades_append(self, pair: str, data: TradeList):
def trades_append(self, pair: str, data: DataFrame):
"""
Append data to existing files
:param pair: Pair - used for filename
:param data: List of Lists containing trade data,
:param data: Dataframe containing trades
column sequence as in DEFAULT_TRADES_COLUMNS
"""
raise NotImplementedError()
@ -112,14 +109,13 @@ class ParquetDataHandler(IDataHandler):
:param timerange: Timerange to load trades for - currently not implemented
:return: Dataframe containing trades
"""
raise NotImplementedError()
# filename = self._pair_trades_filename(self._datadir, pair)
# tradesdata = misc.file_load_json(filename)
filename = self._pair_trades_filename(self._datadir, pair)
if not filename.exists():
return DataFrame(columns=DEFAULT_TRADES_COLUMNS)
# if not tradesdata:
# return []
tradesdata = read_parquet(filename)
# return tradesdata
return tradesdata
@classmethod
def _get_file_extension(cls):

File diff suppressed because it is too large


@ -5,6 +5,7 @@ Cryptocurrency Exchanges support
import asyncio
import inspect
import logging
import signal
from copy import deepcopy
from datetime import datetime, timedelta, timezone
from math import floor
@ -2151,7 +2152,7 @@ class Exchange:
except IndexError:
logger.exception("Error loading %s. Result was %s.", pair, data)
return pair, timeframe, candle_type, [], self._ohlcv_partial_candle
logger.debug("Done fetching pair %s, interval %s ...", pair, timeframe)
logger.debug("Done fetching pair %s, %s interval %s...", pair, candle_type, timeframe)
return pair, timeframe, candle_type, data, self._ohlcv_partial_candle
except ccxt.NotSupported as e:
@ -2253,6 +2254,7 @@ class Exchange:
from_id = t[-1][1]
trades.extend(t[:-1])
while True:
try:
t = await self._async_fetch_trades(pair,
params={self._trades_pagination_arg: from_id})
if t:
@ -2268,6 +2270,9 @@ class Exchange:
from_id = t[-1][1]
else:
break
except asyncio.CancelledError:
logger.debug("Async operation Interrupted, breaking trades DL loop.")
break
return (pair, trades)
@ -2286,6 +2291,7 @@ class Exchange:
# DEFAULT_TRADES_COLUMNS: 0 -> timestamp
# DEFAULT_TRADES_COLUMNS: 1 -> id
while True:
try:
t = await self._async_fetch_trades(pair, since=since)
if t:
since = t[-1][0]
@ -2297,6 +2303,9 @@ class Exchange:
break
else:
break
except asyncio.CancelledError:
logger.debug("Async operation Interrupted, breaking trades DL loop.")
break
return (pair, trades)
@ -2344,9 +2353,16 @@ class Exchange:
raise OperationalException("This exchange does not support downloading Trades.")
with self._loop_lock:
return self.loop.run_until_complete(
self._async_get_trade_history(pair=pair, since=since,
until=until, from_id=from_id))
task = asyncio.ensure_future(self._async_get_trade_history(
pair=pair, since=since, until=until, from_id=from_id))
for sig in [signal.SIGINT, signal.SIGTERM]:
try:
self.loop.add_signal_handler(sig, task.cancel)
except NotImplementedError:
# Not all platforms implement signals (e.g. windows)
pass
return self.loop.run_until_complete(task)
@retrier
def _get_funding_fees_from_exchange(self, pair: str, since: Union[datetime, int]) -> float:
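The pattern above registers signal handlers so Ctrl+C cancels the download task instead of leaving the event loop hanging. A self-contained sketch of the same idea (toy coroutine, not the freqtrade implementation):

```python
import asyncio
import signal


async def download_chunks() -> list:
    """Toy stand-in for a paginated trades download loop."""
    results = []
    while len(results) < 5:
        try:
            await asyncio.sleep(1)  # pretend this is one paginated fetch
            results.append("chunk")
        except asyncio.CancelledError:
            # Mirror the diff above: stop cleanly and keep partial results.
            break
    return results


loop = asyncio.new_event_loop()
task = loop.create_task(download_chunks())
for sig in (signal.SIGINT, signal.SIGTERM):
    try:
        loop.add_signal_handler(sig, task.cancel)
    except NotImplementedError:
        pass  # signals are not available on e.g. Windows
print(loop.run_until_complete(task))
```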


@ -11,6 +11,8 @@ from gymnasium import spaces
from gymnasium.utils import seeding
from pandas import DataFrame
from freqtrade.exceptions import OperationalException
logger = logging.getLogger(__name__)
@ -80,8 +82,9 @@ class BaseEnvironment(gym.Env):
self.can_short: bool = can_short
self.live: bool = live
if not self.live and self.add_state_info:
self.add_state_info = False
logger.warning("add_state_info is not available in backtesting. Deactivating.")
raise OperationalException("`add_state_info` is not available in backtesting. Change "
"parameter to false in your rl_config. See `add_state_info` "
"docs for more info.")
self.seed(seed)
self.reset_env(df, prices, window_size, reward_kwargs, starting_point)
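Since `add_state_info` now hard-fails in backtesting, the corresponding setting must be disabled there. A sketch of the relevant config fragment (assuming the standard `freqai.rl_config` layout):

```json
"freqai": {
    "rl_config": {
        "add_state_info": false
    }
}
```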


@ -26,9 +26,9 @@ class PyTorchMLPClassifier(BasePyTorchClassifier):
"model_training_parameters" : {
"learning_rate": 3e-4,
"trainer_kwargs": {
"max_iters": 5000,
"n_steps": 5000,
"batch_size": 64,
"max_n_eval_batches": null,
"n_epochs": null,
},
"model_kwargs": {
"hidden_dim": 512,


@ -27,9 +27,9 @@ class PyTorchMLPRegressor(BasePyTorchRegressor):
"model_training_parameters" : {
"learning_rate": 3e-4,
"trainer_kwargs": {
"max_iters": 5000,
"n_steps": 5000,
"batch_size": 64,
"max_n_eval_batches": null,
"n_epochs": null,
},
"model_kwargs": {
"hidden_dim": 512,


@ -30,9 +30,9 @@ class PyTorchTransformerRegressor(BasePyTorchRegressor):
"model_training_parameters" : {
"learning_rate": 3e-4,
"trainer_kwargs": {
"max_iters": 5000,
"n_steps": 5000,
"batch_size": 64,
"max_n_eval_batches": null
"n_epochs": null
},
"model_kwargs": {
"hidden_dim": 512,


@ -1,5 +1,4 @@
from abc import ABC, abstractmethod
from typing import Optional
import pandas as pd
import torch
@ -12,14 +11,14 @@ class PyTorchDataConvertor(ABC):
"""
@abstractmethod
def convert_x(self, df: pd.DataFrame, device: Optional[str] = None) -> torch.Tensor:
def convert_x(self, df: pd.DataFrame, device: str) -> torch.Tensor:
"""
:param df: "*_features" dataframe.
:param device: The device to use for training (e.g. 'cpu', 'cuda').
"""
@abstractmethod
def convert_y(self, df: pd.DataFrame, device: Optional[str] = None) -> torch.Tensor:
def convert_y(self, df: pd.DataFrame, device: str) -> torch.Tensor:
"""
:param df: "*_labels" dataframe.
:param device: The device to use for training (e.g. 'cpu', 'cuda').
@ -33,8 +32,8 @@ class DefaultPyTorchDataConvertor(PyTorchDataConvertor):
def __init__(
self,
target_tensor_type: Optional[torch.dtype] = None,
squeeze_target_tensor: bool = False
target_tensor_type: torch.dtype = torch.float32,
squeeze_target_tensor: bool = False,
):
"""
:param target_tensor_type: type of target tensor, for classification use
@ -45,23 +44,14 @@ class DefaultPyTorchDataConvertor(PyTorchDataConvertor):
self._target_tensor_type = target_tensor_type
self._squeeze_target_tensor = squeeze_target_tensor
def convert_x(self, df: pd.DataFrame, device: Optional[str] = None) -> torch.Tensor:
x = torch.from_numpy(df.values).float()
if device:
x = x.to(device)
def convert_x(self, df: pd.DataFrame, device: str) -> torch.Tensor:
numpy_arrays = df.values
x = torch.tensor(numpy_arrays, device=device, dtype=torch.float32)
return x
def convert_y(self, df: pd.DataFrame, device: Optional[str] = None) -> torch.Tensor:
y = torch.from_numpy(df.values)
if self._target_tensor_type:
y = y.to(self._target_tensor_type)
def convert_y(self, df: pd.DataFrame, device: str) -> torch.Tensor:
numpy_arrays = df.values
y = torch.tensor(numpy_arrays, device=device, dtype=self._target_tensor_type)
if self._squeeze_target_tensor:
y = y.squeeze()
if device:
y = y.to(device)
return y


@ -1,5 +1,4 @@
import logging
import math
from pathlib import Path
from typing import Any, Dict, List, Optional
@ -40,23 +39,27 @@ class PyTorchModelTrainer(PyTorchTrainerInterface):
state_dict and model_meta_data saved by self.save() method.
:param model_meta_data: Additional metadata about the model (optional).
:param data_convertor: convertor from pd.DataFrame to torch.tensor.
:param max_iters: The number of training iterations to run.
iteration here refers to the number of times we call
self.optimizer.step(). used to calculate n_epochs.
:param n_steps: Used to calculate n_epochs. The number of training iterations to run.
Iteration here refers to the number of times optimizer.step() is called.
Ignored if n_epochs is set.
:param n_epochs: The number of full passes through the training dataset. Takes precedence over n_steps.
:param batch_size: The size of the batches to use during training.
:param max_n_eval_batches: The maximum number batches to use for evaluation.
"""
self.model = model
self.optimizer = optimizer
self.criterion = criterion
self.model_meta_data = model_meta_data
self.device = device
self.max_iters: int = kwargs.get("max_iters", 100)
self.n_epochs: Optional[int] = kwargs.get("n_epochs", 10)
self.n_steps: Optional[int] = kwargs.get("n_steps", None)
if self.n_steps is None and not self.n_epochs:
raise Exception("Either `n_steps` or `n_epochs` should be set.")
self.batch_size: int = kwargs.get("batch_size", 64)
self.max_n_eval_batches: Optional[int] = kwargs.get("max_n_eval_batches", None)
self.data_convertor = data_convertor
self.window_size: int = window_size
self.tb_logger = tb_logger
self.test_batch_counter = 0
def fit(self, data_dictionary: Dict[str, pd.DataFrame], splits: List[str]):
"""
@ -72,55 +75,46 @@ class PyTorchModelTrainer(PyTorchTrainerInterface):
backpropagation.
- Updates the model's parameters using an optimizer.
"""
data_loaders_dictionary = self.create_data_loaders_dictionary(data_dictionary, splits)
epochs = self.calc_n_epochs(
n_obs=len(data_dictionary["train_features"]),
batch_size=self.batch_size,
n_iters=self.max_iters
)
self.model.train()
for epoch in range(1, epochs + 1):
for i, batch_data in enumerate(data_loaders_dictionary["train"]):
data_loaders_dictionary = self.create_data_loaders_dictionary(data_dictionary, splits)
n_obs = len(data_dictionary["train_features"])
n_epochs = self.n_epochs or self.calc_n_epochs(n_obs=n_obs)
batch_counter = 0
for _ in range(n_epochs):
for _, batch_data in enumerate(data_loaders_dictionary["train"]):
xb, yb = batch_data
xb.to(self.device)
yb.to(self.device)
xb = xb.to(self.device)
yb = yb.to(self.device)
yb_pred = self.model(xb)
loss = self.criterion(yb_pred, yb)
self.optimizer.zero_grad(set_to_none=True)
loss.backward()
self.optimizer.step()
self.tb_logger.log_scalar("train_loss", loss.item(), i)
self.tb_logger.log_scalar("train_loss", loss.item(), batch_counter)
batch_counter += 1
# evaluation
if "test" in splits:
self.estimate_loss(
data_loaders_dictionary,
self.max_n_eval_batches,
"test"
)
self.estimate_loss(data_loaders_dictionary, "test")
@torch.no_grad()
def estimate_loss(
self,
data_loader_dictionary: Dict[str, DataLoader],
max_n_eval_batches: Optional[int],
split: str,
) -> None:
self.model.eval()
n_batches = 0
for i, batch_data in enumerate(data_loader_dictionary[split]):
if max_n_eval_batches and i > max_n_eval_batches:
n_batches += 1
break
for _, batch_data in enumerate(data_loader_dictionary[split]):
xb, yb = batch_data
xb.to(self.device)
yb.to(self.device)
xb = xb.to(self.device)
yb = yb.to(self.device)
yb_pred = self.model(xb)
loss = self.criterion(yb_pred, yb)
self.tb_logger.log_scalar(f"{split}_loss", loss.item(), i)
self.tb_logger.log_scalar(f"{split}_loss", loss.item(), self.test_batch_counter)
self.test_batch_counter += 1
self.model.train()
@ -148,31 +142,30 @@ class PyTorchModelTrainer(PyTorchTrainerInterface):
return data_loader_dictionary
@staticmethod
def calc_n_epochs(n_obs: int, batch_size: int, n_iters: int) -> int:
def calc_n_epochs(self, n_obs: int) -> int:
"""
Calculates the number of epochs required to reach the maximum number
of iterations specified in the model training parameters.
the motivation here is that `max_iters` is easier to optimize and keep stable,
the motivation here is that `n_steps` is easier to optimize and keep stable,
across different n_obs - the number of data points.
"""
assert isinstance(self.n_steps, int), "Either `n_steps` or `n_epochs` should be set."
n_batches = n_obs // self.batch_size
n_epochs = max(self.n_steps // n_batches, 1)
if n_epochs <= 10:
logger.warning(
f"Setting low n_epochs: {n_epochs}. "
f"Please consider increasing `n_steps` hyper-parameter."
)
n_batches = math.ceil(n_obs // batch_size)
epochs = math.ceil(n_iters // n_batches)
if epochs <= 10:
logger.warning("User set `max_iters` in such a way that the trainer will only perform "
f" {epochs} epochs. Please consider increasing this value accordingly")
if epochs <= 1:
logger.warning("Epochs set to 1. Please review your `max_iters` value")
epochs = 1
return epochs
return n_epochs
def save(self, path: Path):
"""
- Saving any nn.Module state_dict
- Saving model_meta_data, this dict should contain any additional data that the
user needs to store. e.g class_names for classification models.
user needs to store. e.g. class_names for classification models.
"""
torch.save({


@ -192,30 +192,6 @@ def plural(num: float, singular: str, plural: Optional[str] = None) -> str:
return singular if (num == 1 or num == -1) else plural or singular + 's'
def render_template(templatefile: str, arguments: dict = {}) -> str:
from jinja2 import Environment, PackageLoader, select_autoescape
env = Environment(
loader=PackageLoader('freqtrade', 'templates'),
autoescape=select_autoescape(['html', 'xml'])
)
template = env.get_template(templatefile)
return template.render(**arguments)
def render_template_with_fallback(templatefile: str, templatefallbackfile: str,
arguments: dict = {}) -> str:
"""
Use templatefile if possible, otherwise fall back to templatefallbackfile
"""
from jinja2.exceptions import TemplateNotFound
try:
return render_template(templatefile, arguments)
except TemplateNotFound:
return render_template(templatefallbackfile, arguments)
def chunks(lst: List[Any], n: int) -> Iterator[List[Any]]:
"""
Split lst into chunks of the size n.


@ -369,13 +369,14 @@ class Backtesting:
# Cleanup from prior runs
pair_data.drop(HEADERS[5:] + ['buy', 'sell'], axis=1, errors='ignore')
df_analyzed = self.strategy.ft_advise_signals(pair_data, {'pair': pair})
# Trim startup period from analyzed dataframe
df_analyzed = processed[pair] = pair_data = trim_dataframe(
df_analyzed, self.timerange, startup_candles=self.required_startup)
# Update dataprovider cache
self.dataprovider._set_cached_df(
pair, self.timeframe, df_analyzed, self.config['candle_type_def'])
# Trim startup period from analyzed dataframe
df_analyzed = processed[pair] = pair_data = trim_dataframe(
df_analyzed, self.timerange, startup_candles=self.required_startup)
# Create a copy of the dataframe before shifting, that way the entry signal/tag
# remains on the correct candle for callbacks.
df_analyzed = df_analyzed.copy()
@ -1204,7 +1205,8 @@ class Backtesting:
row_index += 1
indexes[pair] = row_index
self.dataprovider._set_dataframe_max_index(row_index)
self.dataprovider._set_dataframe_max_index(self.required_startup + row_index)
self.dataprovider._set_dataframe_max_date(current_time)
current_detail_time: datetime = row[DATE_IDX].to_pydatetime()
trade_dir: Optional[LongShort] = self.check_for_trade_entry(row)
@ -1237,12 +1239,14 @@ class Backtesting:
is_first = True
current_time_det = current_time
for det_row in detail_data[HEADERS].values.tolist():
self.dataprovider._set_dataframe_max_date(current_time_det)
open_trade_count_start = self.backtest_loop(
det_row, pair, current_time_det, end_date,
open_trade_count_start, trade_dir, is_first)
current_time_det += timedelta(minutes=self.timeframe_detail_min)
is_first = False
else:
self.dataprovider._set_dataframe_max_date(current_time)
open_trade_count_start = self.backtest_loop(
row, pair, current_time, end_date,
open_trade_count_start, trade_dir)


@ -1043,6 +1043,7 @@ class LocalTrade:
def select_filled_orders(self, order_side: Optional[str] = None) -> List['Order']:
"""
Finds filled orders for this order side.
Will not return open orders which have already been partially filled.
:param order_side: Side of the order (either 'buy', 'sell', or None)
:return: array of Order objects
"""


@ -42,7 +42,7 @@ class IProtection(LoggingMixin, ABC):
self._stop_duration = (tf_in_min * self._stop_duration_candles)
else:
self._stop_duration_candles = None
self._stop_duration = protection_config.get('stop_duration', 60)
self._stop_duration = int(protection_config.get('stop_duration', 60))
if 'lookback_period_candles' in protection_config:
self._lookback_period_candles = int(protection_config.get('lookback_period_candles', 1))
self._lookback_period = tf_in_min * self._lookback_period_candles

View File

@ -1,7 +1,7 @@
from datetime import date, datetime
from typing import Any, Dict, List, Optional, Union
from pydantic import BaseModel
from pydantic import BaseModel, ConfigDict, RootModel, SerializeAsAny
from freqtrade.constants import DATETIME_PRINT_FORMAT, IntOrInf
from freqtrade.enums import MarginMode, OrderTypeValues, SignalDirection, TradingMode
@ -9,9 +9,9 @@ from freqtrade.types import ValidExchangesType
class ExchangeModePayloadMixin(BaseModel):
trading_mode: Optional[TradingMode]
margin_mode: Optional[MarginMode]
exchange: Optional[str]
trading_mode: Optional[TradingMode] = None
margin_mode: Optional[MarginMode] = None
exchange: Optional[str] = None
class Ping(BaseModel):
@ -43,11 +43,11 @@ class BackgroundTaskStatus(BaseModel):
job_category: str
status: str
running: bool
progress: Optional[float]
progress: Optional[float] = None
class BackgroundTaskResult(BaseModel):
error: Optional[str]
error: Optional[str] = None
status: str
@ -60,9 +60,9 @@ class Balance(BaseModel):
free: float
balance: float
used: float
bot_owned: Optional[float]
bot_owned: Optional[float] = None
est_stake: float
est_stake_bot: Optional[float]
est_stake_bot: Optional[float] = None
stake: str
# Starting with 2.x
side: str
@ -141,7 +141,7 @@ class Profit(BaseModel):
expectancy_ratio: float
max_drawdown: float
max_drawdown_abs: float
trading_volume: Optional[float]
trading_volume: Optional[float] = None
bot_start_timestamp: int
bot_start_date: str
@ -173,50 +173,50 @@ class Daily(BaseModel):
class UnfilledTimeout(BaseModel):
entry: Optional[int]
exit: Optional[int]
unit: Optional[str]
exit_timeout_count: Optional[int]
entry: Optional[int] = None
exit: Optional[int] = None
unit: Optional[str] = None
exit_timeout_count: Optional[int] = None
class OrderTypes(BaseModel):
entry: OrderTypeValues
exit: OrderTypeValues
emergency_exit: Optional[OrderTypeValues]
force_exit: Optional[OrderTypeValues]
force_entry: Optional[OrderTypeValues]
emergency_exit: Optional[OrderTypeValues] = None
force_exit: Optional[OrderTypeValues] = None
force_entry: Optional[OrderTypeValues] = None
stoploss: OrderTypeValues
stoploss_on_exchange: bool
stoploss_on_exchange_interval: Optional[int]
stoploss_on_exchange_interval: Optional[int] = None
class ShowConfig(BaseModel):
version: str
strategy_version: Optional[str]
strategy_version: Optional[str] = None
api_version: float
dry_run: bool
trading_mode: str
short_allowed: bool
stake_currency: str
stake_amount: str
available_capital: Optional[float]
available_capital: Optional[float] = None
stake_currency_decimals: int
max_open_trades: IntOrInf
minimal_roi: Dict[str, Any]
stoploss: Optional[float]
stoploss: Optional[float] = None
stoploss_on_exchange: bool
trailing_stop: Optional[bool]
trailing_stop_positive: Optional[float]
trailing_stop_positive_offset: Optional[float]
trailing_only_offset_is_reached: Optional[bool]
unfilledtimeout: Optional[UnfilledTimeout] # Empty in webserver mode
order_types: Optional[OrderTypes]
use_custom_stoploss: Optional[bool]
timeframe: Optional[str]
trailing_stop: Optional[bool] = None
trailing_stop_positive: Optional[float] = None
trailing_stop_positive_offset: Optional[float] = None
trailing_only_offset_is_reached: Optional[bool] = None
unfilledtimeout: Optional[UnfilledTimeout] = None # Empty in webserver mode
order_types: Optional[OrderTypes] = None
use_custom_stoploss: Optional[bool] = None
timeframe: Optional[str] = None
timeframe_ms: int
timeframe_min: int
exchange: str
strategy: Optional[str]
strategy: Optional[str] = None
force_entry_enable: bool
exit_pricing: Dict[str, Any]
entry_pricing: Dict[str, Any]
@ -231,17 +231,17 @@ class OrderSchema(BaseModel):
pair: str
order_id: str
status: str
remaining: Optional[float]
remaining: Optional[float] = None
amount: float
safe_price: float
cost: float
filled: Optional[float]
filled: Optional[float] = None
ft_order_side: str
order_type: str
is_open: bool
order_timestamp: Optional[int]
order_filled_timestamp: Optional[int]
ft_fee_base: Optional[float]
order_timestamp: Optional[int] = None
order_filled_timestamp: Optional[int] = None
ft_fee_base: Optional[float] = None
class TradeSchema(BaseModel):
@ -255,81 +255,81 @@ class TradeSchema(BaseModel):
amount: float
amount_requested: float
stake_amount: float
max_stake_amount: Optional[float]
max_stake_amount: Optional[float] = None
strategy: str
enter_tag: Optional[str]
enter_tag: Optional[str] = None
timeframe: int
fee_open: Optional[float]
fee_open_cost: Optional[float]
fee_open_currency: Optional[str]
fee_close: Optional[float]
fee_close_cost: Optional[float]
fee_close_currency: Optional[str]
fee_open: Optional[float] = None
fee_open_cost: Optional[float] = None
fee_open_currency: Optional[str] = None
fee_close: Optional[float] = None
fee_close_cost: Optional[float] = None
fee_close_currency: Optional[str] = None
open_date: str
open_timestamp: int
open_rate: float
open_rate_requested: Optional[float]
open_rate_requested: Optional[float] = None
open_trade_value: float
close_date: Optional[str]
close_timestamp: Optional[int]
close_rate: Optional[float]
close_rate_requested: Optional[float]
close_date: Optional[str] = None
close_timestamp: Optional[int] = None
close_rate: Optional[float] = None
close_rate_requested: Optional[float] = None
close_profit: Optional[float]
close_profit_pct: Optional[float]
close_profit_abs: Optional[float]
close_profit: Optional[float] = None
close_profit_pct: Optional[float] = None
close_profit_abs: Optional[float] = None
profit_ratio: Optional[float]
profit_pct: Optional[float]
profit_abs: Optional[float]
profit_fiat: Optional[float]
profit_ratio: Optional[float] = None
profit_pct: Optional[float] = None
profit_abs: Optional[float] = None
profit_fiat: Optional[float] = None
realized_profit: float
realized_profit_ratio: Optional[float]
realized_profit_ratio: Optional[float] = None
exit_reason: Optional[str]
exit_order_status: Optional[str]
exit_reason: Optional[str] = None
exit_order_status: Optional[str] = None
stop_loss_abs: Optional[float]
stop_loss_ratio: Optional[float]
stop_loss_pct: Optional[float]
stoploss_order_id: Optional[str]
stoploss_last_update: Optional[str]
stoploss_last_update_timestamp: Optional[int]
initial_stop_loss_abs: Optional[float]
initial_stop_loss_ratio: Optional[float]
initial_stop_loss_pct: Optional[float]
stop_loss_abs: Optional[float] = None
stop_loss_ratio: Optional[float] = None
stop_loss_pct: Optional[float] = None
stoploss_order_id: Optional[str] = None
stoploss_last_update: Optional[str] = None
stoploss_last_update_timestamp: Optional[int] = None
initial_stop_loss_abs: Optional[float] = None
initial_stop_loss_ratio: Optional[float] = None
initial_stop_loss_pct: Optional[float] = None
min_rate: Optional[float]
max_rate: Optional[float]
open_order_id: Optional[str]
min_rate: Optional[float] = None
max_rate: Optional[float] = None
open_order_id: Optional[str] = None
orders: List[OrderSchema]
leverage: Optional[float]
interest_rate: Optional[float]
liquidation_price: Optional[float]
funding_fees: Optional[float]
trading_mode: Optional[TradingMode]
leverage: Optional[float] = None
interest_rate: Optional[float] = None
liquidation_price: Optional[float] = None
funding_fees: Optional[float] = None
trading_mode: Optional[TradingMode] = None
amount_precision: Optional[float]
price_precision: Optional[float]
precision_mode: Optional[int]
amount_precision: Optional[float] = None
price_precision: Optional[float] = None
precision_mode: Optional[int] = None
class OpenTradeSchema(TradeSchema):
stoploss_current_dist: Optional[float]
stoploss_current_dist_pct: Optional[float]
stoploss_current_dist_ratio: Optional[float]
stoploss_entry_dist: Optional[float]
stoploss_entry_dist_ratio: Optional[float]
stoploss_current_dist: Optional[float] = None
stoploss_current_dist_pct: Optional[float] = None
stoploss_current_dist_ratio: Optional[float] = None
stoploss_entry_dist: Optional[float] = None
stoploss_entry_dist_ratio: Optional[float] = None
current_rate: float
total_profit_abs: float
total_profit_fiat: Optional[float]
total_profit_ratio: Optional[float]
total_profit_fiat: Optional[float] = None
total_profit_ratio: Optional[float] = None
open_order: Optional[str]
open_order: Optional[str] = None
class TradeResponse(BaseModel):
@ -339,8 +339,7 @@ class TradeResponse(BaseModel):
total_trades: int
class ForceEnterResponse(BaseModel):
__root__: Union[TradeSchema, StatusMsg]
ForceEnterResponse = RootModel[Union[TradeSchema, StatusMsg]]
class LockModel(BaseModel):
@ -352,7 +351,7 @@ class LockModel(BaseModel):
lock_timestamp: int
pair: str
side: str
reason: Optional[str]
reason: Optional[str] = None
class Locks(BaseModel):
@ -361,8 +360,8 @@ class Locks(BaseModel):
class DeleteLockRequest(BaseModel):
pair: Optional[str]
lockid: Optional[int]
pair: Optional[str] = None
lockid: Optional[int] = None
class Logs(BaseModel):
@ -373,17 +372,17 @@ class Logs(BaseModel):
class ForceEnterPayload(BaseModel):
pair: str
side: SignalDirection = SignalDirection.LONG
price: Optional[float]
ordertype: Optional[OrderTypeValues]
stakeamount: Optional[float]
entry_tag: Optional[str]
leverage: Optional[float]
price: Optional[float] = None
ordertype: Optional[OrderTypeValues] = None
stakeamount: Optional[float] = None
entry_tag: Optional[str] = None
leverage: Optional[float] = None
class ForceExitPayload(BaseModel):
tradeid: str
ordertype: Optional[OrderTypeValues]
amount: Optional[float]
ordertype: Optional[OrderTypeValues] = None
amount: Optional[float] = None
class BlacklistPayload(BaseModel):
@ -405,7 +404,7 @@ class WhitelistResponse(BaseModel):
class WhitelistEvaluateResponse(BackgroundTaskResult):
result: Optional[WhitelistResponse]
result: Optional[WhitelistResponse] = None
class DeleteTrade(BaseModel):
@ -420,8 +419,7 @@ class PlotConfig_(BaseModel):
subplots: Dict[str, Any]
class PlotConfig(BaseModel):
__root__: Union[PlotConfig_, Dict]
PlotConfig = RootModel[Union[PlotConfig_, Dict]]
class StrategyListResponse(BaseModel):
@ -470,7 +468,7 @@ class PairHistory(BaseModel):
timeframe: str
timeframe_ms: int
columns: List[str]
data: List[Any]
data: SerializeAsAny[List[Any]]
length: int
buy_signals: int
sell_signals: int
@ -484,11 +482,11 @@ class PairHistory(BaseModel):
data_start: str
data_stop: str
data_stop_ts: int
class Config:
json_encoders = {
# TODO[pydantic]: The following keys were removed: `json_encoders`.
# Check https://docs.pydantic.dev/dev-v2/migration/#changes-to-config for more information.
model_config = ConfigDict(json_encoders={
datetime: lambda v: v.strftime(DATETIME_PRINT_FORMAT),
}
})
class BacktestFreqAIInputs(BaseModel):
@ -497,16 +495,16 @@ class BacktestFreqAIInputs(BaseModel):
class BacktestRequest(BaseModel):
strategy: str
timeframe: Optional[str]
timeframe_detail: Optional[str]
timerange: Optional[str]
max_open_trades: Optional[IntOrInf]
stake_amount: Optional[str]
timeframe: Optional[str] = None
timeframe_detail: Optional[str] = None
timerange: Optional[str] = None
max_open_trades: Optional[IntOrInf] = None
stake_amount: Optional[Union[str, float]] = None
enable_protections: bool
dry_run_wallet: Optional[float]
backtest_cache: Optional[str]
freqaimodel: Optional[str]
freqai: Optional[BacktestFreqAIInputs]
dry_run_wallet: Optional[float] = None
backtest_cache: Optional[str] = None
freqaimodel: Optional[str] = None
freqai: Optional[BacktestFreqAIInputs] = None
class BacktestResponse(BaseModel):
@ -515,9 +513,9 @@ class BacktestResponse(BaseModel):
status_msg: str
step: str
progress: float
trade_count: Optional[float]
trade_count: Optional[float] = None
# TODO: Properly type backtestresult...
backtest_result: Optional[Dict[str, Any]]
backtest_result: Optional[Dict[str, Any]] = None
# TODO: This is a copy of BacktestHistoryEntryType
@ -540,5 +538,5 @@ class SysInfo(BaseModel):
class Health(BaseModel):
last_process: Optional[datetime]
last_process_ts: Optional[int]
last_process: Optional[datetime] = None
last_process_ts: Optional[int] = None
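
The sweep of '= None' additions above is the pydantic v1 -> v2 migration: in v2, Optional[X] no longer implies a default, __root__ models become RootModel, and the inner 'class Config' becomes model_config = ConfigDict(...). A minimal standalone sketch of all three patterns (simplified stand-ins, not the schemas themselves):

from typing import Optional, Union
from pydantic import BaseModel, ConfigDict, RootModel, ValidationError

class HealthSketch(BaseModel):
    model_config = ConfigDict(extra='forbid')  # replaces: class Config: extra = 'forbid'
    last_process: Optional[str] = None         # v2 requires the explicit default

HealthSketch()  # ok - last_process defaults to None

class HealthV1Style(BaseModel):
    last_process: Optional[str]  # no default: in v2 this field is *required*

try:
    HealthV1Style()
except ValidationError:
    print("Optional without '= None' is a required field in pydantic v2")

# class PlotConfig(BaseModel): __root__: Union[...]  becomes:
PlotConfigSketch = RootModel[Union[dict, list]]
print(PlotConfigSketch({'main_plot': {}}).root)  # payload now lives on .root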

View File

@ -175,9 +175,9 @@ def force_entry(payload: ForceEnterPayload, rpc: RPC = Depends(get_rpc)):
leverage=payload.leverage)
if trade:
return ForceEnterResponse.parse_obj(trade.to_json())
return ForceEnterResponse.model_validate(trade.to_json())
else:
return ForceEnterResponse.parse_obj(
return ForceEnterResponse.model_validate(
{"status": f"Error entering {payload.side} trade for pair {payload.pair}."})
@ -282,14 +282,14 @@ def plot_config(strategy: Optional[str] = None, config=Depends(get_config),
if not strategy:
if not rpc:
raise RPCException("Strategy is mandatory in webserver mode.")
return PlotConfig.parse_obj(rpc._rpc_plot_config())
return PlotConfig.model_validate(rpc._rpc_plot_config())
else:
config1 = deepcopy(config)
config1.update({
'strategy': strategy
})
try:
return PlotConfig.parse_obj(RPC._rpc_plot_config_with_strategy(config1))
return PlotConfig.model_validate(RPC._rpc_plot_config_with_strategy(config1))
except Exception as e:
raise HTTPException(status_code=502, detail=str(e))
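
parse_obj and .dict() are the v1 spellings; the v2 equivalents used above are model_validate and model_dump. Standalone sketch:

from pydantic import BaseModel

class StatusMsg(BaseModel):
    status: str

msg = StatusMsg.model_validate({'status': 'ok'})  # v1: StatusMsg.parse_obj(...)
assert msg.model_dump(exclude_none=True) == {'status': 'ok'}  # v1: msg.dict(exclude_none=True)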

View File

@ -65,7 +65,7 @@ async def _process_consumer_request(
"""
# Validate the request, making sure it matches the schema
try:
websocket_request = WSRequestSchema.parse_obj(request)
websocket_request = WSRequestSchema.model_validate(request)
except ValidationError as e:
logger.error(f"Invalid request from {channel}: {e}")
return
@ -94,7 +94,7 @@ async def _process_consumer_request(
# Format response
response = WSWhitelistMessage(data=whitelist)
await channel.send(response.dict(exclude_none=True))
await channel.send(response.model_dump(exclude_none=True))
elif type_ == RPCRequestType.ANALYZED_DF:
# Limit the number of candles per dataframe to 'limit' or 1500
@ -105,7 +105,7 @@ async def _process_consumer_request(
for message in rpc._ws_request_analyzed_df(limit, pair):
# Format response
response = WSAnalyzedDFMessage(data=message)
await channel.send(response.dict(exclude_none=True))
await channel.send(response.model_dump(exclude_none=True))
@router.websocket("/message/ws")

View File

@ -2,15 +2,14 @@ from datetime import datetime
from typing import Any, Dict, List, Optional, TypedDict
from pandas import DataFrame
from pydantic import BaseModel
from pydantic import BaseModel, ConfigDict
from freqtrade.constants import PairWithTimeframe
from freqtrade.enums.rpcmessagetype import RPCMessageType, RPCRequestType
class BaseArbitraryModel(BaseModel):
class Config:
arbitrary_types_allowed = True
model_config = ConfigDict(arbitrary_types_allowed=True)
class WSRequestSchema(BaseArbitraryModel):
@ -27,9 +26,7 @@ class WSMessageSchemaType(TypedDict):
class WSMessageSchema(BaseArbitraryModel):
type: RPCMessageType
data: Optional[Any] = None
class Config:
extra = 'allow'
model_config = ConfigDict(extra='allow')
# ------------------------------ REQUEST SCHEMAS ----------------------------

View File

@ -41,7 +41,7 @@ logger = logging.getLogger(__name__)
def schema_to_dict(schema: Union[WSMessageSchema, WSRequestSchema]):
return schema.dict(exclude_none=True)
return schema.model_dump(exclude_none=True)
class ExternalMessageConsumer:
@ -322,7 +322,7 @@ class ExternalMessageConsumer:
producer_name = producer.get('name', 'default')
try:
producer_message = WSMessageSchema.parse_obj(message)
producer_message = WSMessageSchema.model_validate(message)
except ValidationError as e:
logger.error(f"Invalid message from `{producer_name}`: {e}")
return
@ -344,7 +344,7 @@ class ExternalMessageConsumer:
def _consume_whitelist_message(self, producer_name: str, message: WSMessageSchema):
try:
# Validate the message
whitelist_message = WSWhitelistMessage.parse_obj(message)
whitelist_message = WSWhitelistMessage.model_validate(message.model_dump())
except ValidationError as e:
logger.error(f"Invalid message from `{producer_name}`: {e}")
return
@ -356,7 +356,7 @@ class ExternalMessageConsumer:
def _consume_analyzed_df_message(self, producer_name: str, message: WSMessageSchema):
try:
df_message = WSAnalyzedDFMessage.parse_obj(message)
df_message = WSAnalyzedDFMessage.model_validate(message.model_dump())
except ValidationError as e:
logger.error(f"Invalid message from `{producer_name}`: {e}")
return
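
Note the extra model_dump() hop above: pydantic v2's model_validate rejects instances of a different model class (unless from_attributes is enabled), so the generic WSMessageSchema envelope is dumped to a dict before being re-validated into the specific message type. Standalone illustration with simplified stand-in models:

from pydantic import BaseModel, ConfigDict, ValidationError

class Envelope(BaseModel):
    model_config = ConfigDict(extra='allow')  # generic message, extra keys allowed
    type: str

class WhitelistSketch(BaseModel):
    type: str
    data: list

msg = Envelope(type='whitelist', data=['XRP/USDT:USDT'])
try:
    WhitelistSketch.model_validate(msg)  # foreign model instance: rejected in v2
except ValidationError:
    pass
strict = WhitelistSketch.model_validate(msg.model_dump())  # dict round-trip: ok
assert strict.data == ['XRP/USDT:USDT']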

View File

@ -26,6 +26,7 @@ coingecko_mapping = {
'sol': 'solana',
'usdt': 'tether',
'busd': 'binance-usd',
'tusd': 'true-usd',
}

View File

@ -78,19 +78,7 @@ class {{ strategy }}(IStrategy):
buy_rsi = IntParameter(10, 40, default=30, space="buy")
sell_rsi = IntParameter(60, 90, default=70, space="sell")
# Optional order type mapping.
order_types = {
'entry': 'limit',
'exit': 'limit',
'stoploss': 'market',
'stoploss_on_exchange': False
}
# Optional order time in force.
order_time_in_force = {
'entry': 'GTC',
'exit': 'GTC'
}
{{ attributes | indent(4) }}
{{ plot_config | indent(4) }}
def informative_pairs(self):

View File

@ -0,0 +1,13 @@
# Optional order type mapping.
order_types = {
'entry': 'limit',
'exit': 'limit',
'stoploss': 'market',
'stoploss_on_exchange': False
}
# Optional order time in force.
order_time_in_force = {
'entry': 'GTC',
'exit': 'GTC'
}
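
The base strategy template now pulls these attributes in via {{ attributes | indent(4) }}. Jinja's indent filter indents every line after the first, so the placeholder itself supplies the first line's indentation. A standalone sketch of the mechanism (hypothetical template names):

from jinja2 import DictLoader, Environment

env = Environment(loader=DictLoader({
    'attrs.j2': "order_types = {\n    'entry': 'limit',\n}",
    'base.j2': "class MyStrategy:\n    {{ attributes | indent(4) }}",
}))
attributes = env.get_template('attrs.j2').render()
print(env.get_template('base.j2').render(attributes=attributes))
# class MyStrategy:
#     order_types = {
#         'entry': 'limit',
#     }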

View File

@ -2,6 +2,7 @@ from freqtrade.util.datetime_helpers import (dt_floor_day, dt_from_ts, dt_humani
dt_utc, format_ms_time, shorten_date)
from freqtrade.util.ft_precise import FtPrecise
from freqtrade.util.periodic_cache import PeriodicCache
from freqtrade.util.template_renderer import render_template, render_template_with_fallback # noqa
__all__ = [

View File

@ -0,0 +1,27 @@
"""
Jinja2 rendering utils, used to generate new strategies and configurations.
"""
def render_template(templatefile: str, arguments: dict = {}) -> str:
from jinja2 import Environment, PackageLoader, select_autoescape
env = Environment(
loader=PackageLoader('freqtrade', 'templates'),
autoescape=select_autoescape(['html', 'xml'])
)
template = env.get_template(templatefile)
return template.render(**arguments)
def render_template_with_fallback(templatefile: str, templatefallbackfile: str,
arguments: dict = {}) -> str:
"""
Use templatefile if possible, otherwise fall back to templatefallbackfile
"""
from jinja2.exceptions import TemplateNotFound
try:
return render_template(templatefile, arguments)
except TemplateNotFound:
return render_template(templatefallbackfile, arguments)
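
Usage sketch for the fallback helper, assuming this freqtrade version is installed; the first template name is hypothetical, the second is the subtemplate added above:

from freqtrade.util import render_template_with_fallback

attributes = render_template_with_fallback(
    'subtemplates/strategy_attributes_custom.j2',  # hypothetical, likely missing
    'subtemplates/strategy_attributes.j2',         # bundled default from this commit
)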

View File

@ -63,7 +63,7 @@ ignore = ["freqtrade/vendor/**"]
[tool.ruff]
line-length = 100
extend-exclude = [".env"]
extend-exclude = [".env", ".venv"]
target-version = "py38"
extend-select = [
"C90", # mccabe

View File

@ -7,8 +7,8 @@
-r docs/requirements-docs.txt
coveralls==3.3.1
ruff==0.0.284
mypy==1.5.0
ruff==0.0.286
mypy==1.5.1
pre-commit==3.3.3
pytest==7.4.0
pytest-asyncio==0.21.1
@ -17,10 +17,10 @@ pytest-mock==3.11.1
pytest-random-order==1.1.0
isort==5.12.0
# For datetime mocking
time-machine==2.11.0
time-machine==2.12.0
# Convert jupyter notebooks to markdown documents
nbconvert==7.7.3
nbconvert==7.7.4
# mypy types
types-cachetools==5.3.0.6

View File

@ -4,8 +4,8 @@
# Required for freqai-rl
torch==2.0.1
# Until these branches are released we can use this
gymnasium==0.28.1
stable_baselines3==2.0.0
gymnasium==0.29.1
stable_baselines3==2.1.0
sb3_contrib>=2.0.0a9
# Progress bar for stable-baselines3 and sb3-contrib
tqdm==4.66.1

View File

@ -2,7 +2,7 @@
-r requirements.txt
# Required for hyperopt
scipy==1.11.1; python_version >= '3.9'
scipy==1.11.2; python_version >= '3.9'
scipy==1.10.1; python_version < '3.9'
scikit-learn==1.1.3
scikit-optimize==0.9.0

View File

@ -1,4 +1,4 @@
# Include all requirements to run the bot.
-r requirements.txt
plotly==5.16.0
plotly==5.16.1

View File

@ -3,11 +3,11 @@ numpy==1.24.3; python_version <= '3.8'
pandas==2.0.3
pandas-ta==0.3.14b
ccxt==4.0.59
ccxt==4.0.76
cryptography==41.0.3; platform_machine != 'armv7l'
cryptography==40.0.1; platform_machine == 'armv7l'
aiohttp==3.8.5
SQLAlchemy==2.0.19
SQLAlchemy==2.0.20
python-telegram-bot==20.4
# can't be hard-pinned due to telegram-bot pinning httpx with ~
httpx>=0.24.1
@ -16,7 +16,7 @@ cachetools==5.3.1
requests==2.31.0
urllib3==2.0.4
jsonschema==4.19.0
TA-Lib==0.4.27
TA-Lib==0.4.28
technical==1.4.0
tabulate==0.9.0
pycoingecko==3.1.0
@ -25,7 +25,7 @@ tables==3.8.0
blosc==1.11.1
joblib==1.3.2
rich==13.5.2
pyarrow==12.0.1; platform_machine != 'armv7l'
pyarrow==13.0.0; platform_machine != 'armv7l'
# find first, C search in arrays
py_find_1st==1.1.5
@ -33,14 +33,14 @@ py_find_1st==1.1.5
# Load ticker files 30% faster
python-rapidjson==1.10
# Properly format api responses
orjson==3.9.4
orjson==3.9.5
# Notify systemd
sdnotify==0.3.2
# API Server
fastapi==0.101.0
pydantic==1.10.11
fastapi==0.103.0
pydantic==2.3.0
uvicorn==0.23.2
pyjwt==2.8.0
aiofiles==23.2.1

View File

@ -97,7 +97,7 @@ setup(
'rich',
'pyarrow; platform_machine != "armv7l"',
'fastapi',
'pydantic>=1.8.0,<2.0',
'pydantic>=2.2.0',
'uvicorn',
'psutil',
'pyjwt',

View File

@ -11,7 +11,7 @@ function check_installed_pip() {
${PYTHON} -m pip > /dev/null
if [ $? -ne 0 ]; then
echo_block "Installing Pip for ${PYTHON}"
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
curl https://bootstrap.pypa.io/get-pip.py -s -o get-pip.py
${PYTHON} get-pip.py
rm get-pip.py
fi
@ -41,12 +41,12 @@ function check_installed_python() {
}
function updateenv() {
echo_block "Updating your virtual env"
if [ ! -f .env/bin/activate ]; then
echo_block "Updating your virtual environment"
if [ ! -f .venv/bin/activate ]; then
echo "Something went wrong, no virtual environment found."
exit 1
fi
source .env/bin/activate
source .venv/bin/activate
SYS_ARCH=$(uname -m)
echo "pip install in-progress. Please wait..."
${PYTHON} -m pip install --upgrade pip wheel setuptools
@ -120,7 +120,7 @@ function updateenv() {
# Install ta-lib
function install_talib() {
if [ -f /usr/local/lib/libta_lib.a ]; then
if [ -f /usr/local/lib/libta_lib.a ] || [ -f /usr/local/lib/libta_lib.so ] || [ -f /usr/lib/libta_lib.so ]; then
echo "ta-lib already installed, skipping"
return
fi
@ -186,7 +186,14 @@ function install_redhat() {
# Upgrade the bot
function update() {
git pull
if [ -f .env/bin/activate ]; then
# Old environment found - updating to new environment.
recreate_environments
fi
updateenv
echo "Update completed."
echo_block "Don't forget to activate your virtual enviorment with 'source .venv/bin/activate'!"
}
function check_git_changes() {
@ -199,6 +206,27 @@ function check_git_changes() {
fi
}
function recreate_environments() {
if [ -d ".env" ]; then
# Remove old virtual env
echo "- Deleting your previous virtual env"
echo "Warning: Your new environment will be at .venv!"
rm -rf .env
fi
if [ -d ".venv" ]; then
echo "- Deleting your previous virtual env"
rm -rf .venv
fi
echo
${PYTHON} -m venv .venv
if [ $? -ne 0 ]; then
echo "Could not create virtual environment. Leaving now"
exit 1
fi
}
# Reset Develop or Stable branch
function reset() {
echo_block "Resetting branch and virtual env"
@ -225,22 +253,13 @@ function reset() {
else
echo "Reset ignored because you are not on 'stable' or 'develop'."
fi
recreate_environments
if [ -d ".env" ]; then
echo "- Deleting your previous virtual env"
rm -rf .env
fi
echo
${PYTHON} -m venv .env
if [ $? -ne 0 ]; then
echo "Could not create virtual environment. Leaving now"
exit 1
fi
updateenv
}
function config() {
echo_block "Please use 'freqtrade new-config -c config.json' to generate a new configuration file."
echo_block "Please use 'freqtrade new-config -c user_data/config.json' to generate a new configuration file."
}
function install() {
@ -266,9 +285,9 @@ function install() {
reset
config
echo_block "Run the bot !"
echo "You can now use the bot by executing 'source .env/bin/activate; freqtrade <subcommand>'."
echo "You can see the list of available bot sub-commands by executing 'source .env/bin/activate; freqtrade --help'."
echo "You verify that freqtrade is installed successfully by running 'source .env/bin/activate; freqtrade --version'."
echo "You can now use the bot by executing 'source .venv/bin/activate; freqtrade <subcommand>'."
echo "You can see the list of available bot sub-commands by executing 'source .venv/bin/activate; freqtrade --help'."
echo "You verify that freqtrade is installed successfully by running 'source .venv/bin/activate; freqtrade --version'."
}
function plot() {

View File

@ -14,7 +14,7 @@ import pytest
from freqtrade import constants
from freqtrade.commands import Arguments
from freqtrade.data.converter import ohlcv_to_dataframe
from freqtrade.data.converter import ohlcv_to_dataframe, trades_list_to_df
from freqtrade.edge import PairInfo
from freqtrade.enums import CandleType, MarginMode, RunMode, SignalDirection, TradingMode
from freqtrade.exchange import Exchange
@ -2346,7 +2346,15 @@ def trades_history():
[1565798399629, '1261813bb30', None, 'buy', 0.019627, 0.244, 0.004788987999999999],
[1565798399752, '1261813cc31', None, 'sell', 0.019626, 0.011, 0.00021588599999999999],
[1565798399862, '126181cc332', None, 'sell', 0.019626, 0.011, 0.00021588599999999999],
[1565798399872, '1261aa81333', None, 'sell', 0.019626, 0.011, 0.00021588599999999999]]
[1565798399862, '126181cc333', None, 'sell', 0.019626, 0.012, 0.00021588599999999999],
[1565798399872, '1261aa81334', None, 'sell', 0.019626, 0.011, 0.00021588599999999999]]
@pytest.fixture(scope="function")
def trades_history_df(trades_history):
trades = trades_list_to_df(trades_history)
trades['date'] = pd.to_datetime(trades['timestamp'], unit='ms', utc=True)
return trades
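
Trade history moves from nested lists to DataFrames throughout the tests; the fixture above shows the conversion. A standalone sketch of the resulting frame layout (column names as used by the fixtures):

import pandas as pd

cols = ['timestamp', 'id', 'type', 'side', 'price', 'amount', 'cost']
trades = pd.DataFrame(
    [[1565798399629, '1261813bb30', None, 'buy', 0.019627, 0.244, 0.004788988]],
    columns=cols)
trades['date'] = pd.to_datetime(trades['timestamp'], unit='ms', utc=True)
print(trades.iloc[0]['date'])  # 2019-08-14 15:59:59.629000+00:00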
@pytest.fixture(scope="function")

View File

@ -22,15 +22,15 @@ def mock_order_1(is_short: bool):
return {
'id': f'1234_{direc(is_short)}',
'symbol': 'ETH/BTC',
'status': 'closed',
'status': 'open',
'side': entry_side(is_short),
'type': 'limit',
'price': 0.123,
'average': 0.123,
'amount': 123.0,
'filled': 123.0,
'filled': 50.0,
'cost': 15.129,
'remaining': 0.0,
'remaining': 123.0 - 50.0,
}
@ -103,7 +103,6 @@ def mock_trade_2(fee, is_short: bool):
close_profit_abs=-0.005584127 if is_short else 0.000584127,
exchange='binance',
is_open=False,
open_order_id=f'dry_run_sell_{direc(is_short)}_12345',
strategy='StrategyTestV3',
timeframe=5,
enter_tag='TEST1',
@ -412,7 +411,7 @@ def short_trade(fee):
# close_profit_abs=-0.6925113200000013,
exchange='binance',
is_open=True,
open_order_id='dry_run_exit_short_12345',
open_order_id=None,
strategy='DefaultStrategy',
timeframe=5,
exit_reason='sell_signal',

View File

@ -36,13 +36,13 @@ def mock_order_usdt_1_exit(is_short: bool):
return {
'id': f'prod_exit_1_{direc(is_short)}',
'symbol': 'LTC/USDT',
'status': 'closed',
'status': 'open',
'side': exit_side(is_short),
'type': 'limit',
'price': 8.0,
'amount': 2.0,
'filled': 2.0,
'remaining': 0.0,
'filled': 0.0,
'remaining': 2.0,
}
@ -96,13 +96,13 @@ def mock_order_usdt_2_exit(is_short: bool):
return {
'id': f'12366_{direc(is_short)}',
'symbol': 'NEO/USDT',
'status': 'closed',
'status': 'open',
'side': exit_side(is_short),
'type': 'limit',
'price': 2.05,
'amount': 100.0,
'filled': 100.0,
'remaining': 0.0,
'filled': 0.0,
'remaining': 100.0,
}
@ -378,7 +378,7 @@ def mock_trade_usdt_7(fee, is_short: bool):
open_date=datetime.now(tz=timezone.utc) - timedelta(minutes=17),
open_rate=2.0,
exchange='binance',
open_order_id=f'1234_{direc(is_short)}',
open_order_id=None,
strategy='StrategyTestV2',
timeframe=5,
is_short=is_short,

View File

@ -4,13 +4,14 @@ from pathlib import Path
from shutil import copyfile
import numpy as np
import pandas as pd
import pytest
from freqtrade.configuration.timerange import TimeRange
from freqtrade.data.converter import (convert_ohlcv_format, convert_trades_format,
ohlcv_fill_up_missing_data, ohlcv_to_dataframe,
reduce_dataframe_footprint, trades_dict_to_list,
trades_remove_duplicates, trades_to_ohlcv, trim_dataframe)
reduce_dataframe_footprint, trades_df_remove_duplicates,
trades_dict_to_list, trades_to_ohlcv, trim_dataframe)
from freqtrade.data.history import (get_timerange, load_data, load_pair_history,
validate_backtest_data)
from freqtrade.data.history.idatahandler import IDataHandler
@ -34,26 +35,21 @@ def test_ohlcv_to_dataframe(ohlcv_history_list, caplog):
assert log_has('Converting candle (OHLCV) data to dataframe for pair UNITTEST/BTC.', caplog)
def test_trades_to_ohlcv(ohlcv_history_list, caplog):
def test_trades_to_ohlcv(trades_history_df, caplog):
caplog.set_level(logging.DEBUG)
with pytest.raises(ValueError, match="Trade-list empty."):
trades_to_ohlcv([], '1m')
trades_to_ohlcv(pd.DataFrame(columns=trades_history_df.columns), '1m')
trades = [
[1570752011620, "13519807", None, "sell", 0.00141342, 23.0, 0.03250866],
[1570752011620, "13519808", None, "sell", 0.00141266, 54.0, 0.07628364],
[1570752017964, "13519809", None, "sell", 0.00141266, 8.0, 0.01130128]]
df = trades_to_ohlcv(trades, '1m')
df = trades_to_ohlcv(trades_history_df, '1m')
assert not df.empty
assert len(df) == 1
assert 'open' in df.columns
assert 'high' in df.columns
assert 'low' in df.columns
assert 'close' in df.columns
assert df.loc[:, 'high'][0] == 0.00141342
assert df.loc[:, 'low'][0] == 0.00141266
assert df.loc[:, 'high'][0] == 0.019627
assert df.loc[:, 'low'][0] == 0.019626
def test_ohlcv_fill_up_missing_data(testdatadir, caplog):
@ -302,13 +298,13 @@ def test_trim_dataframe(testdatadir) -> None:
assert all(data_modify.iloc[0] == data.iloc[25])
def test_trades_remove_duplicates(trades_history):
trades_history1 = trades_history * 3
assert len(trades_history1) == len(trades_history) * 3
res = trades_remove_duplicates(trades_history1)
assert len(res) == len(trades_history)
for i, t in enumerate(res):
assert t == trades_history[i]
def test_trades_df_remove_duplicates(trades_history_df):
trades_history1 = pd.concat([trades_history_df, trades_history_df, trades_history_df]
).reset_index(drop=True)
assert len(trades_history1) == len(trades_history_df) * 3
res = trades_df_remove_duplicates(trades_history1)
assert len(res) == len(trades_history_df)
assert res.equals(trades_history_df)
def test_trades_dict_to_list(fetch_trades_result):

View File

@ -6,7 +6,8 @@ from pathlib import Path
from unittest.mock import MagicMock
import pytest
from pandas import DataFrame
from pandas import DataFrame, Timestamp
from pandas.testing import assert_frame_equal
from freqtrade.configuration import TimeRange
from freqtrade.constants import AVAILABLE_DATAHANDLERS
@ -117,12 +118,6 @@ def test_datahandler_ohlcv_get_available_data(testdatadir):
assert set(paircombs) == {('UNITTEST/BTC', '5m', CandleType.SPOT)}
def test_jsondatahandler_trades_get_pairs(testdatadir):
pairs = JsonGzDataHandler.trades_get_pairs(testdatadir)
# Convert to set to avoid failures due to sorting
assert set(pairs) == {'XRP/ETH', 'XRP/OLD'}
def test_jsondatahandler_ohlcv_purge(mocker, testdatadir):
mocker.patch.object(Path, "exists", MagicMock(return_value=False))
unlinkmock = mocker.patch.object(Path, "unlink", MagicMock())
@ -246,8 +241,10 @@ def test_datahandler__check_empty_df(testdatadir, caplog):
assert log_has_re(expected_text, caplog)
@pytest.mark.parametrize('datahandler', ['parquet'])
# @pytest.mark.parametrize('datahandler', [])
@pytest.mark.skip("All datahandlers currently support trades data.")
def test_datahandler_trades_not_supported(datahandler, testdatadir, ):
# Currently disabled. Re-enable should a new provider not support trades data.
dh = get_datahandler(testdatadir, datahandler)
with pytest.raises(NotImplementedError):
dh.trades_load('UNITTEST/ETH')
@ -266,18 +263,6 @@ def test_jsondatahandler_trades_load(testdatadir, caplog):
assert log_has(logmsg, caplog)
def test_jsondatahandler_trades_purge(mocker, testdatadir):
mocker.patch.object(Path, "exists", MagicMock(return_value=False))
unlinkmock = mocker.patch.object(Path, "unlink", MagicMock())
dh = JsonGzDataHandler(testdatadir)
assert not dh.trades_purge('UNITTEST/NONEXIST')
assert unlinkmock.call_count == 0
mocker.patch.object(Path, "exists", MagicMock(return_value=True))
assert dh.trades_purge('UNITTEST/NONEXIST')
assert unlinkmock.call_count == 1
@pytest.mark.parametrize('datahandler', AVAILABLE_DATAHANDLERS)
def test_datahandler_ohlcv_append(datahandler, testdatadir, ):
dh = get_datahandler(testdatadir, datahandler)
@ -291,79 +276,48 @@ def test_datahandler_ohlcv_append(datahandler, testdatadir, ):
def test_datahandler_trades_append(datahandler, testdatadir):
dh = get_datahandler(testdatadir, datahandler)
with pytest.raises(NotImplementedError):
dh.trades_append('UNITTEST/ETH', [])
dh.trades_append('UNITTEST/ETH', DataFrame())
def test_hdf5datahandler_trades_get_pairs(testdatadir):
pairs = HDF5DataHandler.trades_get_pairs(testdatadir)
@pytest.mark.parametrize('datahandler,expected', [
('jsongz', {'XRP/ETH', 'XRP/OLD'}),
('hdf5', {'XRP/ETH'}),
('feather', {'XRP/ETH'}),
('parquet', {'XRP/ETH'}),
])
def test_datahandler_trades_get_pairs(testdatadir, datahandler, expected):
pairs = get_datahandlerclass(datahandler).trades_get_pairs(testdatadir)
# Convert to set to avoid failures due to sorting
assert set(pairs) == {'XRP/ETH'}
assert set(pairs) == expected
def test_hdf5datahandler_trades_load(testdatadir):
dh = get_datahandler(testdatadir, 'hdf5')
trades = dh.trades_load('XRP/ETH')
assert isinstance(trades, list)
assert isinstance(trades, DataFrame)
trades1 = dh.trades_load('UNITTEST/NONEXIST')
assert trades1 == []
assert isinstance(trades1, DataFrame)
assert trades1.empty
# data goes from 2019-10-11 - 2019-10-13
timerange = TimeRange.parse_timerange('20191011-20191012')
trades2 = dh._trades_load('XRP/ETH', timerange)
assert len(trades) > len(trades2)
# Check that ID is None (if it's NaN, it's wrong)
assert trades2[0][2] is None
assert trades2.iloc[0]['type'] is None
# unfiltered load has trades before starttime
assert len([t for t in trades if t[0] < timerange.startts * 1000]) >= 0
assert len(trades.loc[trades['timestamp'] < timerange.startts * 1000]) >= 0
# filtered list does not have trades before starttime
assert len([t for t in trades2 if t[0] < timerange.startts * 1000]) == 0
assert len(trades2.loc[trades2['timestamp'] < timerange.startts * 1000]) == 0
# unfiltered load has trades after endtime
assert len([t for t in trades if t[0] > timerange.stopts * 1000]) > 0
assert len(trades.loc[trades['timestamp'] > timerange.stopts * 1000]) >= 0
# filtered list does not have trades after endtime
assert len([t for t in trades2 if t[0] > timerange.stopts * 1000]) == 0
def test_hdf5datahandler_trades_store(testdatadir, tmpdir):
tmpdir1 = Path(tmpdir)
dh = get_datahandler(testdatadir, 'hdf5')
trades = dh.trades_load('XRP/ETH')
dh1 = get_datahandler(tmpdir1, 'hdf5')
dh1.trades_store('XRP/NEW', trades)
file = tmpdir1 / 'XRP_NEW-trades.h5'
assert file.is_file()
# Load trades back
trades_new = dh1.trades_load('XRP/NEW')
assert len(trades_new) == len(trades)
assert trades[0][0] == trades_new[0][0]
assert trades[0][1] == trades_new[0][1]
# assert trades[0][2] == trades_new[0][2] # This is nan - so comparison does not make sense
assert trades[0][3] == trades_new[0][3]
assert trades[0][4] == trades_new[0][4]
assert trades[0][5] == trades_new[0][5]
assert trades[0][6] == trades_new[0][6]
assert trades[-1][0] == trades_new[-1][0]
assert trades[-1][1] == trades_new[-1][1]
# assert trades[-1][2] == trades_new[-1][2] # This is nan - so comparison does not make sense
assert trades[-1][3] == trades_new[-1][3]
assert trades[-1][4] == trades_new[-1][4]
assert trades[-1][5] == trades_new[-1][5]
assert trades[-1][6] == trades_new[-1][6]
def test_hdf5datahandler_trades_purge(mocker, testdatadir):
mocker.patch.object(Path, "exists", MagicMock(return_value=False))
unlinkmock = mocker.patch.object(Path, "unlink", MagicMock())
dh = get_datahandler(testdatadir, 'hdf5')
assert not dh.trades_purge('UNITTEST/NONEXIST')
assert unlinkmock.call_count == 0
mocker.patch.object(Path, "exists", MagicMock(return_value=True))
assert dh.trades_purge('UNITTEST/NONEXIST')
assert unlinkmock.call_count == 1
assert len(trades2.loc[trades2['timestamp'] > timerange.stopts * 1000]) == 0
# assert len([t for t in trades2 if t[0] > timerange.stopts * 1000]) == 0
@pytest.mark.parametrize('pair,timeframe,candle_type,candle_append,startdt,enddt', [
@ -490,50 +444,42 @@ def test_hdf5datahandler_ohlcv_purge(mocker, testdatadir):
assert unlinkmock.call_count == 2
def test_featherdatahandler_trades_load(testdatadir):
dh = get_datahandler(testdatadir, 'feather')
@pytest.mark.parametrize('datahandler', ['jsongz', 'hdf5', 'feather', 'parquet'])
def test_datahandler_trades_load(testdatadir, datahandler):
dh = get_datahandler(testdatadir, datahandler)
trades = dh.trades_load('XRP/ETH')
assert isinstance(trades, list)
assert trades[0][0] == 1570752011620
assert trades[-1][-1] == 0.1986231
assert isinstance(trades, DataFrame)
assert trades.iloc[0]['timestamp'] == 1570752011620
assert trades.iloc[0]['date'] == Timestamp('2019-10-11 00:00:11.620000+0000')
assert trades.iloc[-1]['cost'] == 0.1986231
trades1 = dh.trades_load('UNITTEST/NONEXIST')
assert trades1 == []
assert isinstance(trades, DataFrame)
assert trades1.empty
def test_featherdatahandler_trades_store(testdatadir, tmpdir):
@pytest.mark.parametrize('datahandler', ['jsongz', 'hdf5', 'feather', 'parquet'])
def test_datahandler_trades_store(testdatadir, tmpdir, datahandler):
tmpdir1 = Path(tmpdir)
dh = get_datahandler(testdatadir, 'feather')
dh = get_datahandler(testdatadir, datahandler)
trades = dh.trades_load('XRP/ETH')
dh1 = get_datahandler(tmpdir1, 'feather')
dh1 = get_datahandler(tmpdir1, datahandler)
dh1.trades_store('XRP/NEW', trades)
file = tmpdir1 / 'XRP_NEW-trades.feather'
file = tmpdir1 / f'XRP_NEW-trades.{dh1._get_file_extension()}'
assert file.is_file()
# Load trades back
trades_new = dh1.trades_load('XRP/NEW')
assert_frame_equal(trades, trades_new, check_exact=True)
assert len(trades_new) == len(trades)
assert trades[0][0] == trades_new[0][0]
assert trades[0][1] == trades_new[0][1]
# assert trades[0][2] == trades_new[0][2] # This is nan - so comparison does not make sense
assert trades[0][3] == trades_new[0][3]
assert trades[0][4] == trades_new[0][4]
assert trades[0][5] == trades_new[0][5]
assert trades[0][6] == trades_new[0][6]
assert trades[-1][0] == trades_new[-1][0]
assert trades[-1][1] == trades_new[-1][1]
# assert trades[-1][2] == trades_new[-1][2] # This is nan - so comparison does not make sense
assert trades[-1][3] == trades_new[-1][3]
assert trades[-1][4] == trades_new[-1][4]
assert trades[-1][5] == trades_new[-1][5]
assert trades[-1][6] == trades_new[-1][6]
def test_featherdatahandler_trades_purge(mocker, testdatadir):
@pytest.mark.parametrize('datahandler', ['jsongz', 'hdf5', 'feather', 'parquet'])
def test_datahandler_trades_purge(mocker, testdatadir, datahandler):
mocker.patch.object(Path, "exists", MagicMock(return_value=False))
unlinkmock = mocker.patch.object(Path, "unlink", MagicMock())
dh = get_datahandler(testdatadir, 'feather')
dh = get_datahandler(testdatadir, datahandler)
assert not dh.trades_purge('UNITTEST/NONEXIST')
assert unlinkmock.call_count == 0

View File

@ -129,9 +129,14 @@ def test_get_pair_dataframe(mocker, default_conf, ohlcv_history, candle_type):
default_conf["runmode"] = RunMode.BACKTEST
dp = DataProvider(default_conf, exchange)
assert dp.runmode == RunMode.BACKTEST
assert isinstance(dp.get_pair_dataframe(
"UNITTEST/BTC", timeframe, candle_type=candle_type), DataFrame)
# assert dp.get_pair_dataframe("NONESENSE/AAA", timeframe).empty
df = dp.get_pair_dataframe("UNITTEST/BTC", timeframe, candle_type=candle_type)
assert isinstance(df, DataFrame)
assert len(df) == 3 # ohlcv_history mock has just 3 rows
dp._set_dataframe_max_date(ohlcv_history.iloc[-1]['date'])
df = dp.get_pair_dataframe("UNITTEST/BTC", timeframe, candle_type=candle_type)
assert isinstance(df, DataFrame)
assert len(df) == 2 # ohlcv_history is limited to 2 rows now
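
The new assertions pin down the slicing behavior: once a max date is set, the provider hides the candle still forming at that date. Conceptually, assuming a strict less-than cutoff as the 3 -> 2 row counts imply (standalone pandas sketch):

import pandas as pd

df = pd.DataFrame({'date': pd.date_range('2023-01-01', periods=3, freq='5min', tz='UTC'),
                   'close': [1.0, 2.0, 3.0]})
max_date = df.iloc[-1]['date']
visible = df.loc[df['date'] < max_date]  # the candle at max_date is still open
assert len(visible) == 2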
def test_available_pairs(mocker, default_conf, ohlcv_history):
@ -259,7 +264,7 @@ def test_orderbook(mocker, default_conf, order_book_l2):
assert order_book_l2.call_args_list[0][0][0] == 'ETH/BTC'
assert order_book_l2.call_args_list[0][0][1] >= 5
assert type(res) is dict
assert isinstance(res, dict)
assert 'bids' in res
assert 'asks' in res
@ -272,7 +277,7 @@ def test_market(mocker, default_conf, markets):
dp = DataProvider(default_conf, exchange)
res = dp.market('ETH/BTC')
assert type(res) is dict
assert isinstance(res, dict)
assert 'symbol' in res
assert res['symbol'] == 'ETH/BTC'
@ -286,7 +291,7 @@ def test_ticker(mocker, default_conf, tickers):
exchange = get_patched_exchange(mocker, default_conf)
dp = DataProvider(default_conf, exchange)
res = dp.ticker('ETH/BTC')
assert type(res) is dict
assert isinstance(res, dict)
assert 'symbol' in res
assert res['symbol'] == 'ETH/BTC'
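
The recurring type(x) is y -> isinstance(x, y) swaps in this commit follow the usual lint guidance (pycodestyle's E721): identity checks reject subclasses, isinstance accepts them. Tiny illustration with a hypothetical dict subclass:

class TickerDict(dict):  # hypothetical subclass standing in for an exchange response
    pass

res = TickerDict(symbol='ETH/BTC')
assert isinstance(res, dict)  # passes for subclasses too
assert type(res) is not dict  # an identity check would have failed here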

View File

@ -3,6 +3,7 @@
import json
import logging
import uuid
from datetime import timedelta
from pathlib import Path
from shutil import copyfile
from unittest.mock import MagicMock, PropertyMock
@ -26,7 +27,7 @@ from freqtrade.enums import CandleType
from freqtrade.exchange import timeframe_to_minutes
from freqtrade.misc import file_dump_json
from freqtrade.resolvers import StrategyResolver
from freqtrade.util import dt_utc
from freqtrade.util import dt_ts, dt_utc
from tests.conftest import (CURRENT_TEST_STRATEGY, EXMS, get_patched_exchange, log_has, log_has_re,
patch_exchange)
@ -569,7 +570,10 @@ def test_refresh_backtest_trades_data(mocker, default_conf, markets, caplog, tes
def test_download_trades_history(trades_history, mocker, default_conf, testdatadir, caplog,
tmpdir) -> None:
tmpdir, time_machine) -> None:
start_dt = dt_utc(2023, 1, 1)
time_machine.move_to(start_dt, tick=False)
tmpdir1 = Path(tmpdir)
ght_mock = MagicMock(side_effect=lambda pair, *args, **kwargs: (pair, trades_history))
mocker.patch(f'{EXMS}.get_historic_trades', ght_mock)
@ -581,8 +585,13 @@ def test_download_trades_history(trades_history, mocker, default_conf, testdatad
assert _download_trades_history(data_handler=data_handler, exchange=exchange,
pair='ETH/BTC')
assert log_has("New Amount of trades: 5", caplog)
assert log_has("Current Amount of trades: 0", caplog)
assert log_has("New Amount of trades: 6", caplog)
assert ght_mock.call_count == 1
# Default "since" - 30 days before current day.
assert ght_mock.call_args_list[0][1]['since'] == dt_ts(start_dt - timedelta(days=30))
assert file1.is_file()
caplog.clear()
ght_mock.reset_mock()
since_time = int(trades_history[-3][0] // 1000)
@ -599,6 +608,7 @@ def test_download_trades_history(trades_history, mocker, default_conf, testdatad
file1.unlink()
mocker.patch(f'{EXMS}.get_historic_trades', MagicMock(side_effect=ValueError))
caplog.clear()
assert not _download_trades_history(data_handler=data_handler, exchange=exchange,
pair='ETH/BTC')
@ -620,7 +630,7 @@ def test_download_trades_history(trades_history, mocker, default_conf, testdatad
assert int(ght_mock.call_args_list[0][1]['since'] // 1000) == since_time
assert ght_mock.call_args_list[0][1]['from_id'] is None
assert log_has_re(r'Start earlier than available data. Redownloading trades for.*', caplog)
assert log_has_re(r'Start .* earlier than available data. Redownloading trades for.*', caplog)
_clean_test_file(file2)
@ -651,10 +661,10 @@ def test_convert_trades_to_ohlcv(testdatadir, tmpdir, caplog):
assert_frame_equal(dfbak_1m, df_1m, check_exact=True)
assert_frame_equal(dfbak_5m, df_5m, check_exact=True)
assert not log_has('Could not convert NoDatapair to OHLCV.', caplog)
msg = 'Could not convert NoDatapair to OHLCV.'
assert not log_has(msg, caplog)
convert_trades_to_ohlcv(['NoDatapair'], timeframes=['1m', '5m'],
data_format_trades='jsongz',
datadir=tmpdir1, timerange=tr, erase=True)
assert log_has('Could not convert NoDatapair to OHLCV.', caplog)
assert log_has(msg, caplog)
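
The download test now freezes the clock so the default since (30 days back) is deterministic. The time_machine pytest fixture used above comes from the time-machine package; the same freeze outside pytest looks like this standalone sketch:

import datetime as dt
import time_machine

with time_machine.travel(dt.datetime(2023, 1, 1, tzinfo=dt.timezone.utc), tick=False):
    now = dt.datetime.now(dt.timezone.utc)
    since = now - dt.timedelta(days=30)  # the default lookback asserted above
    assert since == dt.datetime(2022, 12, 2, tzinfo=dt.timezone.utc)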

View File

@ -2470,7 +2470,7 @@ def test_refresh_latest_ohlcv_inv_result(default_conf, mocker, caplog):
assert exchange._klines
assert exchange._api_async.fetch_ohlcv.call_count == 2
assert type(res) is dict
assert isinstance(res, dict)
assert len(res) == 1
# Test that each is in list at least once as order is not guaranteed
assert log_has("Error loading ETH/BTC. Result was [[]].", caplog)
@ -2854,7 +2854,7 @@ async def test__async_fetch_trades(default_conf, mocker, caplog, exchange_name,
pair = 'ETH/BTC'
res = await exchange._async_fetch_trades(pair, since=None, params=None)
assert type(res) is list
assert isinstance(res, list)
assert isinstance(res[0], list)
assert isinstance(res[1], list)
@ -2954,9 +2954,9 @@ async def test__async_get_trade_history_id(default_conf, mocker, exchange_name,
ret = await exchange._async_get_trade_history_id(pair,
since=fetch_trades_result[0]['timestamp'],
until=fetch_trades_result[-1]['timestamp'] - 1)
assert type(ret) is tuple
assert isinstance(ret, tuple)
assert ret[0] == pair
assert type(ret[1]) is list
assert isinstance(ret[1], list)
assert len(ret[1]) == len(fetch_trades_result)
assert exchange._api_async.fetch_trades.call_count == 3
fetch_trades_cal = exchange._api_async.fetch_trades.call_args_list
@ -2992,9 +2992,9 @@ async def test__async_get_trade_history_time(default_conf, mocker, caplog, excha
pair,
since=fetch_trades_result[0]['timestamp'],
until=fetch_trades_result[-1]['timestamp'] - 1)
assert type(ret) is tuple
assert isinstance(ret, tuple)
assert ret[0] == pair
assert type(ret[1]) is list
assert isinstance(ret[1], list)
assert len(ret[1]) == len(fetch_trades_result)
assert exchange._api_async.fetch_trades.call_count == 2
fetch_trades_cal = exchange._api_async.fetch_trades.call_args_list
@ -3028,9 +3028,9 @@ async def test__async_get_trade_history_time_empty(default_conf, mocker, caplog,
pair = 'ETH/BTC'
ret = await exchange._async_get_trade_history_time(pair, since=trades_history[0][0],
until=trades_history[-1][0] - 1)
assert type(ret) is tuple
assert isinstance(ret, tuple)
assert ret[0] == pair
assert type(ret[1]) is list
assert isinstance(ret[1], list)
assert len(ret[1]) == len(trades_history) - 1
assert exchange._async_fetch_trades.call_count == 2
fetch_trades_cal = exchange._async_fetch_trades.call_args_list

View File

@ -97,9 +97,9 @@ def mock_pytorch_mlp_model_training_parameters() -> Dict[str, Any]:
return {
"learning_rate": 3e-4,
"trainer_kwargs": {
"max_iters": 1,
"n_steps": None,
"batch_size": 64,
"max_n_eval_batches": 1,
"n_epochs": 1,
},
"model_kwargs": {
"hidden_dim": 32,

View File

@ -20,7 +20,7 @@ from freqtrade.data.dataprovider import DataProvider
from freqtrade.data.history import get_timerange
from freqtrade.enums import CandleType, ExitType, RunMode
from freqtrade.exceptions import DependencyException, OperationalException
from freqtrade.exchange.exchange import timeframe_to_next_date
from freqtrade.exchange import timeframe_to_next_date, timeframe_to_prev_date
from freqtrade.optimize.backtest_caching import get_backtest_metadata_filename, get_strategy_run_id
from freqtrade.optimize.backtesting import Backtesting
from freqtrade.persistence import LocalTrade, Trade
@ -1122,10 +1122,10 @@ def test_backtest_dataprovider_analyzed_df(default_conf, fee, mocker, testdatadi
processed = backtesting.strategy.advise_all_indicators(data)
min_date, max_date = get_timerange(processed)
global count
count = 0
def tmp_confirm_entry(pair, current_time, **kwargs):
nonlocal count
dp = backtesting.strategy.dp
df, _ = dp.get_analyzed_dataframe(pair, backtesting.strategy.timeframe)
current_candle = df.iloc[-1].squeeze()
@ -1135,8 +1135,13 @@ def test_backtest_dataprovider_analyzed_df(default_conf, fee, mocker, testdatadi
assert candle_date == current_time
# These asserts don't properly raise as they are nested,
# therefore we increment count and assert for that.
global count
count = count + 1
df = dp.get_pair_dataframe(pair, backtesting.strategy.timeframe)
prior_time = timeframe_to_prev_date(backtesting.strategy.timeframe,
candle_date - timedelta(seconds=1))
assert prior_time == df.iloc[-1].squeeze()['date']
assert df.iloc[-1].squeeze()['date'] < current_time
count += 1
backtesting.strategy.confirm_trade_entry = tmp_confirm_entry
backtesting.backtest(
@ -1354,11 +1359,11 @@ def test_backtest_multi_pair(default_conf, fee, mocker, tres, pair, testdatadir)
# Cached data correctly removed amounts
offset = 1 if tres == 0 else 0
removed_candles = len(data[pair]) - offset - backtesting.strategy.startup_candle_count
removed_candles = len(data[pair]) - offset
assert len(backtesting.dataprovider.get_analyzed_dataframe(pair, '5m')[0]) == removed_candles
assert len(
backtesting.dataprovider.get_analyzed_dataframe('NXT/BTC', '5m')[0]
) == len(data['NXT/BTC']) - 1 - backtesting.strategy.startup_candle_count
) == len(data['NXT/BTC']) - 1
backtesting.strategy.max_open_trades = 1
backtesting.config.update({'max_open_trades': 1})
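
The strengthened assertions lean on timeframe_to_prev_date, which rounds a timestamp down to the open of its candle; subtracting one second first therefore lands on the previous closed candle. Standalone sketch, assuming freqtrade is installed:

from datetime import datetime, timedelta, timezone
from freqtrade.exchange import timeframe_to_prev_date

candle_date = datetime(2023, 1, 1, 12, 5, tzinfo=timezone.utc)
prior = timeframe_to_prev_date('5m', candle_date - timedelta(seconds=1))
assert prior == datetime(2023, 1, 1, 12, 0, tzinfo=timezone.utc)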

View File

@ -1989,9 +1989,9 @@ def test_select_order(fee, is_short):
# Open buy order, no sell order
order = trades[0].select_order(trades[0].entry_side, True)
assert order is None
order = trades[0].select_order(trades[0].entry_side, False)
assert order is not None
order = trades[0].select_order(trades[0].entry_side, False)
assert order is None
order = trades[0].select_order(trades[0].exit_side, None)
assert order is None
@ -2462,7 +2462,16 @@ def test_select_filled_orders(fee):
# Closed buy order, no sell order
orders = trades[0].select_filled_orders('buy')
assert isinstance(orders, list)
assert len(orders) == 0
orders = trades[0].select_filled_orders('sell')
assert orders is not None
assert len(orders) == 0
# closed buy order, and closed sell order
orders = trades[1].select_filled_orders('buy')
assert isinstance(orders, list)
assert len(orders) == 1
order = orders[0]
assert order.amount > 0
@ -2470,33 +2479,25 @@ def test_select_filled_orders(fee):
assert order.side == 'buy'
assert order.ft_order_side == 'buy'
assert order.status == 'closed'
orders = trades[0].select_filled_orders('sell')
assert orders is not None
assert len(orders) == 0
# closed buy order, and closed sell order
orders = trades[1].select_filled_orders('buy')
assert orders is not None
assert len(orders) == 1
orders = trades[1].select_filled_orders('sell')
assert orders is not None
assert isinstance(orders, list)
assert len(orders) == 1
# Has open buy order
orders = trades[3].select_filled_orders('buy')
assert orders is not None
assert isinstance(orders, list)
assert len(orders) == 0
orders = trades[3].select_filled_orders('sell')
assert orders is not None
assert isinstance(orders, list)
assert len(orders) == 0
# Open sell order
orders = trades[4].select_filled_orders('buy')
assert orders is not None
assert isinstance(orders, list)
assert len(orders) == 1
orders = trades[4].select_filled_orders('sell')
assert orders is not None
assert isinstance(orders, list)
assert len(orders) == 0

View File

@ -553,7 +553,7 @@ def test_VolumePairList_whitelist_gen(mocker, whitelist_conf, shitcoinmarkets, t
assert isinstance(whitelist, list)
# Verify length of pairlist matches (used for ShuffleFilter without seed)
if type(whitelist_result) is list:
if isinstance(whitelist_result, list):
assert whitelist == whitelist_result
else:
assert len(whitelist) == whitelist_result

View File

@ -363,9 +363,8 @@ def test_rpc_delete_trade(mocker, default_conf, fee, markets, caplog, is_short):
res = rpc._rpc_delete('2')
assert isinstance(res, dict)
assert cancel_mock.call_count == 1
assert stoploss_mock.call_count == 1
assert res['cancel_order_count'] == 2
assert res['cancel_order_count'] == 1
stoploss_mock = mocker.patch(f'{EXMS}.cancel_stoploss_order', side_effect=InvalidOrderException)

View File

@ -706,7 +706,7 @@ def test_api_delete_trade(botclient, mocker, fee, markets, is_short):
assert len(trades) - 1 == len(Trade.session.scalars(select(Trade)).all())
rc = client_delete(client, f"{BASE_URI}/trades/2")
assert_response(rc)
assert rc.json()['result_msg'] == 'Deleted trade 2. Closed 2 open orders.'
assert rc.json()['result_msg'] == 'Deleted trade 2. Closed 1 open orders.'
assert len(trades) - 2 == len(Trade.session.scalars(select(Trade)).all())
assert stoploss_mock.call_count == 1
@ -841,7 +841,7 @@ def test_api_edge_disabled(botclient, mocker, ticker, fee, markets):
'profit_closed_percent_sum': -1.5, 'profit_closed_ratio': -6.739057628404269e-06,
'profit_closed_percent': -0.0, 'winning_trades': 0, 'losing_trades': 2,
'profit_factor': 0.0, 'winrate': 0.0, 'expectancy': -0.0033695635,
'expectancy_ratio': -1.0, 'trading_volume': 91.074,
'expectancy_ratio': -1.0, 'trading_volume': 75.945,
}
),
(
@ -857,7 +857,7 @@ def test_api_edge_disabled(botclient, mocker, ticker, fee, markets):
'profit_closed_percent_sum': 1.5, 'profit_closed_ratio': 7.391275897987988e-07,
'profit_closed_percent': 0.0, 'winning_trades': 2, 'losing_trades': 0,
'profit_factor': None, 'winrate': 1.0, 'expectancy': 0.0003695635,
'expectancy_ratio': 100, 'trading_volume': 91.074,
'expectancy_ratio': 100, 'trading_volume': 75.945,
}
),
(
@ -874,7 +874,7 @@ def test_api_edge_disabled(botclient, mocker, ticker, fee, markets):
'profit_closed_percent': -0.0, 'winning_trades': 1, 'losing_trades': 1,
'profit_factor': 0.02775724835771106, 'winrate': 0.5,
'expectancy': -0.0027145635000000003, 'expectancy_ratio': -0.48612137582114445,
'trading_volume': 91.074,
'trading_volume': 75.945,
}
)
])
@ -1125,7 +1125,7 @@ def test_api_status(botclient, mocker, ticker, fee, markets, is_short,
assert_response(rc)
resp_values = rc.json()
assert len(resp_values) == 4
assert resp_values[0]['profit_abs'] is None
assert resp_values[0]['profit_abs'] == 0.0
def test_api_version(botclient):
@ -1429,12 +1429,12 @@ def test_api_pair_candles(botclient, ohlcv_history):
assert len(rc.json()['data']) == amount
assert (rc.json()['data'] ==
[['2017-11-26 08:50:00', 8.794e-05, 8.948e-05, 8.794e-05, 8.88e-05, 0.0877869,
[['2017-11-26T08:50:00Z', 8.794e-05, 8.948e-05, 8.794e-05, 8.88e-05, 0.0877869,
None, 0, 0, 0, 0, 1511686200000, None, None, None, None],
['2017-11-26 08:55:00', 8.88e-05, 8.942e-05, 8.88e-05,
['2017-11-26T08:55:00Z', 8.88e-05, 8.942e-05, 8.88e-05,
8.893e-05, 0.05874751, 8.886500000000001e-05, 1, 0, 0, 0, 1511686500000, 8.893e-05,
None, None, None],
['2017-11-26 09:00:00', 8.891e-05, 8.893e-05, 8.875e-05, 8.877e-05,
['2017-11-26T09:00:00Z', 8.891e-05, 8.893e-05, 8.875e-05, 8.877e-05,
0.7039405, 8.885e-05, 0, 0, 0, 0, 1511686800000, None, None, None, None]
])
@ -1448,13 +1448,13 @@ def test_api_pair_candles(botclient, ohlcv_history):
f"{BASE_URI}/pair_candles?limit={amount}&pair=XRP%2FBTC&timeframe={timeframe}")
assert_response(rc)
assert (rc.json()['data'] ==
[['2017-11-26 08:50:00', 8.794e-05, 8.948e-05, 8.794e-05, 8.88e-05, 0.0877869,
[['2017-11-26T08:50:00Z', 8.794e-05, 8.948e-05, 8.794e-05, 8.88e-05, 0.0877869,
None, 0, None, 0, 0, None, 1511686200000, None, None, None, None],
['2017-11-26 08:55:00', 8.88e-05, 8.942e-05, 8.88e-05,
8.893e-05, 0.05874751, 8.886500000000001e-05, 1, 0.0, 0, 0, '2017-11-26 08:55:00',
['2017-11-26T08:55:00Z', 8.88e-05, 8.942e-05, 8.88e-05,
8.893e-05, 0.05874751, 8.886500000000001e-05, 1, 0.0, 0, 0, '2017-11-26T08:55:00Z',
1511686500000, 8.893e-05, None, None, None],
['2017-11-26 09:00:00', 8.891e-05, 8.893e-05, 8.875e-05, 8.877e-05,
0.7039405, 8.885e-05, 0, 0.0, 0, 0, '2017-11-26 09:00:00', 1511686800000,
['2017-11-26T09:00:00Z', 8.891e-05, 8.893e-05, 8.875e-05, 8.877e-05,
0.7039405, 8.885e-05, 0, 0.0, 0, 0, '2017-11-26T09:00:00Z', 1511686800000,
None, None, None, None]
])
@ -1511,7 +1511,7 @@ def test_api_pair_history(botclient, mocker):
date_col_idx = [idx for idx, c in enumerate(result['columns']) if c == 'date'][0]
rsi_col_idx = [idx for idx, c in enumerate(result['columns']) if c == 'rsi'][0]
assert data[0][date_col_idx] == '2018-01-11 00:00:00'
assert data[0][date_col_idx] == '2018-01-11T00:00:00Z'
assert data[0][rsi_col_idx] is not None
assert data[0][rsi_col_idx] > 0
assert lfm.call_count == 1
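
The expected date strings switch from '2017-11-26 08:50:00' to '2017-11-26T08:50:00Z', matching pydantic v2's default ISO-8601 rendering of UTC datetimes. A standalone sketch of that default (the actual response path may format dates elsewhere):

from datetime import datetime, timezone
from pydantic import BaseModel

class CandleSketch(BaseModel):
    date: datetime

c = CandleSketch(date=datetime(2017, 11, 26, 8, 50, tzinfo=timezone.utc))
print(c.model_dump_json())  # {"date":"2017-11-26T08:50:00Z"}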

View File

@ -316,7 +316,7 @@ async def test_telegram_status_multi_entry(default_conf, update, mocker, fee) ->
create_mock_trades(fee)
trades = Trade.get_open_trades()
trade = trades[0]
trade = trades[3]
# Average may be empty on some exchanges
trade.orders[0].average = 0
trade.orders.append(Order(
@ -344,9 +344,9 @@ async def test_telegram_status_multi_entry(default_conf, update, mocker, fee) ->
await telegram._status(update=update, context=MagicMock())
assert msg_mock.call_count == 4
msg = msg_mock.call_args_list[0][0][0]
msg = msg_mock.call_args_list[3][0][0]
assert re.search(r'Number of Entries.*2', msg)
assert re.search(r'Number of Exits.*0', msg)
assert re.search(r'Number of Exits.*1', msg)
assert re.search(r'Average Entry Price', msg)
assert re.search(r'Order filled', msg)
assert re.search(r'Close Date:', msg) is None

View File

@ -133,7 +133,7 @@ def test_parse_args_backtesting_custom() -> None:
assert call_args['command'] == 'backtesting'
assert call_args['func'] is not None
assert call_args['timeframe'] == '1m'
assert type(call_args['strategy_list']) is list
assert isinstance(call_args['strategy_list'], list)
assert len(call_args['strategy_list']) == 2

View File

@ -28,9 +28,9 @@ from tests.conftest import (EXMS, create_mock_trades, create_mock_trades_usdt,
                            get_patched_freqtradebot, get_patched_worker, log_has, log_has_re,
                            patch_edge, patch_exchange, patch_get_signal, patch_wallet,
                            patch_whitelist)
from tests.conftest_trades import (MOCK_TRADE_COUNT, entry_side, exit_side, mock_order_1,
                                   mock_order_2, mock_order_2_sell, mock_order_3, mock_order_3_sell,
                                   mock_order_4, mock_order_5_stoploss, mock_order_6_sell)
from tests.conftest_trades import (MOCK_TRADE_COUNT, entry_side, exit_side, mock_order_2,
                                   mock_order_2_sell, mock_order_3, mock_order_3_sell, mock_order_4,
                                   mock_order_5_stoploss, mock_order_6_sell)
from tests.conftest_trades_usdt import mock_trade_usdt_4
@ -5329,8 +5329,8 @@ def test_sync_wallet_dry_run(mocker, default_conf_usdt, ticker_usdt, fee, limit_
@pytest.mark.usefixtures("init_persistence")
@pytest.mark.parametrize("is_short,buy_calls,sell_calls", [
    (False, 1, 2),
    (True, 1, 2),
    (False, 1, 1),
    (True, 1, 1),
])
def test_cancel_all_open_orders(mocker, default_conf_usdt, fee, limit_order, limit_order_open,
                                is_short, buy_calls, sell_calls):
@ -5387,7 +5387,7 @@ def test_startup_update_open_orders(mocker, default_conf_usdt, fee, caplog, is_s
    freqtrade.config['dry_run'] = False
    freqtrade.startup_update_open_orders()
    assert len(Order.get_open_orders()) == 3
    assert len(Order.get_open_orders()) == 4
    matching_buy_order = mock_order_4(is_short=is_short)
    matching_buy_order.update({
        'status': 'closed',
@ -5395,7 +5395,7 @@ def test_startup_update_open_orders(mocker, default_conf_usdt, fee, caplog, is_s
    mocker.patch(f'{EXMS}.fetch_order', return_value=matching_buy_order)
    freqtrade.startup_update_open_orders()
    # Only stoploss and sell orders are kept open
    assert len(Order.get_open_orders()) == 2
    assert len(Order.get_open_orders()) == 3
    caplog.clear()
    mocker.patch(f'{EXMS}.fetch_order', side_effect=ExchangeError)
@ -5407,7 +5407,7 @@ def test_startup_update_open_orders(mocker, default_conf_usdt, fee, caplog, is_s
    # Orders which are no longer found after X days should be assumed as canceled.
    freqtrade.startup_update_open_orders()
    assert log_has_re(r"Order is older than \d days.*", caplog)
    assert hto_mock.call_count == 2
    assert hto_mock.call_count == 3
    assert hto_mock.call_args_list[0][0][0]['status'] == 'canceled'
    assert hto_mock.call_args_list[1][0][0]['status'] == 'canceled'
@ -5451,7 +5451,6 @@ def test_update_trades_without_assigned_fees(mocker, default_conf_usdt, fee, is_
        side_effect=[
            patch_with_fee(mock_order_2_sell(is_short=is_short)),
            patch_with_fee(mock_order_3_sell(is_short=is_short)),
            patch_with_fee(mock_order_1(is_short=is_short)),
            patch_with_fee(mock_order_2(is_short=is_short)),
            patch_with_fee(mock_order_3(is_short=is_short)),
            patch_with_fee(mock_order_4(is_short=is_short)),
@ -5561,14 +5560,15 @@ def test_handle_insufficient_funds(mocker, default_conf_usdt, fee, is_short, cap
    caplog.clear()
    # No open order
    trade = trades[0]
    trade = trades[1]
    reset_open_orders(trade)
    assert trade.open_order_id is None
    assert trade.stoploss_order_id is None
    freqtrade.handle_insufficient_funds(trade)
    order = mock_order_1(is_short=is_short)
    assert log_has_re(r"Order Order(.*order_id=" + order['id'] + ".*) is no longer open.", caplog)
    order = trade.orders[0]
    assert log_has_re(r"Order Order(.*order_id=" + order.order_id + ".*) is no longer open.",
                      caplog)
    assert mock_fo.call_count == 0
    assert mock_uts.call_count == 0
    # No change to orderid - as update_trade_state is mocked
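
The log assertions above go through the log_has_re helper imported from tests/conftest.py, which scans the records captured by pytest's caplog fixture for a regex match. A minimal sketch of what such a helper typically looks like (an assumed shape, not the verbatim conftest code):

import re

def log_has_re(line: str, logs) -> bool:
    # True if any captured log message matches the given regular expression.
    return any(re.match(line, message) for message in logs.messages)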

View File

@ -9,8 +9,7 @@ import pytest
from freqtrade.misc import (dataframe_to_json, decimals_per_coin, deep_merge_dicts, file_dump_json,
                            file_load_json, is_file_in_dir, json_to_dataframe, pair_to_filename,
                            parse_db_uri_for_logging, plural, render_template,
                            render_template_with_fallback, round_coin_value, safe_value_fallback,
                            parse_db_uri_for_logging, plural, round_coin_value, safe_value_fallback,
                            safe_value_fallback2)
@ -177,20 +176,6 @@ def test_plural() -> None:
    assert plural(-1.5, "ox", "oxen") == "oxen"


def test_render_template_fallback(mocker):
    from jinja2.exceptions import TemplateNotFound
    with pytest.raises(TemplateNotFound):
        val = render_template(
            templatefile='subtemplates/indicators_does-not-exist.j2',)

    val = render_template_with_fallback(
        templatefile='strategy_subtemplates/indicators_does-not-exist.j2',
        templatefallbackfile='strategy_subtemplates/indicators_minimal.j2',
    )
    assert isinstance(val, str)
    assert 'if self.dp' in val


@pytest.mark.parametrize('conn_url,expected', [
    ("postgresql+psycopg2://scott123:scott123@host:1245/dbname",
     "postgresql+psycopg2://scott123:*****@host:1245/dbname"),

BIN
tests/testdata/XRP_ETH-trades.parquet vendored Normal file

Binary file not shown.

View File

@ -0,0 +1,17 @@
import pytest

from freqtrade.util import render_template, render_template_with_fallback


def test_render_template_fallback():
    from jinja2.exceptions import TemplateNotFound
    with pytest.raises(TemplateNotFound):
        val = render_template(
            templatefile='subtemplates/indicators_does-not-exist.j2',)

    val = render_template_with_fallback(
        templatefile='strategy_subtemplates/indicators_does-not-exist.j2',
        templatefallbackfile='strategy_subtemplates/indicators_minimal.j2',
    )
    assert isinstance(val, str)
    assert 'if self.dp' in val
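
The relocated test exercises a render-with-fallback pattern: attempt the requested template, and render a known-good one when it is missing. A generic jinja2 sketch of the pattern (function name, loader path, and parameters are illustrative, not the freqtrade.util code):

from jinja2 import Environment, PackageLoader, TemplateNotFound

def render_with_fallback(env: Environment, templatefile: str, templatefallbackfile: str) -> str:
    # Try the requested template first; render the fallback if it does not exist.
    try:
        return env.get_template(templatefile).render()
    except TemplateNotFound:
        return env.get_template(templatefallbackfile).render()

env = Environment(loader=PackageLoader('freqtrade', 'templates'))  # illustrative loader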