Merge remote-tracking branch 'upstream/develop' into feature/10348

jainanuj94 2024-07-28 22:26:58 +05:30
commit aa327643f5
81 changed files with 7613 additions and 2010 deletions


@ -80,6 +80,11 @@ jobs:
# Allow failure for coveralls # Allow failure for coveralls
coveralls || true coveralls || true
- name: Run json schema extract
  # This should be kept before the repository check to ensure that the schema is up-to-date
  run: |
    python build_helpers/extract_config_json_schema.py
- name: Check for repository changes - name: Check for repository changes
run: | run: |
if [ -n "$(git status --porcelain)" ]; then if [ -n "$(git status --porcelain)" ]; then


@ -9,14 +9,14 @@ repos:
# stages: [push] # stages: [push]
- repo: https://github.com/pre-commit/mirrors-mypy - repo: https://github.com/pre-commit/mirrors-mypy
rev: "v1.10.1" rev: "v1.11.0"
hooks: hooks:
- id: mypy - id: mypy
exclude: build_helpers exclude: build_helpers
additional_dependencies: additional_dependencies:
- types-cachetools==5.3.0.7 - types-cachetools==5.4.0.20240717
- types-filelock==3.2.7 - types-filelock==3.2.7
- types-requests==2.32.0.20240622 - types-requests==2.32.0.20240712
- types-tabulate==0.9.0.20240106 - types-tabulate==0.9.0.20240106
- types-python-dateutil==2.9.0.20240316 - types-python-dateutil==2.9.0.20240316
- SQLAlchemy==2.0.31 - SQLAlchemy==2.0.31
@ -31,7 +31,7 @@ repos:
- repo: https://github.com/charliermarsh/ruff-pre-commit - repo: https://github.com/charliermarsh/ruff-pre-commit
# Ruff version. # Ruff version.
rev: 'v0.5.0' rev: 'v0.5.4'
hooks: hooks:
- id: ruff - id: ruff


@ -0,0 +1,17 @@
"""Script to extract the configuration json schema from config_schema.py file."""
from pathlib import Path
import rapidjson
from freqtrade.configuration.config_schema import CONF_SCHEMA
def extract_config_json_schema():
schema_filename = Path(__file__).parent / "schema.json"
with schema_filename.open("w") as f:
rapidjson.dump(CONF_SCHEMA, f, indent=2)
if __name__ == "__main__":
extract_config_json_schema()
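For context, the emitted `schema.json` is plain JSON Schema, so it can be checked with any validator. A minimal, hypothetical sketch (not part of the repository) that validates a config file against the extracted schema, using the same `Draft4Validator` freqtrade itself relies on (see the validation diff further below):

``` python
from pathlib import Path

import rapidjson
from jsonschema import Draft4Validator

# Assumes build_helpers/extract_config_json_schema.py has been run already.
schema = rapidjson.loads(Path("build_helpers/schema.json").read_text())
config = rapidjson.loads(Path("config.json").read_text())

# Print the first schema violation, if any.
for error in Draft4Validator(schema).iter_errors(config):
    print(f"Config invalid: {error.message}")
    break
else:
    print("Config matches the extracted schema.")
```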

build_helpers/schema.json (new file, 1601 lines)

File diff suppressed because it is too large.


@ -1,5 +1,4 @@
--- ---
version: '3'
services: services:
freqtrade: freqtrade:
image: freqtradeorg/freqtrade:stable image: freqtradeorg/freqtrade:stable


@ -1,5 +1,4 @@
--- ---
version: '3'
services: services:
freqtrade: freqtrade:
image: freqtradeorg/freqtrade:stable_freqaitorch image: freqtradeorg/freqtrade:stable_freqaitorch


@ -1,5 +1,4 @@
--- ---
version: '3'
services: services:
ft_jupyterlab: ft_jupyterlab:
build: build:

docs/advanced-orderflow.md (new file, 152 lines)

@ -0,0 +1,152 @@
# Orderflow data
This guide walks you through utilizing public trade data for advanced orderflow analysis in Freqtrade.
!!! Warning "Experimental Feature"
The orderflow feature is currently in beta and may be subject to changes in future releases. Please report any issues or feedback on the [Freqtrade GitHub repository](https://github.com/freqtrade/freqtrade/issues).
!!! Warning "Performance"
Orderflow requires raw trades data. This data is rather large and can cause a slow initial startup when freqtrade needs to download the trades data for the last X candles. Additionally, enabling this feature will cause increased memory usage. Please ensure you have sufficient resources available.
## Getting Started
### Enable Public Trades
In your `config.json` file, set the `use_public_trades` option to true under the `exchange` section.
```json
"exchange": {
...
"use_public_trades": true,
}
```
### Configure Orderflow Processing
Define your desired settings for orderflow processing within the `orderflow` section of your `config.json`. Here, you can adjust factors like:
- `cache_size`: How many previous orderflow candles are kept in cache instead of being recalculated on every new candle.
- `max_candles`: How many candles to fetch trades data for.
- `scale`: This controls the price bin size for the footprint chart.
- `stacked_imbalance_range`: Defines the minimum consecutive imbalanced price levels required for consideration.
- `imbalance_volume`: Filters out imbalances with volume below this threshold.
- `imbalance_ratio`: Filters out imbalances with a ratio (difference between ask and bid volume) lower than this value.
```json
"orderflow": {
"cache_size": 1000,
"max_candles": 1500,
"scale": 0.5,
"stacked_imbalance_range": 3, // needs at least this amount of imbalance next to each other
"imbalance_volume": 1, // filters out below
"imbalance_ratio": 3 // filters out ratio lower than
},
```
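To make the `scale` setting concrete, here is a small illustrative sketch of price binning (how freqtrade rounds internally may differ; this only shows the idea of `scale`-wide bins):

``` python
import numpy as np
import pandas as pd

scale = 0.5
prices = pd.Series([100.1, 100.3, 100.6, 101.2])

# Snap each traded price down to its `scale`-wide bin.
bins = np.floor(prices / scale) * scale
print(bins.tolist())  # [100.0, 100.0, 100.5, 101.0]
```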
## Downloading Trade Data for Backtesting
To download historical trade data for backtesting, use the `--dl-trades` flag with the `freqtrade download-data` command.
```bash
freqtrade download-data -p BTC/USDT:USDT --timerange 20230101- --trading-mode futures --timeframes 5m --dl-trades
```
!!! Warning "Data availability"
Not all exchanges provide public trade data. For supported exchanges, freqtrade will warn you if public trade data is not available when you start downloading data with the `--dl-trades` flag.
## Accessing Orderflow Data
Once activated, several new columns become available in your dataframe:
``` python
dataframe["trades"] # Contains information about each individual trade.
dataframe["orderflow"] # Represents a footprint chart dict (see below)
dataframe["imbalances"] # Contains information about imbalances in the order flow.
dataframe["bid"] # Total bid volume
dataframe["ask"] # Total ask volume
dataframe["delta"] # Difference between ask and bid volume.
dataframe["min_delta"] # Minimum delta within the candle
dataframe["max_delta"] # Maximum delta within the candle
dataframe["total_trades"] # Total number of trades
dataframe["stacked_imbalances_bid"] # Price level of stacked bid imbalance
dataframe["stacked_imbalances_ask"] # Price level of stacked ask imbalance
```
You can access these columns in your strategy code for further analysis. Here's an example:
``` python
def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
    # Calculating cumulative delta
    dataframe["cum_delta"] = cumulative_delta(dataframe["delta"])
    # Accessing total trades
    total_trades = dataframe["total_trades"]
    ...

def cumulative_delta(delta: Series):
    cumdelta = delta.cumsum()
    return cumdelta
```
### Footprint chart (`dataframe["orderflow"]`)
This column provides a detailed breakdown of buy and sell orders at different price levels, offering valuable insights into order flow dynamics. The `scale` parameter in your configuration determines the price bin size for this representation.
The `orderflow` column contains a dict with the following structure:
``` output
{
    "price": {
        "bid_amount": 0.0,
        "ask_amount": 0.0,
        "bid": 0,
        "ask": 0,
        "delta": 0.0,
        "total_volume": 0.0,
        "total_trades": 0
    }
}
```
#### Orderflow column explanation
- key: Price bin - binned at `scale` intervals
- `bid_amount`: Total volume bought at each price level.
- `ask_amount`: Total volume sold at each price level.
- `bid`: Number of buy orders at each price level.
- `ask`: Number of sell orders at each price level.
- `delta`: Difference between ask and bid volume at each price level.
- `total_volume`: Total volume (ask amount + bid amount) at each price level.
- `total_trades`: Total number of trades (ask + bid) at each price level.
By leveraging these features, you can gain valuable insights into market sentiment and potential trading opportunities based on order flow analysis.
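For example, a minimal sketch (assuming the structure shown above) that finds the price bin with the highest traded volume in the most recent candle:

``` python
def busiest_price_level(dataframe) -> float:
    # Footprint of the latest candle: a dict keyed by price bin.
    footprint = dataframe["orderflow"].iloc[-1]
    # Price level where the most volume changed hands.
    return max(footprint, key=lambda price: footprint[price]["total_volume"])
```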
### Raw trades data (`dataframe["trades"]`)
A list of the individual trades that occurred during the candle. This data can be used for more granular analysis of order flow dynamics.
Each individual entry contains a dict with the following keys (a conversion sketch follows the list):
- `timestamp`: Timestamp of the trade.
- `date`: Date of the trade.
- `price`: Price of the trade.
- `amount`: Volume of the trade.
- `side`: Buy or sell.
- `id`: Unique identifier for the trade.
- `cost`: Total cost of the trade (price * amount).
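Since each entry is a plain dict with the keys above, a candle's trade list converts directly into a DataFrame for ad-hoc analysis. A minimal sketch (inside a strategy callback where `dataframe` is available):

``` python
import pandas as pd

# Raw trades of the most recent candle as a DataFrame.
last_trades = pd.DataFrame(dataframe["trades"].iloc[-1])

# e.g. the volume-weighted average price of those trades.
vwap = (last_trades["price"] * last_trades["amount"]).sum() / last_trades["amount"].sum()
```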
### Imbalances (`dataframe["imbalances"]`)
This column provides a dict with information about imbalances in the order flow. An imbalance occurs when there is a significant difference between the ask and bid volume at a given price level.
Each row looks as follows, with price as index and the corresponding bid and ask imbalance values as columns:
``` output
{
    "price": {
        "bid_imbalance": False,
        "ask_imbalance": False
    }
}
```
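A short sketch (given the structure above) that collects the price levels flagged with a bid-side imbalance in the latest candle:

``` python
# Imbalance dict of the most recent candle.
imbalances = dataframe["imbalances"].iloc[-1]

# Price levels where a bid imbalance was detected.
bid_levels = [price for price, row in imbalances.items() if row["bid_imbalance"]]
```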


@ -114,8 +114,46 @@ services:
--strategy SampleStrategy --strategy SampleStrategy
``` ```
You can use whatever naming convention you want, freqtrade1 and 2 are arbitrary. Note that you will need to use different database files, port mappings and telegram configurations for each instance, as mentioned above. You can use whatever naming convention you want, freqtrade1 and 2 are arbitrary. Note that you will need to use different database files, port mappings and telegram configurations for each instance, as mentioned above.
## Use a different database system
Freqtrade uses SQLAlchemy, which supports multiple different database systems. As such, a wide range of database systems should work.
Freqtrade does not depend on or install any additional database drivers. Please refer to the [SQLAlchemy docs](https://docs.sqlalchemy.org/en/14/core/engines.html#database-urls) for installation instructions for the respective database systems.
The following systems have been tested and are known to work with freqtrade:
* sqlite (default)
* PostgreSQL
* MariaDB
!!! Warning
By using one of the below database systems, you acknowledge that you know how to manage such a system. The freqtrade team will not provide any support with setup or maintenance (or backups) of the below database systems.
### PostgreSQL
Installation:
`pip install psycopg2-binary`
Usage:
`... --db-url postgresql+psycopg2://<username>:<password>@localhost:5432/<database>`
Freqtrade will automatically create the necessary tables upon startup.
If you're running different instances of Freqtrade, you must either set up one database per instance or use different users / schemas for your connections.
### MariaDB / MySQL
Freqtrade supports MariaDB and MySQL through SQLAlchemy as well.
Installation:
`pip install pymysql`
Usage:
`... --db-url mysql+pymysql://<username>:<password>@localhost:3306/<database>`
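Either connection URL can be sanity-checked outside the bot with a few lines of SQLAlchemy. A hedged sketch (fill in your real credentials; this is not a freqtrade command):

``` python
from sqlalchemy import create_engine, text

# Use the same URL you would pass to --db-url.
engine = create_engine("mysql+pymysql://<username>:<password>@localhost:3306/<database>")

with engine.connect() as conn:
    # A trivial round-trip confirms that driver and credentials work.
    print(conn.execute(text("SELECT 1")).scalar())
```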
## Configure the bot running as a systemd service ## Configure the bot running as a systemd service


@ -83,6 +83,10 @@ To change your **features**, you **must** set a new `identifier` in the config t
To save the models generated during a particular backtest so that you can start a live deployment from one of them instead of training a new model, you must set `save_backtest_models` to `True` in the config. To save the models generated during a particular backtest so that you can start a live deployment from one of them instead of training a new model, you must set `save_backtest_models` to `True` in the config.
!!! Note
To ensure that the model can be reused, freqAI will call your strategy with a dataframe of length 1.
If your strategy requires more data than this to generate the same features, you can't reuse backtest predictions for live deployment and need to update your `identifier` for each new backtest.
### Backtest live collected predictions ### Backtest live collected predictions
FreqAI allows you to reuse live historic predictions through the backtest parameter `--freqai-backtest-live-models`. This can be useful when you want to reuse predictions generated in dry-run for comparison or other study. FreqAI allows you to reuse live historic predictions through the backtest parameter `--freqai-backtest-live-models`. This can be useful when you want to reuse predictions generated in dry-run for comparison or other study.


@ -1,6 +1,6 @@
markdown==3.6 markdown==3.6
mkdocs==1.6.0 mkdocs==1.6.0
mkdocs-material==9.5.28 mkdocs-material==9.5.29
mdx_truly_sane_lists==1.3 mdx_truly_sane_lists==1.3
pymdown-extensions==10.8.1 pymdown-extensions==10.8.1
jinja2==3.1.4 jinja2==3.1.4


@ -2,7 +2,7 @@
## FreqUI ## FreqUI
FreqUI now has its own dedicated [documentation section](frequi.md) - please refer to that section for all information regarding the FreqUI. FreqUI now has its own dedicated [documentation section](freq-ui.md) - please refer to that section for all information regarding the FreqUI.
## Configuration ## Configuration


@ -1,6 +1,13 @@
# SQL Helper # SQL Helper
This page contains some help if you want to edit your sqlite db. This page contains some help if you want to query your sqlite db.
!!! Tip "Other Database systems"
To use other Database Systems like PostgreSQL or MariaDB, you can use the same queries, but you need to use the respective client for the database system. [Click here](advanced-setup.md#use-a-different-database-system) to learn how to setup a different database system with freqtrade.
!!! Warning
If you are not familiar with SQL, you should be very careful when running queries on your database.
Always make sure to have a backup of your database before running any queries.
## Install sqlite3 ## Install sqlite3
@ -43,13 +50,25 @@ sqlite3
.schema <table_name> .schema <table_name>
``` ```
## Get all trades in the table ### Get all trades in the table
```sql ```sql
SELECT * FROM trades; SELECT * FROM trades;
``` ```
## Fix trade still open after a manual exit on the exchange ## Destructive queries
Queries that write to the database.
These queries should usually not be necessary as freqtrade tries to handle all database operations itself - or exposes them via API or telegram commands.
!!! Warning
Please make sure you have a backup of your database before running any of the below queries.
!!! Danger
You should also **never** run any writing query (`update`, `insert`, `delete`) while a bot is connected to the database.
This can and will lead to data corruption, most likely without the possibility of recovery.
### Fix trade still open after a manual exit on the exchange
!!! Warning !!! Warning
Manually selling a pair on the exchange will not be detected by the bot and it will try to sell anyway. Whenever possible, /forceexit <tradeid> should be used to accomplish the same thing. Manually selling a pair on the exchange will not be detected by the bot and it will try to sell anyway. Whenever possible, /forceexit <tradeid> should be used to accomplish the same thing.
@ -69,7 +88,7 @@ SET is_open=0,
WHERE id=<trade_ID_to_update>; WHERE id=<trade_ID_to_update>;
``` ```
### Example #### Example
```sql ```sql
UPDATE trades UPDATE trades
@ -82,7 +101,7 @@ SET is_open=0,
WHERE id=31; WHERE id=31;
``` ```
## Remove trade from the database ### Remove trade from the database
!!! Tip "Use RPC Methods to delete trades" !!! Tip "Use RPC Methods to delete trades"
Consider using `/delete <tradeid>` via telegram or rest API. That's the recommended way to delete trades. Consider using `/delete <tradeid>` via telegram or rest API. That's the recommended way to delete trades.
@ -100,39 +119,3 @@ DELETE FROM trades WHERE id = 31;
!!! Warning !!! Warning
This will remove this trade from the database. Please make sure you got the correct id and **NEVER** run this query without the `where` clause. This will remove this trade from the database. Please make sure you got the correct id and **NEVER** run this query without the `where` clause.
## Use a different database system
Freqtrade is using SQLAlchemy, which supports multiple different database systems. As such, a multitude of database systems should be supported.
Freqtrade does not depend or install any additional database driver. Please refer to the [SQLAlchemy docs](https://docs.sqlalchemy.org/en/14/core/engines.html#database-urls) on installation instructions for the respective database systems.
The following systems have been tested and are known to work with freqtrade:
* sqlite (default)
* PostgreSQL
* MariaDB
!!! Warning
By using one of the below database systems, you acknowledge that you know how to manage such a system. The freqtrade team will not provide any support with setup or maintenance (or backups) of the below database systems.
### PostgreSQL
Installation:
`pip install psycopg2-binary`
Usage:
`... --db-url postgresql+psycopg2://<username>:<password>@localhost:5432/<database>`
Freqtrade will automatically create the tables necessary upon startup.
If you're running different instances of Freqtrade, you must either setup one database per Instance or use different users / schemas for your connections.
### MariaDB / MySQL
Freqtrade supports MariaDB by using SQLAlchemy, which supports multiple different database systems.
Installation:
`pip install pymysql`
Usage:
`... --db-url mysql+pymysql://<username>:<password>@localhost:3306/<database>`


@ -488,7 +488,7 @@ freqtrade test-pairlist --config config.json --quote USDT BTC
`freqtrade convert-db` can be used to convert your database from one system to another (sqlite -> postgres, postgres -> other postgres), migrating all trades, orders and Pairlocks. `freqtrade convert-db` can be used to convert your database from one system to another (sqlite -> postgres, postgres -> other postgres), migrating all trades, orders and Pairlocks.
Please refer to the [SQL cheatsheet](sql_cheatsheet.md#use-a-different-database-system) to learn about requirements for different database systems. Please refer to the [corresponding documentation](advanced-setup.md#use-a-different-database-system) to learn about requirements for different database systems.
``` ```
usage: freqtrade convert-db [-h] [--db-url PATH] [--db-url-from PATH] usage: freqtrade convert-db [-h] [--db-url PATH] [--db-url-from PATH]


@ -16,6 +16,7 @@ from freqtrade.exceptions import ConfigurationError
from freqtrade.exchange import timeframe_to_minutes from freqtrade.exchange import timeframe_to_minutes
from freqtrade.plugins.pairlist.pairlist_helpers import dynamic_expand_pairlist from freqtrade.plugins.pairlist.pairlist_helpers import dynamic_expand_pairlist
from freqtrade.resolvers import ExchangeResolver from freqtrade.resolvers import ExchangeResolver
from freqtrade.util import print_rich_table
from freqtrade.util.migrations import migrate_data from freqtrade.util.migrations import migrate_data
@ -119,8 +120,6 @@ def start_list_data(args: Dict[str, Any]) -> None:
config = setup_utils_configuration(args, RunMode.UTIL_NO_EXCHANGE) config = setup_utils_configuration(args, RunMode.UTIL_NO_EXCHANGE)
from tabulate import tabulate
from freqtrade.data.history import get_datahandler from freqtrade.data.history import get_datahandler
dhc = get_datahandler(config["datadir"], config["dataformat_ohlcv"]) dhc = get_datahandler(config["datadir"], config["dataformat_ohlcv"])
@ -131,8 +130,7 @@ def start_list_data(args: Dict[str, Any]) -> None:
if args["pairs"]: if args["pairs"]:
paircombs = [comb for comb in paircombs if comb[0] in args["pairs"]] paircombs = [comb for comb in paircombs if comb[0] in args["pairs"]]
title = f"Found {len(paircombs)} pair / timeframe combinations."
print(f"Found {len(paircombs)} pair / timeframe combinations.")
if not config.get("show_timerange"): if not config.get("show_timerange"):
groupedpair = defaultdict(list) groupedpair = defaultdict(list)
for pair, timeframe, candle_type in sorted( for pair, timeframe, candle_type in sorted(
@ -141,40 +139,35 @@ def start_list_data(args: Dict[str, Any]) -> None:
groupedpair[(pair, candle_type)].append(timeframe) groupedpair[(pair, candle_type)].append(timeframe)
if groupedpair: if groupedpair:
print( print_rich_table(
tabulate( [
[ (pair, ", ".join(timeframes), candle_type)
(pair, ", ".join(timeframes), candle_type) for (pair, candle_type), timeframes in groupedpair.items()
for (pair, candle_type), timeframes in groupedpair.items() ],
], ("Pair", "Timeframe", "Type"),
headers=("Pair", "Timeframe", "Type"), title,
tablefmt="psql", table_kwargs={"min_width": 50},
stralign="right",
)
) )
else: else:
paircombs1 = [ paircombs1 = [
(pair, timeframe, candle_type, *dhc.ohlcv_data_min_max(pair, timeframe, candle_type)) (pair, timeframe, candle_type, *dhc.ohlcv_data_min_max(pair, timeframe, candle_type))
for pair, timeframe, candle_type in paircombs for pair, timeframe, candle_type in paircombs
] ]
print_rich_table(
print( [
tabulate( (
[ pair,
( timeframe,
pair, candle_type,
timeframe, start.strftime(DATETIME_PRINT_FORMAT),
candle_type, end.strftime(DATETIME_PRINT_FORMAT),
start.strftime(DATETIME_PRINT_FORMAT), str(length),
end.strftime(DATETIME_PRINT_FORMAT), )
length, for pair, timeframe, candle_type, start, end, length in sorted(
) paircombs1, key=lambda x: (x[0], timeframe_to_minutes(x[1]), x[2])
for pair, timeframe, candle_type, start, end, length in sorted( )
paircombs1, key=lambda x: (x[0], timeframe_to_minutes(x[1]), x[2]) ],
) ("Pair", "Timeframe", "Type", "From", "To", "Candles"),
], summary=title,
headers=("Pair", "Timeframe", "Type", "From", "To", "Candles"), table_kwargs={"min_width": 50},
tablefmt="psql",
stralign="right",
)
) )


@ -2,8 +2,6 @@ import logging
from operator import itemgetter from operator import itemgetter
from typing import Any, Dict from typing import Any, Dict
from colorama import init as colorama_init
from freqtrade.configuration import setup_utils_configuration from freqtrade.configuration import setup_utils_configuration
from freqtrade.data.btanalysis import get_latest_hyperopt_file from freqtrade.data.btanalysis import get_latest_hyperopt_file
from freqtrade.enums import RunMode from freqtrade.enums import RunMode
@ -18,6 +16,7 @@ def start_hyperopt_list(args: Dict[str, Any]) -> None:
""" """
List hyperopt epochs previously evaluated List hyperopt epochs previously evaluated
""" """
from freqtrade.optimize.hyperopt_output import HyperoptOutput
from freqtrade.optimize.hyperopt_tools import HyperoptTools from freqtrade.optimize.hyperopt_tools import HyperoptTools
config = setup_utils_configuration(args, RunMode.UTIL_NO_EXCHANGE) config = setup_utils_configuration(args, RunMode.UTIL_NO_EXCHANGE)
@ -35,21 +34,17 @@ def start_hyperopt_list(args: Dict[str, Any]) -> None:
# Previous evaluations # Previous evaluations
epochs, total_epochs = HyperoptTools.load_filtered_results(results_file, config) epochs, total_epochs = HyperoptTools.load_filtered_results(results_file, config)
if print_colorized:
colorama_init(autoreset=True)
if not export_csv: if not export_csv:
try: try:
print( h_out = HyperoptOutput()
HyperoptTools.get_result_table( h_out.add_data(
config, config,
epochs, epochs,
total_epochs, total_epochs,
not config.get("hyperopt_list_best", False), not config.get("hyperopt_list_best", False),
print_colorized,
0,
)
) )
h_out.print(print_colorized=print_colorized)
except KeyboardInterrupt: except KeyboardInterrupt:
print("User interrupted..") print("User interrupted..")


@ -4,9 +4,9 @@ import sys
from typing import Any, Dict, List, Union from typing import Any, Dict, List, Union
import rapidjson import rapidjson
from colorama import Fore, Style from rich.console import Console
from colorama import init as colorama_init from rich.table import Table
from tabulate import tabulate from rich.text import Text
from freqtrade.configuration import setup_utils_configuration from freqtrade.configuration import setup_utils_configuration
from freqtrade.enums import RunMode from freqtrade.enums import RunMode
@ -14,7 +14,8 @@ from freqtrade.exceptions import ConfigurationError, OperationalException
from freqtrade.exchange import list_available_exchanges, market_is_active from freqtrade.exchange import list_available_exchanges, market_is_active
from freqtrade.misc import parse_db_uri_for_logging, plural from freqtrade.misc import parse_db_uri_for_logging, plural
from freqtrade.resolvers import ExchangeResolver, StrategyResolver from freqtrade.resolvers import ExchangeResolver, StrategyResolver
from freqtrade.types import ValidExchangesType from freqtrade.types.valid_exchanges_type import ValidExchangesType
from freqtrade.util import print_rich_table
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -26,72 +27,69 @@ def start_list_exchanges(args: Dict[str, Any]) -> None:
:param args: Cli args from Arguments() :param args: Cli args from Arguments()
:return: None :return: None
""" """
exchanges = list_available_exchanges(args["list_exchanges_all"]) available_exchanges: List[ValidExchangesType] = list_available_exchanges(
args["list_exchanges_all"]
)
if args["print_one_column"]: if args["print_one_column"]:
print("\n".join([e["name"] for e in exchanges])) print("\n".join([e["name"] for e in available_exchanges]))
else: else:
headers = { if args["list_exchanges_all"]:
"name": "Exchange name", title = (
"supported": "Supported", f"All exchanges supported by the ccxt library "
"trade_modes": "Markets", f"({len(available_exchanges)} exchanges):"
"comment": "Reason", )
} else:
headers.update({"valid": "Valid"} if args["list_exchanges_all"] else {}) available_exchanges = [e for e in available_exchanges if e["valid"] is not False]
title = f"Exchanges available for Freqtrade ({len(available_exchanges)} exchanges):"
def build_entry(exchange: ValidExchangesType, valid: bool): table = Table(title=title)
valid_entry = {"valid": exchange["valid"]} if valid else {}
result: Dict[str, Union[str, bool]] = { table.add_column("Exchange Name")
"name": exchange["name"], table.add_column("Markets")
**valid_entry, table.add_column("Reason")
"supported": "Official" if exchange["supported"] else "",
"trade_modes": ("DEX: " if exchange["dex"] else "") for exchange in available_exchanges:
+ ", ".join( name = Text(exchange["name"])
(f"{a['margin_mode']} " if a["margin_mode"] else "") + a["trading_mode"] if exchange["supported"]:
name.append(" (Official)", style="italic")
name.stylize("green bold")
trade_modes = Text(
", ".join(
(f"{a.get('margin_mode', '')} {a['trading_mode']}").lstrip()
for a in exchange["trade_modes"] for a in exchange["trade_modes"]
), ),
"comment": exchange["comment"], style="",
}
return result
if args["list_exchanges_all"]:
exchanges = [build_entry(e, True) for e in exchanges]
print(f"All exchanges supported by the ccxt library ({len(exchanges)} exchanges):")
else:
exchanges = [build_entry(e, False) for e in exchanges if e["valid"] is not False]
print(f"Exchanges available for Freqtrade ({len(exchanges)} exchanges):")
print(
tabulate(
exchanges,
headers=headers,
) )
) if exchange["dex"]:
trade_modes = Text("DEX: ") + trade_modes
trade_modes.stylize("bold", 0, 3)
table.add_row(
name,
trade_modes,
exchange["comment"],
style=None if exchange["valid"] else "red",
)
# table.add_row(*[exchange[header] for header in headers])
console = Console()
console.print(table)
def _print_objs_tabular(objs: List, print_colorized: bool) -> None: def _print_objs_tabular(objs: List, print_colorized: bool) -> None:
if print_colorized:
colorama_init(autoreset=True)
red = Fore.RED
yellow = Fore.YELLOW
reset = Style.RESET_ALL
else:
red = ""
yellow = ""
reset = ""
names = [s["name"] for s in objs] names = [s["name"] for s in objs]
objs_to_print = [ objs_to_print: List[Dict[str, Union[Text, str]]] = [
{ {
"name": s["name"] if s["name"] else "--", "name": Text(s["name"] if s["name"] else "--"),
"location": s["location_rel"], "location": s["location_rel"],
"status": ( "status": (
red + "LOAD FAILED" + reset Text("LOAD FAILED", style="bold red")
if s["class"] is None if s["class"] is None
else "OK" else Text("OK", style="bold green")
if names.count(s["name"]) == 1 if names.count(s["name"]) == 1
else yellow + "DUPLICATE NAME" + reset else Text("DUPLICATE NAME", style="bold yellow")
), ),
} }
for s in objs for s in objs
@ -101,11 +99,23 @@ def _print_objs_tabular(objs: List, print_colorized: bool) -> None:
objs_to_print[idx].update( objs_to_print[idx].update(
{ {
"hyperoptable": "Yes" if s["hyperoptable"]["count"] > 0 else "No", "hyperoptable": "Yes" if s["hyperoptable"]["count"] > 0 else "No",
"buy-Params": len(s["hyperoptable"].get("buy", [])), "buy-Params": str(len(s["hyperoptable"].get("buy", []))),
"sell-Params": len(s["hyperoptable"].get("sell", [])), "sell-Params": str(len(s["hyperoptable"].get("sell", []))),
} }
) )
print(tabulate(objs_to_print, headers="keys", tablefmt="psql", stralign="right")) table = Table()
for header in objs_to_print[0].keys():
table.add_column(header.capitalize(), justify="right")
for row in objs_to_print:
table.add_row(*[row[header] for header in objs_to_print[0].keys()])
console = Console(
color_system="auto" if print_colorized else None,
width=200 if "pytest" in sys.modules else None,
)
console.print(table)
def start_list_strategies(args: Dict[str, Any]) -> None: def start_list_strategies(args: Dict[str, Any]) -> None:
@ -270,9 +280,7 @@ def start_list_markets(args: Dict[str, Any], pairs_only: bool = False) -> None:
writer.writeheader() writer.writeheader()
writer.writerows(tabular_data) writer.writerows(tabular_data)
else: else:
# print data as a table, with the human-readable summary print_rich_table(tabular_data, headers, summary_str)
print(f"{summary_str}:")
print(tabulate(tabular_data, headers="keys", tablefmt="psql", stralign="right"))
elif not ( elif not (
args.get("print_one_column", False) args.get("print_one_column", False)
or args.get("list_pairs_print_json", False) or args.get("list_pairs_print_json", False)

File diff suppressed because it is too large.


@ -14,9 +14,13 @@ def sanitize_config(config: Config, *, show_sensitive: bool = False) -> Config:
return config return config
keys_to_remove = [ keys_to_remove = [
"exchange.key", "exchange.key",
"exchange.apiKey",
"exchange.secret", "exchange.secret",
"exchange.password", "exchange.password",
"exchange.uid", "exchange.uid",
"exchange.accountId",
"exchange.walletAddress",
"exchange.privateKey",
"telegram.token", "telegram.token",
"telegram.chat_id", "telegram.chat_id",
"discord.webhook_url", "discord.webhook_url",


@ -6,8 +6,16 @@ from typing import Any, Dict
from jsonschema import Draft4Validator, validators from jsonschema import Draft4Validator, validators
from jsonschema.exceptions import ValidationError, best_match from jsonschema.exceptions import ValidationError, best_match
from freqtrade import constants from freqtrade.configuration.config_schema import (
CONF_SCHEMA,
SCHEMA_BACKTEST_REQUIRED,
SCHEMA_BACKTEST_REQUIRED_FINAL,
SCHEMA_MINIMAL_REQUIRED,
SCHEMA_MINIMAL_WEBSERVER,
SCHEMA_TRADE_REQUIRED,
)
from freqtrade.configuration.deprecated_settings import process_deprecated_setting from freqtrade.configuration.deprecated_settings import process_deprecated_setting
from freqtrade.constants import UNLIMITED_STAKE_AMOUNT
from freqtrade.enums import RunMode, TradingMode from freqtrade.enums import RunMode, TradingMode
from freqtrade.exceptions import ConfigurationError from freqtrade.exceptions import ConfigurationError
@ -41,18 +49,18 @@ def validate_config_schema(conf: Dict[str, Any], preliminary: bool = False) -> D
:param conf: Config in JSON format :param conf: Config in JSON format
:return: Returns the config if valid, otherwise throw an exception :return: Returns the config if valid, otherwise throw an exception
""" """
conf_schema = deepcopy(constants.CONF_SCHEMA) conf_schema = deepcopy(CONF_SCHEMA)
if conf.get("runmode", RunMode.OTHER) in (RunMode.DRY_RUN, RunMode.LIVE): if conf.get("runmode", RunMode.OTHER) in (RunMode.DRY_RUN, RunMode.LIVE):
conf_schema["required"] = constants.SCHEMA_TRADE_REQUIRED conf_schema["required"] = SCHEMA_TRADE_REQUIRED
elif conf.get("runmode", RunMode.OTHER) in (RunMode.BACKTEST, RunMode.HYPEROPT): elif conf.get("runmode", RunMode.OTHER) in (RunMode.BACKTEST, RunMode.HYPEROPT):
if preliminary: if preliminary:
conf_schema["required"] = constants.SCHEMA_BACKTEST_REQUIRED conf_schema["required"] = SCHEMA_BACKTEST_REQUIRED
else: else:
conf_schema["required"] = constants.SCHEMA_BACKTEST_REQUIRED_FINAL conf_schema["required"] = SCHEMA_BACKTEST_REQUIRED_FINAL
elif conf.get("runmode", RunMode.OTHER) == RunMode.WEBSERVER: elif conf.get("runmode", RunMode.OTHER) == RunMode.WEBSERVER:
conf_schema["required"] = constants.SCHEMA_MINIMAL_WEBSERVER conf_schema["required"] = SCHEMA_MINIMAL_WEBSERVER
else: else:
conf_schema["required"] = constants.SCHEMA_MINIMAL_REQUIRED conf_schema["required"] = SCHEMA_MINIMAL_REQUIRED
try: try:
FreqtradeValidator(conf_schema).validate(conf) FreqtradeValidator(conf_schema).validate(conf)
return conf return conf
@ -83,6 +91,7 @@ def validate_config_consistency(conf: Dict[str, Any], *, preliminary: bool = Fal
_validate_freqai_include_timeframes(conf, preliminary=preliminary) _validate_freqai_include_timeframes(conf, preliminary=preliminary)
_validate_consumers(conf) _validate_consumers(conf)
validate_migrated_strategy_settings(conf) validate_migrated_strategy_settings(conf)
_validate_orderflow(conf)
# validate configuration before returning # validate configuration before returning
logger.info("Validating configuration ...") logger.info("Validating configuration ...")
@ -97,7 +106,7 @@ def _validate_unlimited_amount(conf: Dict[str, Any]) -> None:
if ( if (
not conf.get("edge", {}).get("enabled") not conf.get("edge", {}).get("enabled")
and conf.get("max_open_trades") == float("inf") and conf.get("max_open_trades") == float("inf")
and conf.get("stake_amount") == constants.UNLIMITED_STAKE_AMOUNT and conf.get("stake_amount") == UNLIMITED_STAKE_AMOUNT
): ):
raise ConfigurationError("`max_open_trades` and `stake_amount` cannot both be unlimited.") raise ConfigurationError("`max_open_trades` and `stake_amount` cannot both be unlimited.")
@ -421,6 +430,14 @@ def _validate_consumers(conf: Dict[str, Any]) -> None:
) )
def _validate_orderflow(conf: Dict[str, Any]) -> None:
    if conf.get("exchange", {}).get("use_public_trades"):
        if "orderflow" not in conf:
            raise ConfigurationError(
                "Orderflow is a required configuration key when using public trades."
            )
def _strategy_settings(conf: Dict[str, Any]) -> None: def _strategy_settings(conf: Dict[str, Any]) -> None:
process_deprecated_setting(conf, None, "use_sell_signal", None, "use_exit_signal") process_deprecated_setting(conf, None, "use_sell_signal", None, "use_exit_signal")
process_deprecated_setting(conf, None, "sell_profit_only", None, "exit_profit_only") process_deprecated_setting(conf, None, "sell_profit_only", None, "exit_profit_only")


@ -4,9 +4,9 @@
bot constants bot constants
""" """
from typing import Any, Dict, List, Literal, Tuple from typing import Any, Dict, List, Literal, Optional, Tuple
from freqtrade.enums import CandleType, PriceType, RPCMessageType from freqtrade.enums import CandleType, PriceType
DOCS_LINK = "https://www.freqtrade.io/en/stable" DOCS_LINK = "https://www.freqtrade.io/en/stable"
@ -69,6 +69,7 @@ DEFAULT_DATAFRAME_COLUMNS = ["date", "open", "high", "low", "close", "volume"]
# Don't modify sequence of DEFAULT_TRADES_COLUMNS # Don't modify sequence of DEFAULT_TRADES_COLUMNS
# it has wide consequences for stored trades files # it has wide consequences for stored trades files
DEFAULT_TRADES_COLUMNS = ["timestamp", "id", "type", "side", "price", "amount", "cost"] DEFAULT_TRADES_COLUMNS = ["timestamp", "id", "type", "side", "price", "amount", "cost"]
DEFAULT_ORDERFLOW_COLUMNS = ["level", "bid", "ask", "delta"]
TRADES_DTYPES = { TRADES_DTYPES = {
"timestamp": "int64", "timestamp": "int64",
"id": "str", "id": "str",
@ -172,586 +173,6 @@ MINIMAL_CONFIG = {
}, },
} }
__MESSAGE_TYPE_DICT: Dict[str, Dict[str, str]] = {x: {"type": "object"} for x in RPCMessageType}
# Required json-schema for user specified config
CONF_SCHEMA = {
"type": "object",
"properties": {
"max_open_trades": {"type": ["integer", "number"], "minimum": -1},
"new_pairs_days": {"type": "integer", "default": 30},
"timeframe": {"type": "string"},
"stake_currency": {"type": "string"},
"stake_amount": {
"type": ["number", "string"],
"minimum": 0.0001,
"pattern": UNLIMITED_STAKE_AMOUNT,
},
"tradable_balance_ratio": {"type": "number", "minimum": 0.0, "maximum": 1, "default": 0.99},
"available_capital": {
"type": "number",
"minimum": 0,
},
"amend_last_stake_amount": {"type": "boolean", "default": False},
"last_stake_amount_min_ratio": {
"type": "number",
"minimum": 0.0,
"maximum": 1.0,
"default": 0.5,
},
"fiat_display_currency": {"type": "string", "enum": SUPPORTED_FIAT},
"dry_run": {"type": "boolean"},
"dry_run_wallet": {"type": "number", "default": DRY_RUN_WALLET},
"cancel_open_orders_on_exit": {"type": "boolean", "default": False},
"process_only_new_candles": {"type": "boolean"},
"minimal_roi": {
"type": "object",
"patternProperties": {"^[0-9.]+$": {"type": "number"}},
},
"amount_reserve_percent": {"type": "number", "minimum": 0.0, "maximum": 0.5},
"stoploss": {"type": "number", "maximum": 0, "exclusiveMaximum": True},
"trailing_stop": {"type": "boolean"},
"trailing_stop_positive": {"type": "number", "minimum": 0, "maximum": 1},
"trailing_stop_positive_offset": {"type": "number", "minimum": 0, "maximum": 1},
"trailing_only_offset_is_reached": {"type": "boolean"},
"use_exit_signal": {"type": "boolean"},
"exit_profit_only": {"type": "boolean"},
"exit_profit_offset": {"type": "number"},
"fee": {"type": "number", "minimum": 0, "maximum": 0.1},
"ignore_roi_if_entry_signal": {"type": "boolean"},
"ignore_buying_expired_candle_after": {"type": "number"},
"trading_mode": {"type": "string", "enum": TRADING_MODES},
"margin_mode": {"type": "string", "enum": MARGIN_MODES},
"reduce_df_footprint": {"type": "boolean", "default": False},
"minimum_trade_amount": {"type": "number", "default": 10},
"targeted_trade_amount": {"type": "number", "default": 20},
"lookahead_analysis_exportfilename": {"type": "string"},
"startup_candle": {
"type": "array",
"uniqueItems": True,
"default": [199, 399, 499, 999, 1999],
},
"liquidation_buffer": {"type": "number", "minimum": 0.0, "maximum": 0.99},
"backtest_breakdown": {
"type": "array",
"items": {"type": "string", "enum": BACKTEST_BREAKDOWNS},
},
"bot_name": {"type": "string"},
"unfilledtimeout": {
"type": "object",
"properties": {
"entry": {"type": "number", "minimum": 1},
"exit": {"type": "number", "minimum": 1},
"exit_timeout_count": {"type": "number", "minimum": 0, "default": 0},
"unit": {"type": "string", "enum": TIMEOUT_UNITS, "default": "minutes"},
},
},
"entry_pricing": {
"type": "object",
"properties": {
"price_last_balance": {
"type": "number",
"minimum": 0,
"maximum": 1,
"exclusiveMaximum": False,
},
"price_side": {"type": "string", "enum": PRICING_SIDES, "default": "same"},
"use_order_book": {"type": "boolean"},
"order_book_top": {
"type": "integer",
"minimum": 1,
"maximum": 50,
},
"check_depth_of_market": {
"type": "object",
"properties": {
"enabled": {"type": "boolean"},
"bids_to_ask_delta": {"type": "number", "minimum": 0},
},
},
},
"required": ["price_side"],
},
"exit_pricing": {
"type": "object",
"properties": {
"price_side": {"type": "string", "enum": PRICING_SIDES, "default": "same"},
"price_last_balance": {
"type": "number",
"minimum": 0,
"maximum": 1,
"exclusiveMaximum": False,
},
"use_order_book": {"type": "boolean"},
"order_book_top": {
"type": "integer",
"minimum": 1,
"maximum": 50,
},
},
"required": ["price_side"],
},
"custom_price_max_distance_ratio": {"type": "number", "minimum": 0.0},
"order_types": {
"type": "object",
"properties": {
"entry": {"type": "string", "enum": ORDERTYPE_POSSIBILITIES},
"exit": {"type": "string", "enum": ORDERTYPE_POSSIBILITIES},
"force_exit": {"type": "string", "enum": ORDERTYPE_POSSIBILITIES},
"force_entry": {"type": "string", "enum": ORDERTYPE_POSSIBILITIES},
"emergency_exit": {
"type": "string",
"enum": ORDERTYPE_POSSIBILITIES,
"default": "market",
},
"stoploss": {"type": "string", "enum": ORDERTYPE_POSSIBILITIES},
"stoploss_on_exchange": {"type": "boolean"},
"stoploss_price_type": {"type": "string", "enum": STOPLOSS_PRICE_TYPES},
"stoploss_on_exchange_interval": {"type": "number"},
"stoploss_on_exchange_limit_ratio": {
"type": "number",
"minimum": 0.0,
"maximum": 1.0,
},
},
"required": ["entry", "exit", "stoploss", "stoploss_on_exchange"],
},
"order_time_in_force": {
"type": "object",
"properties": {
"entry": {"type": "string", "enum": ORDERTIF_POSSIBILITIES},
"exit": {"type": "string", "enum": ORDERTIF_POSSIBILITIES},
},
"required": REQUIRED_ORDERTIF,
},
"coingecko": {
"type": "object",
"properties": {
"is_demo": {"type": "boolean", "default": True},
"api_key": {"type": "string"},
},
"required": ["is_demo", "api_key"],
},
"exchange": {"$ref": "#/definitions/exchange"},
"edge": {"$ref": "#/definitions/edge"},
"freqai": {"$ref": "#/definitions/freqai"},
"external_message_consumer": {"$ref": "#/definitions/external_message_consumer"},
"experimental": {
"type": "object",
"properties": {"block_bad_exchanges": {"type": "boolean"}},
},
"pairlists": {
"type": "array",
"items": {
"type": "object",
"properties": {
"method": {"type": "string", "enum": AVAILABLE_PAIRLISTS},
},
"required": ["method"],
},
},
"protections": {
"type": "array",
"items": {
"type": "object",
"properties": {
"method": {"type": "string", "enum": AVAILABLE_PROTECTIONS},
"stop_duration": {"type": "number", "minimum": 0.0},
"stop_duration_candles": {"type": "number", "minimum": 0},
"trade_limit": {"type": "number", "minimum": 1},
"lookback_period": {"type": "number", "minimum": 1},
"lookback_period_candles": {"type": "number", "minimum": 1},
},
"required": ["method"],
},
},
"telegram": {
"type": "object",
"properties": {
"enabled": {"type": "boolean"},
"token": {"type": "string"},
"chat_id": {"type": "string"},
"allow_custom_messages": {"type": "boolean", "default": True},
"balance_dust_level": {"type": "number", "minimum": 0.0},
"notification_settings": {
"type": "object",
"default": {},
"properties": {
"status": {"type": "string", "enum": TELEGRAM_SETTING_OPTIONS},
"warning": {"type": "string", "enum": TELEGRAM_SETTING_OPTIONS},
"startup": {"type": "string", "enum": TELEGRAM_SETTING_OPTIONS},
"entry": {"type": "string", "enum": TELEGRAM_SETTING_OPTIONS},
"entry_fill": {
"type": "string",
"enum": TELEGRAM_SETTING_OPTIONS,
"default": "off",
},
"entry_cancel": {
"type": "string",
"enum": TELEGRAM_SETTING_OPTIONS,
},
"exit": {
"type": ["string", "object"],
"additionalProperties": {
"type": "string",
"enum": TELEGRAM_SETTING_OPTIONS,
},
},
"exit_fill": {
"type": "string",
"enum": TELEGRAM_SETTING_OPTIONS,
"default": "on",
},
"exit_cancel": {"type": "string", "enum": TELEGRAM_SETTING_OPTIONS},
"protection_trigger": {
"type": "string",
"enum": TELEGRAM_SETTING_OPTIONS,
"default": "on",
},
"protection_trigger_global": {
"type": "string",
"enum": TELEGRAM_SETTING_OPTIONS,
"default": "on",
},
"show_candle": {
"type": "string",
"enum": ["off", "ohlc"],
"default": "off",
},
"strategy_msg": {
"type": "string",
"enum": TELEGRAM_SETTING_OPTIONS,
"default": "on",
},
},
},
"reload": {"type": "boolean"},
},
"required": ["enabled", "token", "chat_id"],
},
"webhook": {
"type": "object",
"properties": {
"enabled": {"type": "boolean"},
"url": {"type": "string"},
"format": {"type": "string", "enum": WEBHOOK_FORMAT_OPTIONS, "default": "form"},
"retries": {"type": "integer", "minimum": 0},
"retry_delay": {"type": "number", "minimum": 0},
**__MESSAGE_TYPE_DICT,
# **{x: {'type': 'object'} for x in RPCMessageType},
# Below -> Deprecated
"webhookentry": {"type": "object"},
"webhookentrycancel": {"type": "object"},
"webhookentryfill": {"type": "object"},
"webhookexit": {"type": "object"},
"webhookexitcancel": {"type": "object"},
"webhookexitfill": {"type": "object"},
"webhookstatus": {"type": "object"},
},
},
"discord": {
"type": "object",
"properties": {
"enabled": {"type": "boolean"},
"webhook_url": {"type": "string"},
"exit_fill": {
"type": "array",
"items": {"type": "object"},
"default": [
{"Trade ID": "{trade_id}"},
{"Exchange": "{exchange}"},
{"Pair": "{pair}"},
{"Direction": "{direction}"},
{"Open rate": "{open_rate}"},
{"Close rate": "{close_rate}"},
{"Amount": "{amount}"},
{"Open date": "{open_date:%Y-%m-%d %H:%M:%S}"},
{"Close date": "{close_date:%Y-%m-%d %H:%M:%S}"},
{"Profit": "{profit_amount} {stake_currency}"},
{"Profitability": "{profit_ratio:.2%}"},
{"Enter tag": "{enter_tag}"},
{"Exit Reason": "{exit_reason}"},
{"Strategy": "{strategy}"},
{"Timeframe": "{timeframe}"},
],
},
"entry_fill": {
"type": "array",
"items": {"type": "object"},
"default": [
{"Trade ID": "{trade_id}"},
{"Exchange": "{exchange}"},
{"Pair": "{pair}"},
{"Direction": "{direction}"},
{"Open rate": "{open_rate}"},
{"Amount": "{amount}"},
{"Open date": "{open_date:%Y-%m-%d %H:%M:%S}"},
{"Enter tag": "{enter_tag}"},
{"Strategy": "{strategy} {timeframe}"},
],
},
},
},
"api_server": {
"type": "object",
"properties": {
"enabled": {"type": "boolean"},
"listen_ip_address": {"format": "ipv4"},
"listen_port": {"type": "integer", "minimum": 1024, "maximum": 65535},
"username": {"type": "string"},
"password": {"type": "string"},
"ws_token": {"type": ["string", "array"], "items": {"type": "string"}},
"jwt_secret_key": {"type": "string"},
"CORS_origins": {"type": "array", "items": {"type": "string"}},
"verbosity": {"type": "string", "enum": ["error", "info"]},
},
"required": ["enabled", "listen_ip_address", "listen_port", "username", "password"],
},
"db_url": {"type": "string"},
"export": {"type": "string", "enum": EXPORT_OPTIONS, "default": "trades"},
"disableparamexport": {"type": "boolean"},
"initial_state": {"type": "string", "enum": ["running", "stopped"]},
"force_entry_enable": {"type": "boolean"},
"disable_dataframe_checks": {"type": "boolean"},
"internals": {
"type": "object",
"default": {},
"properties": {
"process_throttle_secs": {"type": "integer"},
"interval": {"type": "integer"},
"sd_notify": {"type": "boolean"},
},
},
"dataformat_ohlcv": {
"type": "string",
"enum": AVAILABLE_DATAHANDLERS,
"default": "feather",
},
"dataformat_trades": {
"type": "string",
"enum": AVAILABLE_DATAHANDLERS,
"default": "feather",
},
"position_adjustment_enable": {"type": "boolean"},
"max_entry_position_adjustment": {"type": ["integer", "number"], "minimum": -1},
},
"definitions": {
"exchange": {
"type": "object",
"properties": {
"name": {"type": "string"},
"enable_ws": {"type": "boolean", "default": True},
"key": {"type": "string", "default": ""},
"secret": {"type": "string", "default": ""},
"password": {"type": "string", "default": ""},
"uid": {"type": "string"},
"pair_whitelist": {
"type": "array",
"items": {
"type": "string",
},
"uniqueItems": True,
},
"pair_blacklist": {
"type": "array",
"items": {
"type": "string",
},
"uniqueItems": True,
},
"unknown_fee_rate": {"type": "number"},
"outdated_offset": {"type": "integer", "minimum": 1},
"markets_refresh_interval": {"type": "integer"},
"ccxt_config": {"type": "object"},
"ccxt_async_config": {"type": "object"},
},
"required": ["name"],
},
"edge": {
"type": "object",
"properties": {
"enabled": {"type": "boolean"},
"process_throttle_secs": {"type": "integer", "minimum": 600},
"calculate_since_number_of_days": {"type": "integer"},
"allowed_risk": {"type": "number"},
"stoploss_range_min": {"type": "number"},
"stoploss_range_max": {"type": "number"},
"stoploss_range_step": {"type": "number"},
"minimum_winrate": {"type": "number"},
"minimum_expectancy": {"type": "number"},
"min_trade_number": {"type": "number"},
"max_trade_duration_minute": {"type": "integer"},
"remove_pumps": {"type": "boolean"},
},
"required": ["process_throttle_secs", "allowed_risk"],
},
"external_message_consumer": {
"type": "object",
"properties": {
"enabled": {"type": "boolean", "default": False},
"producers": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {"type": "string"},
"host": {"type": "string"},
"port": {
"type": "integer",
"default": 8080,
"minimum": 0,
"maximum": 65535,
},
"secure": {"type": "boolean", "default": False},
"ws_token": {"type": "string"},
},
"required": ["name", "host", "ws_token"],
},
},
"wait_timeout": {"type": "integer", "minimum": 0},
"sleep_time": {"type": "integer", "minimum": 0},
"ping_timeout": {"type": "integer", "minimum": 0},
"remove_entry_exit_signals": {"type": "boolean", "default": False},
"initial_candle_limit": {
"type": "integer",
"minimum": 0,
"maximum": 1500,
"default": 1500,
},
"message_size_limit": { # In megabytes
"type": "integer",
"minimum": 1,
"maximum": 20,
"default": 8,
},
},
"required": ["producers"],
},
"freqai": {
"type": "object",
"properties": {
"enabled": {"type": "boolean", "default": False},
"keras": {"type": "boolean", "default": False},
"write_metrics_to_disk": {"type": "boolean", "default": False},
"purge_old_models": {"type": ["boolean", "number"], "default": 2},
"conv_width": {"type": "integer", "default": 1},
"train_period_days": {"type": "integer", "default": 0},
"backtest_period_days": {"type": "number", "default": 7},
"identifier": {"type": "string", "default": "example"},
"feature_parameters": {
"type": "object",
"properties": {
"include_corr_pairlist": {"type": "array"},
"include_timeframes": {"type": "array"},
"label_period_candles": {"type": "integer"},
"include_shifted_candles": {"type": "integer", "default": 0},
"DI_threshold": {"type": "number", "default": 0},
"weight_factor": {"type": "number", "default": 0},
"principal_component_analysis": {"type": "boolean", "default": False},
"use_SVM_to_remove_outliers": {"type": "boolean", "default": False},
"plot_feature_importances": {"type": "integer", "default": 0},
"svm_params": {
"type": "object",
"properties": {
"shuffle": {"type": "boolean", "default": False},
"nu": {"type": "number", "default": 0.1},
},
},
"shuffle_after_split": {"type": "boolean", "default": False},
"buffer_train_data_candles": {"type": "integer", "default": 0},
},
"required": [
"include_timeframes",
"include_corr_pairlist",
],
},
"data_split_parameters": {
"type": "object",
"properties": {
"test_size": {"type": "number"},
"random_state": {"type": "integer"},
"shuffle": {"type": "boolean", "default": False},
},
},
"model_training_parameters": {"type": "object"},
"rl_config": {
"type": "object",
"properties": {
"drop_ohlc_from_features": {"type": "boolean", "default": False},
"train_cycles": {"type": "integer"},
"max_trade_duration_candles": {"type": "integer"},
"add_state_info": {"type": "boolean", "default": False},
"max_training_drawdown_pct": {"type": "number", "default": 0.02},
"cpu_count": {"type": "integer", "default": 1},
"model_type": {"type": "string", "default": "PPO"},
"policy_type": {"type": "string", "default": "MlpPolicy"},
"net_arch": {"type": "array", "default": [128, 128]},
"randomize_starting_position": {"type": "boolean", "default": False},
"progress_bar": {"type": "boolean", "default": True},
"model_reward_parameters": {
"type": "object",
"properties": {
"rr": {"type": "number", "default": 1},
"profit_aim": {"type": "number", "default": 0.025},
},
},
},
},
},
"required": [
"enabled",
"train_period_days",
"backtest_period_days",
"identifier",
"feature_parameters",
"data_split_parameters",
],
},
},
}
SCHEMA_TRADE_REQUIRED = [
"exchange",
"timeframe",
"max_open_trades",
"stake_currency",
"stake_amount",
"tradable_balance_ratio",
"last_stake_amount_min_ratio",
"dry_run",
"dry_run_wallet",
"exit_pricing",
"entry_pricing",
"stoploss",
"minimal_roi",
"internals",
"dataformat_ohlcv",
"dataformat_trades",
]
SCHEMA_BACKTEST_REQUIRED = [
"exchange",
"stake_currency",
"stake_amount",
"dry_run_wallet",
"dataformat_ohlcv",
"dataformat_trades",
]
SCHEMA_BACKTEST_REQUIRED_FINAL = SCHEMA_BACKTEST_REQUIRED + [
"stoploss",
"minimal_roi",
"max_open_trades",
]
SCHEMA_MINIMAL_REQUIRED = [
"exchange",
"dry_run",
"dataformat_ohlcv",
"dataformat_trades",
]
SCHEMA_MINIMAL_WEBSERVER = SCHEMA_MINIMAL_REQUIRED + [
"api_server",
]
CANCEL_REASON = { CANCEL_REASON = {
"TIMEOUT": "cancelled due to timeout", "TIMEOUT": "cancelled due to timeout",
@ -772,6 +193,9 @@ ListPairsWithTimeframes = List[PairWithTimeframe]
# Type for trades list # Type for trades list
TradeList = List[List] TradeList = List[List]
# ticks, pair, timeframe, CandleType
TickWithTimeframe = Tuple[str, str, CandleType, Optional[int], Optional[int]]
ListTicksWithTimeframes = List[TickWithTimeframe]
LongShort = Literal["long", "short"] LongShort = Literal["long", "short"]
EntryExit = Literal["entry", "exit"] EntryExit = Literal["entry", "exit"]


@ -185,7 +185,7 @@ def load_and_merge_backtest_result(strategy_name: str, filename: Path, results:
""" """
bt_data = load_backtest_stats(filename) bt_data = load_backtest_stats(filename)
k: Literal["metadata", "strategy"] k: Literal["metadata", "strategy"]
for k in ("metadata", "strategy"): # type: ignore for k in ("metadata", "strategy"):
results[k][strategy_name] = bt_data[k][strategy_name] results[k][strategy_name] = bt_data[k][strategy_name]
results["metadata"][strategy_name]["filename"] = filename.stem results["metadata"][strategy_name]["filename"] = filename.stem
comparison = bt_data["strategy_comparison"] comparison = bt_data["strategy_comparison"]


@ -8,6 +8,7 @@ from freqtrade.data.converter.converter import (
trim_dataframe, trim_dataframe,
trim_dataframes, trim_dataframes,
) )
from freqtrade.data.converter.orderflow import populate_dataframe_with_trades
from freqtrade.data.converter.trade_converter import ( from freqtrade.data.converter.trade_converter import (
convert_trades_format, convert_trades_format,
convert_trades_to_ohlcv, convert_trades_to_ohlcv,
@ -30,6 +31,7 @@ __all__ = [
"trim_dataframes", "trim_dataframes",
"convert_trades_format", "convert_trades_format",
"convert_trades_to_ohlcv", "convert_trades_to_ohlcv",
"populate_dataframe_with_trades",
"trades_convert_types", "trades_convert_types",
"trades_df_remove_duplicates", "trades_df_remove_duplicates",
"trades_dict_to_list", "trades_dict_to_list",


@ -0,0 +1,295 @@
"""
Functions to convert orderflow data from public_trades
"""
import logging
import time
import typing
from collections import OrderedDict
from datetime import datetime
from typing import Tuple
import numpy as np
import pandas as pd
from freqtrade.constants import DEFAULT_ORDERFLOW_COLUMNS
from freqtrade.enums import RunMode
from freqtrade.exceptions import DependencyException
logger = logging.getLogger(__name__)
def _init_dataframe_with_trades_columns(dataframe: pd.DataFrame):
"""
Populates a dataframe with trades columns
:param dataframe: Dataframe to populate
"""
# Initialize columns with appropriate dtypes
dataframe["trades"] = np.nan
dataframe["orderflow"] = np.nan
dataframe["imbalances"] = np.nan
dataframe["stacked_imbalances_bid"] = np.nan
dataframe["stacked_imbalances_ask"] = np.nan
dataframe["max_delta"] = np.nan
dataframe["min_delta"] = np.nan
dataframe["bid"] = np.nan
dataframe["ask"] = np.nan
dataframe["delta"] = np.nan
dataframe["total_trades"] = np.nan
# Ensure the 'trades' column is of object type
dataframe["trades"] = dataframe["trades"].astype(object)
dataframe["orderflow"] = dataframe["orderflow"].astype(object)
dataframe["imbalances"] = dataframe["imbalances"].astype(object)
dataframe["stacked_imbalances_bid"] = dataframe["stacked_imbalances_bid"].astype(object)
dataframe["stacked_imbalances_ask"] = dataframe["stacked_imbalances_ask"].astype(object)
def _calculate_ohlcv_candle_start_and_end(df: pd.DataFrame, timeframe: str):
from freqtrade.exchange import timeframe_to_next_date, timeframe_to_resample_freq
timeframe_frequency = timeframe_to_resample_freq(timeframe)
# calculate ohlcv candle start and end
if df is not None and not df.empty:
df["datetime"] = pd.to_datetime(df["date"], unit="ms")
df["candle_start"] = df["datetime"].dt.floor(timeframe_frequency)
# used in _now_is_time_to_refresh_trades
df["candle_end"] = df["candle_start"].apply(
lambda candle_start: timeframe_to_next_date(timeframe, candle_start)
)
df.drop(columns=["datetime"], inplace=True)
def populate_dataframe_with_trades(
cached_grouped_trades: OrderedDict[Tuple[datetime, datetime], pd.DataFrame],
config,
dataframe: pd.DataFrame,
trades: pd.DataFrame,
) -> Tuple[pd.DataFrame, OrderedDict[Tuple[datetime, datetime], pd.DataFrame]]:
"""
Populates a dataframe with trades
:param dataframe: Dataframe to populate
:param trades: Trades to populate with
:return: Dataframe with trades populated
"""
timeframe = config["timeframe"]
config_orderflow = config["orderflow"]
# create columns for trades
_init_dataframe_with_trades_columns(dataframe)
try:
start_time = time.time()
# calculate ohlcv candle start and end
_calculate_ohlcv_candle_start_and_end(trades, timeframe)
# get date of earliest max_candles candle
max_candles = config_orderflow["max_candles"]
start_date = dataframe.tail(max_candles).date.iat[0]
# slice of trades that are before current ohlcv candles to make groupby faster
trades = trades.loc[trades.candle_start >= start_date]
trades.reset_index(inplace=True, drop=True)
# group trades by candle start
trades_grouped_by_candle_start = trades.groupby("candle_start", group_keys=False)
# Create Series to hold complex data
trades_series = pd.Series(index=dataframe.index, dtype=object)
orderflow_series = pd.Series(index=dataframe.index, dtype=object)
imbalances_series = pd.Series(index=dataframe.index, dtype=object)
stacked_imbalances_bid_series = pd.Series(index=dataframe.index, dtype=object)
stacked_imbalances_ask_series = pd.Series(index=dataframe.index, dtype=object)
for candle_start, trades_grouped_df in trades_grouped_by_candle_start:
is_between = candle_start == dataframe["date"]
if is_between.any():
from freqtrade.exchange import timeframe_to_next_date
candle_next = timeframe_to_next_date(timeframe, typing.cast(datetime, candle_start))
if candle_next not in trades_grouped_by_candle_start.groups:
logger.warning(
f"candle at {candle_start} with {len(trades_grouped_df)} trades "
f"might be unfinished, because no finished trades at {candle_next}"
)
indices = dataframe.index[is_between].tolist()
# Add trades to each candle
trades_series.loc[indices] = [
trades_grouped_df.drop(columns=["candle_start", "candle_end"]).to_dict(
orient="records"
)
]
# Use caching mechanism
if (candle_start, candle_next) in cached_grouped_trades:
cache_entry = cached_grouped_trades[
(typing.cast(datetime, candle_start), candle_next)
]
# dataframe.loc[is_between] = cache_entry # doesn't take, so we need workaround:
# Create a dictionary of the column values to be assigned
update_dict = {c: cache_entry[c].iat[0] for c in cache_entry.columns}
# Assign the values using the update_dict
dataframe.loc[is_between, update_dict.keys()] = pd.DataFrame(
[update_dict], index=dataframe.loc[is_between].index
)
continue
# Calculate orderflow for each candle
orderflow = trades_to_volumeprofile_with_total_delta_bid_ask(
trades_grouped_df, scale=config_orderflow["scale"]
)
orderflow_series.loc[indices] = [orderflow.to_dict(orient="index")]
# Calculate imbalances for each candle's orderflow
imbalances = trades_orderflow_to_imbalances(
orderflow,
imbalance_ratio=config_orderflow["imbalance_ratio"],
imbalance_volume=config_orderflow["imbalance_volume"],
)
imbalances_series.loc[indices] = [imbalances.to_dict(orient="index")]
stacked_imbalance_range = config_orderflow["stacked_imbalance_range"]
stacked_imbalances_bid_series.loc[indices] = [
stacked_imbalance_bid(
imbalances, stacked_imbalance_range=stacked_imbalance_range
)
]
stacked_imbalances_ask_series.loc[indices] = [
stacked_imbalance_ask(
imbalances, stacked_imbalance_range=stacked_imbalance_range
)
]
bid = np.where(
trades_grouped_df["side"].str.contains("sell"), trades_grouped_df["amount"], 0
)
ask = np.where(
trades_grouped_df["side"].str.contains("buy"), trades_grouped_df["amount"], 0
)
deltas_per_trade = ask - bid
min_delta = deltas_per_trade.cumsum().min()
max_delta = deltas_per_trade.cumsum().max()
dataframe.loc[indices, "max_delta"] = max_delta
dataframe.loc[indices, "min_delta"] = min_delta
dataframe.loc[indices, "bid"] = bid.sum()
dataframe.loc[indices, "ask"] = ask.sum()
dataframe.loc[indices, "delta"] = (
dataframe.loc[indices, "ask"] - dataframe.loc[indices, "bid"]
)
dataframe.loc[indices, "total_trades"] = len(trades_grouped_df)
# Cache the result
cached_grouped_trades[(typing.cast(datetime, candle_start), candle_next)] = (
dataframe.loc[is_between].copy()
)
# Maintain cache size
if (
config.get("runmode") in (RunMode.DRY_RUN, RunMode.LIVE)
and len(cached_grouped_trades) > config_orderflow["cache_size"]
):
cached_grouped_trades.popitem(last=False)
else:
logger.debug(f"Found NO candles for trades starting with {candle_start}")
logger.debug(f"trades.groups_keys in {time.time() - start_time} seconds")
# Merge the complex data Series back into the DataFrame
dataframe["trades"] = trades_series
dataframe["orderflow"] = orderflow_series
dataframe["imbalances"] = imbalances_series
dataframe["stacked_imbalances_bid"] = stacked_imbalances_bid_series
dataframe["stacked_imbalances_ask"] = stacked_imbalances_ask_series
except Exception as e:
logger.exception("Error populating dataframe with trades")
raise DependencyException(e)
return dataframe, cached_grouped_trades
def trades_to_volumeprofile_with_total_delta_bid_ask(
trades: pd.DataFrame, scale: float
) -> pd.DataFrame:
"""
:param trades: dataframe
:param scale: scale aka bin size e.g. 0.5
:return: trades binned to levels according to scale aka orderflow
"""
df = pd.DataFrame([], columns=DEFAULT_ORDERFLOW_COLUMNS)
# create bid, ask where side is sell or buy
df["bid_amount"] = np.where(trades["side"].str.contains("sell"), trades["amount"], 0)
df["ask_amount"] = np.where(trades["side"].str.contains("buy"), trades["amount"], 0)
df["bid"] = np.where(trades["side"].str.contains("sell"), 1, 0)
df["ask"] = np.where(trades["side"].str.contains("buy"), 1, 0)
# round the prices to the nearest multiple of the scale
df["price"] = ((trades["price"] / scale).round() * scale).astype("float64").values
if df.empty:
df["total"] = np.nan
df["delta"] = np.nan
return df
df["delta"] = df["ask_amount"] - df["bid_amount"]
df["total_volume"] = df["ask_amount"] + df["bid_amount"]
df["total_trades"] = df["ask"] + df["bid"]
# group to bins aka apply scale
df = df.groupby("price").sum(numeric_only=True)
return df
def trades_orderflow_to_imbalances(df: pd.DataFrame, imbalance_ratio: int, imbalance_volume: int):
"""
:param df: dataframes with bid and ask
:param imbalance_ratio: imbalance_ratio e.g. 3
:param imbalance_volume: imbalance volume e.g. 10
:return: dataframe with bid and ask imbalance
"""
bid = df.bid
# compares bid and ask diagonally
ask = df.ask.shift(-1)
bid_imbalance = (bid / ask) > (imbalance_ratio)
# overwrite bid_imbalance with False if volume is not big enough
bid_imbalance_filtered = np.where(df.total_volume < imbalance_volume, False, bid_imbalance)
ask_imbalance = (ask / bid) > (imbalance_ratio)
# overwrite ask_imbalance with False if volume is not big enough
ask_imbalance_filtered = np.where(df.total_volume < imbalance_volume, False, ask_imbalance)
dataframe = pd.DataFrame(
{"bid_imbalance": bid_imbalance_filtered, "ask_imbalance": ask_imbalance_filtered},
index=df.index,
)
return dataframe
def stacked_imbalance(
df: pd.DataFrame, label: str, stacked_imbalance_range: int, should_reverse: bool
):
"""
y * (y.groupby((y != y.shift()).cumsum()).cumcount() + 1)
https://stackoverflow.com/questions/27626542/counting-consecutive-positive-values-in-python-pandas-array
"""
imbalance = df[f"{label}_imbalance"]
int_series = pd.Series(np.where(imbalance, 1, 0))
stacked = int_series * (
int_series.groupby((int_series != int_series.shift()).cumsum()).cumcount() + 1
)
max_stacked_imbalance_idx = stacked.index[stacked >= stacked_imbalance_range]
stacked_imbalance_price = np.nan
if not max_stacked_imbalance_idx.empty:
idx = (
max_stacked_imbalance_idx[0]
if not should_reverse
else np.flipud(max_stacked_imbalance_idx)[0]
)
stacked_imbalance_price = imbalance.index[idx]
return stacked_imbalance_price
def stacked_imbalance_ask(df: pd.DataFrame, stacked_imbalance_range: int):
return stacked_imbalance(df, "ask", stacked_imbalance_range, should_reverse=True)
def stacked_imbalance_bid(df: pd.DataFrame, stacked_imbalance_range: int):
return stacked_imbalance(df, "bid", stacked_imbalance_range, should_reverse=False)

View File

@@ -19,8 +19,8 @@ from freqtrade.constants import (
     ListPairsWithTimeframes,
     PairWithTimeframe,
 )
-from freqtrade.data.history import load_pair_history
+from freqtrade.data.history import get_datahandler, load_pair_history
-from freqtrade.enums import CandleType, RPCMessageType, RunMode
+from freqtrade.enums import CandleType, RPCMessageType, RunMode, TradingMode
 from freqtrade.exceptions import ExchangeError, OperationalException
 from freqtrade.exchange import Exchange, timeframe_to_prev_date, timeframe_to_seconds
 from freqtrade.exchange.types import OrderBook
@@ -445,7 +445,20 @@ class DataProvider:
         if self._exchange is None:
             raise OperationalException(NO_EXCHANGE_EXCEPTION)
         final_pairs = (pairlist + helping_pairs) if helping_pairs else pairlist
+        # refresh latest ohlcv data
         self._exchange.refresh_latest_ohlcv(final_pairs)
+        # refresh latest trades data
+        self.refresh_latest_trades(pairlist)
+
+    def refresh_latest_trades(self, pairlist: ListPairsWithTimeframes) -> None:
+        """
+        Refresh latest trades data (if enabled in config)
+        """
+        use_public_trades = self._config.get("exchange", {}).get("use_public_trades", False)
+        if use_public_trades:
+            if self._exchange:
+                self._exchange.refresh_latest_trades(pairlist)

     @property
     def available_pairs(self) -> ListPairsWithTimeframes:
@@ -483,6 +496,45 @@ class DataProvider:
         else:
             return DataFrame()

+    def trades(
+        self, pair: str, timeframe: Optional[str] = None, copy: bool = True, candle_type: str = ""
+    ) -> DataFrame:
+        """
+        Get candle (TRADES) data for the given pair as DataFrame
+        Please use the `available_pairs` method to verify which pairs are currently cached.
+        This is not meant to be used in callbacks because of lookahead bias.
+        :param pair: pair to get the data for
+        :param timeframe: Timeframe to get data for
+        :param candle_type: '', mark, index, premiumIndex, or funding_rate
+        :param copy: copy dataframe before returning if True.
+                     Use False only for read-only operations (where the dataframe is not modified)
+        """
+        if self.runmode in (RunMode.DRY_RUN, RunMode.LIVE):
+            if self._exchange is None:
+                raise OperationalException(NO_EXCHANGE_EXCEPTION)
+            _candle_type = (
+                CandleType.from_string(candle_type)
+                if candle_type != ""
+                else self._config["candle_type_def"]
+            )
+            return self._exchange.trades(
+                (pair, timeframe or self._config["timeframe"], _candle_type), copy=copy
+            )
+        elif self.runmode in (RunMode.BACKTEST, RunMode.HYPEROPT):
+            _candle_type = (
+                CandleType.from_string(candle_type)
+                if candle_type != ""
+                else self._config["candle_type_def"]
+            )
+            data_handler = get_datahandler(
+                self._config["datadir"], data_format=self._config["dataformat_trades"]
+            )
+            trades_df = data_handler.trades_load(pair, TradingMode.FUTURES)
+            return trades_df
+        else:
+            return DataFrame()
+
     def market(self, pair: str) -> Optional[Dict[str, Any]]:
         """
         Return market data for the pair
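A hypothetical strategy sketch using the new accessor (pair handling and the derived column are illustrative; the trades dataframe follows DEFAULT_TRADES_COLUMNS, e.g. "price", "amount", "side"):

```python
from pandas import DataFrame

from freqtrade.strategy import IStrategy


class PublicTradesExample(IStrategy):  # minimal illustrative strategy
    timeframe = "5m"
    minimal_roi = {"0": 0.1}
    stoploss = -0.10

    def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        # Raw public trades for this pair; empty DataFrame when nothing is cached.
        trades = self.dp.trades(pair=metadata["pair"], copy=False)
        if not trades.empty:
            # Example: expose the total traded amount of the cached window.
            dataframe["cached_trade_volume"] = trades["amount"].sum()
        return dataframe

    def populate_entry_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        return dataframe

    def populate_exit_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        return dataframe
```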

View File

@@ -4,7 +4,6 @@ from typing import List
 import joblib
 import pandas as pd
-from tabulate import tabulate

 from freqtrade.configuration import TimeRange
 from freqtrade.constants import Config
@@ -14,6 +13,7 @@ from freqtrade.data.btanalysis import (
     load_backtest_stats,
 )
 from freqtrade.exceptions import OperationalException
+from freqtrade.util import print_df_rich_table

 logger = logging.getLogger(__name__)
@@ -307,7 +307,7 @@ def _print_table(
     if name is not None:
         print(name)

-    print(tabulate(data, headers="keys", tablefmt="psql", showindex=show_index))
+    print_df_rich_table(data, data.keys(), show_index=show_index)


 def process_entry_exit_reasons(config: Config):
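The replacement helper can be exercised on its own; a small sketch (dataframe contents made up), mirroring the call in `_print_table` above:

```python
# print_df_rich_table renders a pandas DataFrame as a rich table.
import pandas as pd

from freqtrade.util import print_df_rich_table

df = pd.DataFrame({"pair": ["BTC/USDT", "ETH/USDT"], "profit_ratio": [0.05, -0.02]})
print_df_rich_table(df, df.keys(), show_index=False)
```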

View File

@@ -26,8 +26,7 @@ from freqtrade.enums import CandleType, TradingMode
 from freqtrade.exceptions import OperationalException
 from freqtrade.exchange import Exchange
 from freqtrade.plugins.pairlist.pairlist_helpers import dynamic_expand_pairlist
-from freqtrade.util import dt_ts, format_ms_time
-from freqtrade.util.datetime_helpers import dt_now
+from freqtrade.util import dt_now, dt_ts, format_ms_time, get_progress_tracker
 from freqtrade.util.migrations import migrate_data
@@ -155,11 +154,9 @@ def refresh_data(
     :param candle_type: Any of the enum CandleType (must match trading mode!)
     """
     data_handler = get_datahandler(datadir, data_format)
-    for idx, pair in enumerate(pairs):
-        process = f"{idx}/{len(pairs)}"
+    for pair in pairs:
         _download_pair_history(
             pair=pair,
-            process=process,
             timeframe=timeframe,
             datadir=datadir,
             timerange=timerange,
@@ -223,7 +220,6 @@ def _download_pair_history(
     datadir: Path,
     exchange: Exchange,
     timeframe: str = "5m",
-    process: str = "",
     new_pairs_days: int = 30,
     data_handler: Optional[IDataHandler] = None,
     timerange: Optional[TimeRange] = None,
@@ -261,7 +257,7 @@ def _download_pair_history(
         )
         logger.info(
-            f'({process}) - Download history data for "{pair}", {timeframe}, '
+            f'Download history data for "{pair}", {timeframe}, '
             f"{candle_type} and store in {datadir}. "
             f'From {format_ms_time(since_ms) if since_ms else "start"} to '
             f'{format_ms_time(until_ms) if until_ms else "now"}'
@@ -345,53 +341,65 @@ def refresh_backtest_ohlcv_data(
     pairs_not_available = []
     data_handler = get_datahandler(datadir, data_format)
     candle_type = CandleType.get_default(trading_mode)
-    process = ""
-    for idx, pair in enumerate(pairs, start=1):
-        if pair not in exchange.markets:
-            pairs_not_available.append(pair)
-            logger.info(f"Skipping pair {pair}...")
-            continue
-        for timeframe in timeframes:
-            logger.debug(f"Downloading pair {pair}, {candle_type}, interval {timeframe}.")
-            process = f"{idx}/{len(pairs)}"
-            _download_pair_history(
-                pair=pair,
-                process=process,
-                datadir=datadir,
-                exchange=exchange,
-                timerange=timerange,
-                data_handler=data_handler,
-                timeframe=str(timeframe),
-                new_pairs_days=new_pairs_days,
-                candle_type=candle_type,
-                erase=erase,
-                prepend=prepend,
-            )
-        if trading_mode == "futures":
-            # Predefined candletype (and timeframe) depending on exchange
-            # Downloads what is necessary to backtest based on futures data.
-            tf_mark = exchange.get_option("mark_ohlcv_timeframe")
-            tf_funding_rate = exchange.get_option("funding_fee_timeframe")
-            fr_candle_type = CandleType.from_string(exchange.get_option("mark_ohlcv_price"))
-            # All exchanges need FundingRate for futures trading.
-            # The timeframe is aligned to the mark-price timeframe.
-            combs = ((CandleType.FUNDING_RATE, tf_funding_rate), (fr_candle_type, tf_mark))
-            for candle_type_f, tf in combs:
-                logger.debug(f"Downloading pair {pair}, {candle_type_f}, interval {tf}.")
-                _download_pair_history(
-                    pair=pair,
-                    process=process,
-                    datadir=datadir,
-                    exchange=exchange,
-                    timerange=timerange,
-                    data_handler=data_handler,
-                    timeframe=str(tf),
-                    new_pairs_days=new_pairs_days,
-                    candle_type=candle_type_f,
-                    erase=erase,
-                    prepend=prepend,
-                )
+    with get_progress_tracker() as progress:
+        tf_length = len(timeframes) if trading_mode != "futures" else len(timeframes) + 2
+        timeframe_task = progress.add_task("Timeframe", total=tf_length)
+        pair_task = progress.add_task("Downloading data...", total=len(pairs))
+
+        for pair in pairs:
+            progress.update(pair_task, description=f"Downloading {pair}")
+            progress.update(timeframe_task, completed=0)
+
+            if pair not in exchange.markets:
+                pairs_not_available.append(pair)
+                logger.info(f"Skipping pair {pair}...")
+                continue
+            for timeframe in timeframes:
+                progress.update(timeframe_task, description=f"Timeframe {timeframe}")
+                logger.debug(f"Downloading pair {pair}, {candle_type}, interval {timeframe}.")
+                _download_pair_history(
+                    pair=pair,
+                    datadir=datadir,
+                    exchange=exchange,
+                    timerange=timerange,
+                    data_handler=data_handler,
+                    timeframe=str(timeframe),
+                    new_pairs_days=new_pairs_days,
+                    candle_type=candle_type,
+                    erase=erase,
+                    prepend=prepend,
+                )
+                progress.update(timeframe_task, advance=1)
+            if trading_mode == "futures":
+                # Predefined candletype (and timeframe) depending on exchange
+                # Downloads what is necessary to backtest based on futures data.
+                tf_mark = exchange.get_option("mark_ohlcv_timeframe")
+                tf_funding_rate = exchange.get_option("funding_fee_timeframe")
+                fr_candle_type = CandleType.from_string(exchange.get_option("mark_ohlcv_price"))
+                # All exchanges need FundingRate for futures trading.
+                # The timeframe is aligned to the mark-price timeframe.
+                combs = ((CandleType.FUNDING_RATE, tf_funding_rate), (fr_candle_type, tf_mark))
+                for candle_type_f, tf in combs:
+                    logger.debug(f"Downloading pair {pair}, {candle_type_f}, interval {tf}.")
+                    _download_pair_history(
+                        pair=pair,
+                        datadir=datadir,
+                        exchange=exchange,
+                        timerange=timerange,
+                        data_handler=data_handler,
+                        timeframe=str(tf),
+                        new_pairs_days=new_pairs_days,
+                        candle_type=candle_type_f,
+                        erase=erase,
+                        prepend=prepend,
+                    )
+                    progress.update(
+                        timeframe_task, advance=1, description=f"Timeframe {candle_type_f}, {tf}"
+                    )
+            progress.update(pair_task, advance=1)
+        progress.update(timeframe_task, description="Timeframe")

     return pairs_not_available
@@ -480,7 +488,7 @@ def _download_trades_history(
         return True

     except Exception:
-        logger.exception(f'Failed to download historic trades for pair: "{pair}". ')
+        logger.exception(f'Failed to download and store historic trades for pair: "{pair}". ')
         return False
@@ -501,25 +509,30 @@ def refresh_backtest_trades_data(
     """
     pairs_not_available = []
    data_handler = get_datahandler(datadir, data_format=data_format)
-    for pair in pairs:
-        if pair not in exchange.markets:
-            pairs_not_available.append(pair)
-            logger.info(f"Skipping pair {pair}...")
-            continue
-
-        if erase:
-            if data_handler.trades_purge(pair, trading_mode):
-                logger.info(f"Deleting existing data for pair {pair}.")
-
-        logger.info(f"Downloading trades for pair {pair}.")
-        _download_trades_history(
-            exchange=exchange,
-            pair=pair,
-            new_pairs_days=new_pairs_days,
-            timerange=timerange,
-            data_handler=data_handler,
-            trading_mode=trading_mode,
-        )
+    with get_progress_tracker() as progress:
+        pair_task = progress.add_task("Downloading data...", total=len(pairs))
+        for pair in pairs:
+            progress.update(pair_task, description=f"Downloading trades [{pair}]")
+            if pair not in exchange.markets:
+                pairs_not_available.append(pair)
+                logger.info(f"Skipping pair {pair}...")
+                continue
+
+            if erase:
+                if data_handler.trades_purge(pair, trading_mode):
+                    logger.info(f"Deleting existing data for pair {pair}.")
+
+            logger.info(f"Downloading trades for pair {pair}.")
+            _download_trades_history(
+                exchange=exchange,
+                pair=pair,
+                new_pairs_days=new_pairs_days,
+                timerange=timerange,
+                data_handler=data_handler,
+                trading_mode=trading_mode,
+            )
+            progress.update(pair_task, advance=1)

     return pairs_not_available
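The nested pair/timeframe tasks in the two hunks above follow the standard rich progress pattern; a minimal sketch with plain `rich.progress.Progress` (`get_progress_tracker` is assumed to return a pre-configured instance of it):

```python
from rich.progress import Progress

with Progress() as progress:
    pair_task = progress.add_task("Downloading data...", total=3)
    for pair in ["BTC/USDT", "ETH/USDT", "XRP/USDT"]:
        progress.update(pair_task, description=f"Downloading {pair}")
        # ... per-pair download work would happen here ...
        progress.update(pair_task, advance=1)
```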

View File

@@ -11,6 +11,7 @@ from freqtrade.exchange.bitpanda import Bitpanda
 from freqtrade.exchange.bitvavo import Bitvavo
 from freqtrade.exchange.bybit import Bybit
 from freqtrade.exchange.coinbasepro import Coinbasepro
+from freqtrade.exchange.cryptocom import Cryptocom
 from freqtrade.exchange.exchange_utils import (
     ROUND_DOWN,
     ROUND_UP,

File diff suppressed because it is too large

View File

@@ -37,7 +37,6 @@ API_FETCH_ORDER_RETRY_COUNT = 5
 BAD_EXCHANGES = {
     "bitmex": "Various reasons.",
-    "phemex": "Does not provide history.",
     "probit": "Requires additional, regular calls to `signIn()`.",
     "poloniex": "Does not provide fetch_order endpoint to fetch both open and closed orders.",
 }

View File

@@ -0,0 +1,19 @@
"""Crypto.com exchange subclass"""
import logging
from typing import Dict
from freqtrade.exchange import Exchange
logger = logging.getLogger(__name__)
class Cryptocom(Exchange):
"""Crypto.com exchange class.
Contains adjustments needed for Freqtrade to work with this exchange.
"""
_ft_has: Dict = {
"ohlcv_candle_limit": 300,
}
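A minimal config sketch (placeholder credentials, values assumed) selecting this exchange; the `_ft_has` defaults above then apply without further configuration:

```json
"exchange": {
    "name": "cryptocom",
    "key": "your_exchange_key",
    "secret": "your_exchange_secret",
    "pair_whitelist": ["BTC/USD"]
}
```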

View File

@@ -22,6 +22,7 @@ from pandas import DataFrame, concat
 from freqtrade.constants import (
     DEFAULT_AMOUNT_RESERVE_PERCENT,
+    DEFAULT_TRADES_COLUMNS,
     NON_OPEN_EXCHANGE_STATES,
     BidAsk,
     BuySell,
@@ -33,7 +34,13 @@ from freqtrade.constants import (
     OBLiteral,
     PairWithTimeframe,
 )
-from freqtrade.data.converter import clean_ohlcv_dataframe, ohlcv_to_dataframe, trades_dict_to_list
+from freqtrade.data.converter import (
+    clean_ohlcv_dataframe,
+    ohlcv_to_dataframe,
+    trades_df_remove_duplicates,
+    trades_dict_to_list,
+    trades_list_to_df,
+)
 from freqtrade.enums import (
     OPTIMIZE_MODES,
     TRADE_MODES,
@@ -124,6 +131,7 @@ class Exchange:
         "tickers_have_percentage": True,
         "tickers_have_bid_ask": True,  # bid / ask empty for fetch_tickers
         "tickers_have_price": True,
+        "trades_limit": 1000,  # Limit for 1 call to fetch_trades
         "trades_pagination": "time",  # Possible are "time" or "id"
         "trades_pagination_arg": "since",
         "trades_has_history": False,
@@ -195,6 +203,9 @@ class Exchange:
         self._klines: Dict[PairWithTimeframe, DataFrame] = {}
         self._expiring_candle_cache: Dict[Tuple[str, int], PeriodicCache] = {}

+        # Holds public_trades
+        self._trades: Dict[PairWithTimeframe, DataFrame] = {}
+
         # Holds all open sell orders for dry_run
         self._dry_run_open_orders: Dict[str, Any] = {}
@@ -223,6 +234,8 @@ class Exchange:
         # Assign this directly for easy access
         self._ohlcv_partial_candle = self._ft_has["ohlcv_partial_candle"]

+        self._max_trades_limit = self._ft_has["trades_limit"]
+
         self._trades_pagination = self._ft_has["trades_pagination"]
         self._trades_pagination_arg = self._ft_has["trades_pagination_arg"]
@@ -316,6 +329,7 @@ class Exchange:
         self.validate_trading_mode_and_margin_mode(self.trading_mode, self.margin_mode)
         self.validate_pricing(config["exit_pricing"])
         self.validate_pricing(config["entry_pricing"])
+        self.validate_orderflow(config["exchange"])

     def _init_ccxt(
         self, exchange_config: Dict[str, Any], sync: bool, ccxt_kwargs: Dict[str, Any]
@@ -339,10 +353,14 @@ class Exchange:
             raise OperationalException(f"Exchange {name} is not supported by ccxt")

         ex_config = {
-            "apiKey": exchange_config.get("key"),
+            "apiKey": exchange_config.get("apiKey", exchange_config.get("key")),
             "secret": exchange_config.get("secret"),
             "password": exchange_config.get("password"),
             "uid": exchange_config.get("uid", ""),
+            "accountId": exchange_config.get("accountId", ""),
+            # DEX attributes:
+            "walletAddress": exchange_config.get("walletAddress"),
+            "privateKey": exchange_config.get("privateKey"),
         }
         if ccxt_kwargs:
             logger.info("Applying additional ccxt config: %s", ccxt_kwargs)
@@ -517,6 +535,15 @@ class Exchange:
         else:
             return DataFrame()

+    def trades(self, pair_interval: PairWithTimeframe, copy: bool = True) -> DataFrame:
+        if pair_interval in self._trades:
+            if copy:
+                return self._trades[pair_interval].copy()
+            else:
+                return self._trades[pair_interval]
+        else:
+            return DataFrame()
+
     def get_contract_size(self, pair: str) -> Optional[float]:
         if self.trading_mode == TradingMode.FUTURES:
             market = self.markets.get(pair, {})
@@ -770,6 +797,14 @@ class Exchange:
                 f"Time in force policies are not supported for {self.name} yet."
             )

+    def validate_orderflow(self, exchange: Dict) -> None:
+        if exchange.get("use_public_trades", False) and (
+            not self.exchange_has("fetchTrades") or not self._ft_has["trades_has_history"]
+        ):
+            raise ConfigurationError(
+                f"Trade data not available for {self.name}. Can't use orderflow feature."
+            )
+
     def validate_required_startup_candles(self, startup_candles: int, timeframe: str) -> int:
         """
         Checks if required startup_candles is more than ohlcv_candle_limit().
@@ -2597,6 +2632,171 @@ class Exchange:
         data = [[x["timestamp"], x["fundingRate"], 0, 0, 0, 0] for x in data]
         return data
# fetch Trade data stuff
def needed_candle_for_trades_ms(self, timeframe: str, candle_type: CandleType) -> int:
candle_limit = self.ohlcv_candle_limit(timeframe, candle_type)
tf_s = timeframe_to_seconds(timeframe)
candles_fetched = candle_limit * self.required_candle_call_count
max_candles = self._config["orderflow"]["max_candles"]
required_candles = min(max_candles, candles_fetched)
move_to = (
tf_s * candle_limit * required_candles
if required_candles > candle_limit
else (max_candles + 1) * tf_s
)
now = timeframe_to_next_date(timeframe)
return int((now - timedelta(seconds=move_to)).timestamp() * 1000)
def _process_trades_df(
self,
pair: str,
timeframe: str,
c_type: CandleType,
ticks: List[List],
cache: bool,
first_required_candle_date: int,
) -> DataFrame:
# keeping parsed dataframe in cache
trades_df = trades_list_to_df(ticks, True)
if cache:
if (pair, timeframe, c_type) in self._trades:
old = self._trades[(pair, timeframe, c_type)]
# Reassign so we return the updated, combined df
combined_df = concat([old, trades_df], axis=0)
logger.debug(f"Clean duplicated ticks from Trades data {pair}")
trades_df = DataFrame(
trades_df_remove_duplicates(combined_df), columns=combined_df.columns
)
# Age out old candles
trades_df = trades_df[first_required_candle_date < trades_df["timestamp"]]
trades_df = trades_df.reset_index(drop=True)
self._trades[(pair, timeframe, c_type)] = trades_df
return trades_df
def refresh_latest_trades(
self,
pair_list: ListPairsWithTimeframes,
*,
cache: bool = True,
) -> Dict[PairWithTimeframe, DataFrame]:
"""
Refresh in-memory TRADES asynchronously and set `_trades` with the result
Loops asynchronously over pair_list and downloads all pairs async (semi-parallel).
Only used in the dataprovider.refresh() method.
:param pair_list: List of 3 element tuples containing (pair, timeframe, candle_type)
:param cache: Assign result to _trades. Useful for one-off downloads like for pairlists
:return: Dict of [{(pair, timeframe): Dataframe}]
"""
from freqtrade.data.history import get_datahandler
data_handler = get_datahandler(
self._config["datadir"], data_format=self._config["dataformat_trades"]
)
logger.debug("Refreshing TRADES data for %d pairs", len(pair_list))
since_ms = None
results_df = {}
for pair, timeframe, candle_type in set(pair_list):
new_ticks: List = []
all_stored_ticks_df = DataFrame(columns=DEFAULT_TRADES_COLUMNS + ["date"])
first_candle_ms = self.needed_candle_for_trades_ms(timeframe, candle_type)
# refresh, if
# a. not in _trades
# b. no cache used
# c. need new data
is_in_cache = (pair, timeframe, candle_type) in self._trades
if (
not is_in_cache
or not cache
or self._now_is_time_to_refresh_trades(pair, timeframe, candle_type)
):
logger.debug(f"Refreshing TRADES data for {pair}")
# fetch trades since latest _trades and
# store together with existing trades
try:
until = None
from_id = None
if is_in_cache:
from_id = self._trades[(pair, timeframe, candle_type)].iloc[-1]["id"]
until = dt_ts() # now
else:
until = int(timeframe_to_prev_date(timeframe).timestamp()) * 1000
all_stored_ticks_df = data_handler.trades_load(
f"{pair}-cached", self.trading_mode
)
if not all_stored_ticks_df.empty:
if (
all_stored_ticks_df.iloc[-1]["timestamp"] > first_candle_ms
and all_stored_ticks_df.iloc[0]["timestamp"] <= first_candle_ms
):
# Use cache and populate further
last_cached_ms = all_stored_ticks_df.iloc[-1]["timestamp"]
from_id = all_stored_ticks_df.iloc[-1]["id"]
# only use cached if it's closer than first_candle_ms
since_ms = (
last_cached_ms
if last_cached_ms > first_candle_ms
else first_candle_ms
)
else:
# Skip cache, it's too old
all_stored_ticks_df = DataFrame(
columns=DEFAULT_TRADES_COLUMNS + ["date"]
)
# from_id overrules with exchange set to id paginate
[_, new_ticks] = self.get_historic_trades(
pair,
since=since_ms if since_ms else first_candle_ms,
until=until,
from_id=from_id,
)
except Exception:
logger.exception(f"Refreshing TRADES data for {pair} failed")
continue
if new_ticks:
all_stored_ticks_list = all_stored_ticks_df[
DEFAULT_TRADES_COLUMNS
].values.tolist()
all_stored_ticks_list.extend(new_ticks)
trades_df = self._process_trades_df(
pair,
timeframe,
candle_type,
all_stored_ticks_list,
cache,
first_required_candle_date=first_candle_ms,
)
results_df[(pair, timeframe, candle_type)] = trades_df
data_handler.trades_store(
f"{pair}-cached", trades_df[DEFAULT_TRADES_COLUMNS], self.trading_mode
)
else:
logger.error(f"No new ticks for {pair}")
return results_df
def _now_is_time_to_refresh_trades(
self, pair: str, timeframe: str, candle_type: CandleType
) -> bool: # Timeframe in seconds
trades = self.trades((pair, timeframe, candle_type), False)
pair_last_refreshed = int(trades.iloc[-1]["timestamp"])
full_candle = (
int(timeframe_to_next_date(timeframe, dt_from_ts(pair_last_refreshed)).timestamp())
* 1000
)
now = dt_ts()
return full_candle <= now
    # Fetch historic trades
    @retrier_async
@@ -2611,10 +2811,11 @@ class Exchange:
         returns: List of dicts containing trades, the next iteration value (new "since" or trade_id)
         """
         try:
+            trades_limit = self._max_trades_limit
             # fetch trades asynchronously
             if params:
                 logger.debug("Fetching trades for pair %s, params: %s ", pair, params)
-                trades = await self._api_async.fetch_trades(pair, params=params, limit=1000)
+                trades = await self._api_async.fetch_trades(pair, params=params, limit=trades_limit)
             else:
                 logger.debug(
                     "Fetching trades for pair %s, since %s %s...",
@@ -2622,7 +2823,7 @@ class Exchange:
                     since,
                     "(" + dt_from_ts(since).isoformat() + ") " if since is not None else "",
                 )
-                trades = await self._api_async.fetch_trades(pair, since=since, limit=1000)
+                trades = await self._api_async.fetch_trades(pair, since=since, limit=trades_limit)

             trades = self._trades_contracts_to_amount(trades)
             pagination_value = self._get_trade_pagination_next_value(trades)
             return trades_dict_to_list(trades), pagination_value
@@ -3417,13 +3618,12 @@ class Exchange:
     def get_maintenance_ratio_and_amt(
         self,
         pair: str,
-        nominal_value: float,
+        notional_value: float,
     ) -> Tuple[float, Optional[float]]:
         """
         Important: Must be fetching data from cached values as this is used by backtesting!
         :param pair: Market symbol
-        :param nominal_value: The total trade amount in quote currency including leverage
-        maintenance amount only on Binance
+        :param notional_value: The total trade amount in quote currency
         :return: (maintenance margin ratio, maintenance amount)
         """
@@ -3440,7 +3640,7 @@ class Exchange:
             pair_tiers = self._leverage_tiers[pair]

             for tier in reversed(pair_tiers):
-                if nominal_value >= tier["minNotional"]:
+                if notional_value >= tier["minNotional"]:
                     return (tier["maintenanceMarginRate"], tier["maintAmt"])

             raise ExchangeError("nominal value can not be lower than 0")
@@ -3448,4 +3648,3 @@ class Exchange:
             # describes the min amt for a tier, and the lowest tier will always go down to 0
         else:
             raise ExchangeError(f"Cannot get maintenance ratio using {self.name}")
-            raise ExchangeError(f"Cannot get maintenance ratio using {self.name}")

View File

@@ -78,6 +78,12 @@ class ExchangeWS:
         finally:
             self.__cleanup_called = True

+    def _pop_history(self, paircomb: PairWithTimeframe) -> None:
+        """
+        Remove history for a pair/timeframe combination from ccxt cache
+        """
+        self.ccxt_object.ohlcvs.get(paircomb[0], {}).pop(paircomb[1], None)
+
     def cleanup_expired(self) -> None:
         """
         Remove pairs from watchlist if they've not been requested within
@@ -89,8 +95,10 @@ class ExchangeWS:
             timeframe_s = timeframe_to_seconds(timeframe)
             last_refresh = self.klines_last_request.get(p, 0)
             if last_refresh > 0 and (dt_ts() - last_refresh) > ((timeframe_s + 20) * 1000):
-                logger.info(f"Removing {p} from watchlist")
+                logger.info(f"Removing {p} from websocket watchlist.")
                 self._klines_watching.discard(p)
+                # Pop history to avoid getting stale data
+                self._pop_history(p)
                 changed = True
         if changed:
             logger.info(f"Removal done: new watch list ({len(self._klines_watching)})")
@@ -128,6 +136,7 @@ class ExchangeWS:
         logger.info(f"{pair}, {timeframe}, {candle_type} - Task finished - {result}")
         self._klines_scheduled.discard((pair, timeframe, candle_type))
+        self._pop_history((pair, timeframe, candle_type))

     async def _continuously_async_watch_ohlcv(
         self, pair: str, timeframe: str, candle_type: CandleType
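The ccxt `ohlcvs` cache is a dict of dicts keyed by pair, then timeframe; `_pop_history` relies on the chained get/pop idiom so that missing keys are a no-op:

```python
# Standalone illustration of the _pop_history idiom (made-up cache contents).
ohlcvs = {"BTC/USDT": {"5m": [1, 2, 3], "1h": [4]}}

ohlcvs.get("BTC/USDT", {}).pop("5m", None)  # removes the cached 5m candles
ohlcvs.get("ETH/USDT", {}).pop("5m", None)  # no-op: pair not in the cache

print(ohlcvs)  # {'BTC/USDT': {'1h': [4]}}
```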

View File

@@ -0,0 +1,24 @@
"""Hyperliquid exchange subclass"""
import logging
from typing import Dict
from freqtrade.exchange import Exchange
logger = logging.getLogger(__name__)
class Hyperliquid(Exchange):
"""Hyperliquid exchange class.
Contains adjustments needed for Freqtrade to work with this exchange.
"""
_ft_has: Dict = {
# Only the most recent 5000 candles are available according to the
# exchange's API documentation.
"ohlcv_has_history": True,
"ohlcv_candle_limit": 5000,
"trades_has_history": False, # Trades endpoint doesn't seem available.
"exchange_has_overrides": {"fetchTrades": False},
}

View File

@@ -4,11 +4,13 @@ from pathlib import Path
 from typing import Any, Dict, List

 import pandas as pd
+from rich.text import Text

 from freqtrade.constants import Config
 from freqtrade.exceptions import OperationalException
 from freqtrade.optimize.analysis.lookahead import LookaheadAnalysis
 from freqtrade.resolvers import StrategyResolver
+from freqtrade.util import print_rich_table

 logger = logging.getLogger(__name__)
@@ -53,18 +55,18 @@ class LookaheadAnalysisSubFunctions:
                 [
                     inst.strategy_obj["location"].parts[-1],
                     inst.strategy_obj["name"],
-                    inst.current_analysis.has_bias,
+                    Text("Yes", style="bold red")
+                    if inst.current_analysis.has_bias
+                    else Text("No", style="bold green"),
                     inst.current_analysis.total_signals,
                     inst.current_analysis.false_entry_signals,
                     inst.current_analysis.false_exit_signals,
                     ", ".join(inst.current_analysis.false_indicators),
                 ]
             )
-        from tabulate import tabulate
-
-        table = tabulate(data, headers=headers, tablefmt="orgtbl")
-        print(table)
-        return table, headers, data
+        print_rich_table(data, headers, summary="Lookahead Analysis")
+        return data

     @staticmethod
     def export_to_csv(config: Dict[str, Any], lookahead_analysis: List[LookaheadAnalysis]):

View File

@@ -7,6 +7,7 @@ from freqtrade.constants import Config
 from freqtrade.exceptions import OperationalException
 from freqtrade.optimize.analysis.recursive import RecursiveAnalysis
 from freqtrade.resolvers import StrategyResolver
+from freqtrade.util import print_rich_table

 logger = logging.getLogger(__name__)
@@ -16,9 +17,9 @@ class RecursiveAnalysisSubFunctions:
     @staticmethod
     def text_table_recursive_analysis_instances(recursive_instances: List[RecursiveAnalysis]):
         startups = recursive_instances[0]._startup_candle
-        headers = ["indicators"]
+        headers = ["Indicators"]
         for candle in startups:
-            headers.append(candle)
+            headers.append(str(candle))

         data = []
         for inst in recursive_instances:
@@ -30,13 +31,11 @@ class RecursiveAnalysisSubFunctions:
             data.append(temp_data)

         if len(data) > 0:
-            from tabulate import tabulate
-
-            table = tabulate(data, headers=headers, tablefmt="orgtbl")
-            print(table)
-            return table, headers, data
+            print_rich_table(data, headers, summary="Recursive Analysis")
+            return data

-        return None, None, data
+        return data

     @staticmethod
     def calculate_config_overrides(config: Config):

View File

@@ -52,4 +52,4 @@ class EdgeCli:
         result = self.edge.calculate(self.config["exchange"]["pair_whitelist"])
         if result:
             print("")  # blank line for readability
-            print(generate_edge_table(self.edge._cached_pairs))
+            generate_edge_table(self.edge._cached_pairs)

View File

@@ -14,19 +14,11 @@ from pathlib import Path
 from typing import Any, Dict, List, Optional, Tuple

 import rapidjson
-from colorama import init as colorama_init
 from joblib import Parallel, cpu_count, delayed, dump, load, wrap_non_picklable_objects
 from joblib.externals import cloudpickle
 from pandas import DataFrame
-from rich.progress import (
-    BarColumn,
-    MofNCompleteColumn,
-    Progress,
-    TaskProgressColumn,
-    TextColumn,
-    TimeElapsedColumn,
-    TimeRemainingColumn,
-)
+from rich.align import Align
+from rich.console import Console

 from freqtrade.constants import DATETIME_PRINT_FORMAT, FTHYPT_FILEVERSION, LAST_BT_RESULT_FN, Config
 from freqtrade.data.converter import trim_dataframes
@@ -40,6 +32,7 @@ from freqtrade.optimize.backtesting import Backtesting
 # Import IHyperOpt and IHyperOptLoss to allow unpickling classes from these modules
 from freqtrade.optimize.hyperopt_auto import HyperOptAuto
 from freqtrade.optimize.hyperopt_loss_interface import IHyperOptLoss
+from freqtrade.optimize.hyperopt_output import HyperoptOutput
 from freqtrade.optimize.hyperopt_tools import (
     HyperoptStateContainer,
     HyperoptTools,
@@ -47,6 +40,7 @@ from freqtrade.optimize.hyperopt_tools import (
 )
 from freqtrade.optimize.optimize_reports import generate_strategy_stats
 from freqtrade.resolvers.hyperopt_resolver import HyperOptLossResolver
+from freqtrade.util import get_progress_tracker

 # Suppress scikit-learn FutureWarnings from skopt
@@ -86,6 +80,8 @@ class Hyperopt:
         self.max_open_trades_space: List[Dimension] = []
         self.dimensions: List[Dimension] = []

+        self._hyper_out: HyperoptOutput = HyperoptOutput()
+
         self.config = config
         self.min_date: datetime
         self.max_date: datetime
@@ -260,7 +256,7 @@ class Hyperopt:
             result["max_open_trades"] = {"max_open_trades": strategy.max_open_trades}
         return result

-    def print_results(self, results) -> None:
+    def print_results(self, results: Dict[str, Any]) -> None:
         """
         Log results if it is better than any previous evaluation
         TODO: this should be moved to HyperoptTools too
@@ -268,17 +264,12 @@ class Hyperopt:
         is_best = results["is_best"]

         if self.print_all or is_best:
-            print(
-                HyperoptTools.get_result_table(
-                    self.config,
-                    results,
-                    self.total_epochs,
-                    self.print_all,
-                    self.print_colorized,
-                    self.hyperopt_table_header,
-                )
+            self._hyper_out.add_data(
+                self.config,
+                [results],
+                self.total_epochs,
+                self.print_all,
             )
-            self.hyperopt_table_header = 2

     def init_spaces(self):
         """
@@ -626,25 +617,18 @@ class Hyperopt:
         self.opt = self.get_optimizer(self.dimensions, config_jobs)

-        if self.print_colorized:
-            colorama_init(autoreset=True)
-
         try:
             with Parallel(n_jobs=config_jobs) as parallel:
                 jobs = parallel._effective_n_jobs()
                 logger.info(f"Effective number of parallel workers used: {jobs}")
+                console = Console(
+                    color_system="auto" if self.print_colorized else None,
+                )

                 # Define progressbar
-                with Progress(
-                    TextColumn("[progress.description]{task.description}"),
-                    BarColumn(bar_width=None),
-                    MofNCompleteColumn(),
-                    TaskProgressColumn(),
-                    "",
-                    TimeElapsedColumn(),
-                    "",
-                    TimeRemainingColumn(),
-                    expand=True,
+                with get_progress_tracker(
+                    console=console,
+                    cust_objs=[Align.center(self._hyper_out.table)],
                 ) as pbar:
                     task = pbar.add_task("Epochs", total=self.total_epochs)

View File

@@ -0,0 +1,123 @@
import sys
from typing import List, Optional, Union
from rich.console import Console
from rich.table import Table
from rich.text import Text
from freqtrade.constants import Config
from freqtrade.optimize.optimize_reports import generate_wins_draws_losses
from freqtrade.util import fmt_coin
class HyperoptOutput:
def __init__(self):
self.table = Table(
title="Hyperopt results",
)
# Headers
self.table.add_column("Best", justify="left")
self.table.add_column("Epoch", justify="right")
self.table.add_column("Trades", justify="right")
self.table.add_column("Win Draw Loss Win%", justify="right")
self.table.add_column("Avg profit", justify="right")
self.table.add_column("Profit", justify="right")
self.table.add_column("Avg duration", justify="right")
self.table.add_column("Objective", justify="right")
self.table.add_column("Max Drawdown (Acct)", justify="right")
def _add_row(self, data: List[Union[str, Text]]):
"""Add single row"""
row_to_add: List[Union[str, Text]] = [r if isinstance(r, Text) else str(r) for r in data]
self.table.add_row(*row_to_add)
def _add_rows(self, data: List[List[Union[str, Text]]]):
"""add multiple rows"""
for row in data:
self._add_row(row)
def print(self, console: Optional[Console] = None, *, print_colorized=True):
if not console:
console = Console(
color_system="auto" if print_colorized else None,
width=200 if "pytest" in sys.modules else None,
)
console.print(self.table)
def add_data(
self,
config: Config,
results: list,
total_epochs: int,
highlight_best: bool,
) -> None:
"""Format one or multiple rows and add them"""
stake_currency = config["stake_currency"]
for r in results:
self.table.add_row(
*[
# "Best":
(
("*" if r["is_initial_point"] or r["is_random"] else "")
+ (" Best" if r["is_best"] else "")
).lstrip(),
# "Epoch":
f"{r['current_epoch']}/{total_epochs}",
# "Trades":
str(r["results_metrics"]["total_trades"]),
# "Win Draw Loss Win%":
generate_wins_draws_losses(
r["results_metrics"]["wins"],
r["results_metrics"]["draws"],
r["results_metrics"]["losses"],
),
# "Avg profit":
f"{r['results_metrics']['profit_mean']:.2%}"
if r["results_metrics"]["profit_mean"] is not None
else "--",
# "Profit":
Text(
"{} {}".format(
fmt_coin(
r["results_metrics"]["profit_total_abs"],
stake_currency,
keep_trailing_zeros=True,
),
f"({r['results_metrics']['profit_total']:,.2%})".rjust(10, " "),
)
if r["results_metrics"].get("profit_total_abs", 0) != 0.0
else "--",
style=(
"green"
if r["results_metrics"].get("profit_total_abs", 0) > 0
else "red"
)
if not r["is_best"]
else "",
),
# "Avg duration":
str(r["results_metrics"]["holding_avg"]),
# "Objective":
f"{r['loss']:,.5f}" if r["loss"] != 100000 else "N/A",
# "Max Drawdown (Acct)":
"{} {}".format(
fmt_coin(
r["results_metrics"]["max_drawdown_abs"],
stake_currency,
keep_trailing_zeros=True,
),
(f"({r['results_metrics']['max_drawdown_account']:,.2%})").rjust(10, " "),
)
if r["results_metrics"]["max_drawdown_account"] != 0.0
else "--",
],
style=" ".join(
[
"bold gold1" if r["is_best"] and highlight_best else "",
"italic " if r["is_initial_point"] else "",
]
),
)
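A hypothetical sketch feeding one fabricated epoch result through the class (the dict shape mirrors what `print_results` passes in; all values are invented):

```python
from freqtrade.optimize.hyperopt_output import HyperoptOutput

fake_result = {
    "is_initial_point": False,
    "is_random": False,
    "is_best": True,
    "current_epoch": 5,
    "loss": -1.23456,
    "results_metrics": {
        "total_trades": 42,
        "wins": 25,
        "draws": 3,
        "losses": 14,
        "profit_mean": 0.0123,
        "profit_total_abs": 123.45,
        "profit_total": 0.1234,
        "holding_avg": "2:30:00",
        "max_drawdown_abs": 45.6,
        "max_drawdown_account": 0.078,
    },
}

out = HyperoptOutput()
out.add_data({"stake_currency": "USDT"}, [fake_result], total_epochs=100, highlight_best=True)
out.print()  # renders the accumulated rows as a rich table
```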

View File

@@ -5,10 +5,7 @@ from pathlib import Path
 from typing import Any, Dict, Iterator, List, Optional, Tuple

 import numpy as np
-import pandas as pd
 import rapidjson
-import tabulate
-from colorama import Fore, Style
 from pandas import isna, json_normalize

 from freqtrade.constants import FTHYPT_FILEVERSION, Config
@@ -16,8 +13,6 @@ from freqtrade.enums import HyperoptState
 from freqtrade.exceptions import OperationalException
 from freqtrade.misc import deep_merge_dicts, round_dict, safe_value_fallback2
 from freqtrade.optimize.hyperopt_epoch_filters import hyperopt_filter_epochs
-from freqtrade.optimize.optimize_reports import generate_wins_draws_losses
-from freqtrade.util import fmt_coin

 logger = logging.getLogger(__name__)
@@ -357,175 +352,6 @@ class HyperoptTools:
             + f"Objective: {results['loss']:.5f}"
         )
@staticmethod
def prepare_trials_columns(trials: pd.DataFrame) -> pd.DataFrame:
trials["Best"] = ""
if "results_metrics.winsdrawslosses" not in trials.columns:
# Ensure compatibility with older versions of hyperopt results
trials["results_metrics.winsdrawslosses"] = "N/A"
has_account_drawdown = "results_metrics.max_drawdown_account" in trials.columns
if not has_account_drawdown:
# Ensure compatibility with older versions of hyperopt results
trials["results_metrics.max_drawdown_account"] = None
if "is_random" not in trials.columns:
trials["is_random"] = False
# New mode, using backtest result for metrics
trials["results_metrics.winsdrawslosses"] = trials.apply(
lambda x: generate_wins_draws_losses(
x["results_metrics.wins"], x["results_metrics.draws"], x["results_metrics.losses"]
),
axis=1,
)
trials = trials[
[
"Best",
"current_epoch",
"results_metrics.total_trades",
"results_metrics.winsdrawslosses",
"results_metrics.profit_mean",
"results_metrics.profit_total_abs",
"results_metrics.profit_total",
"results_metrics.holding_avg",
"results_metrics.max_drawdown_account",
"results_metrics.max_drawdown_abs",
"loss",
"is_initial_point",
"is_random",
"is_best",
]
]
trials.columns = [
"Best",
"Epoch",
"Trades",
" Win Draw Loss Win%",
"Avg profit",
"Total profit",
"Profit",
"Avg duration",
"max_drawdown_account",
"max_drawdown_abs",
"Objective",
"is_initial_point",
"is_random",
"is_best",
]
return trials
@staticmethod
def get_result_table(
config: Config,
results: list,
total_epochs: int,
highlight_best: bool,
print_colorized: bool,
remove_header: int,
) -> str:
"""
Log result table
"""
if not results:
return ""
tabulate.PRESERVE_WHITESPACE = True
trials = json_normalize(results, max_level=1)
trials = HyperoptTools.prepare_trials_columns(trials)
trials["is_profit"] = False
trials.loc[trials["is_initial_point"] | trials["is_random"], "Best"] = "* "
trials.loc[trials["is_best"], "Best"] = "Best"
trials.loc[
(trials["is_initial_point"] | trials["is_random"]) & trials["is_best"], "Best"
] = "* Best"
trials.loc[trials["Total profit"] > 0, "is_profit"] = True
trials["Trades"] = trials["Trades"].astype(str)
# perc_multi = 1 if legacy_mode else 100
trials["Epoch"] = trials["Epoch"].apply(
lambda x: "{}/{}".format(str(x).rjust(len(str(total_epochs)), " "), total_epochs)
)
trials["Avg profit"] = trials["Avg profit"].apply(
lambda x: f"{x:,.2%}".rjust(7, " ") if not isna(x) else "--".rjust(7, " ")
)
trials["Avg duration"] = trials["Avg duration"].apply(
lambda x: (
f"{x:,.1f} m".rjust(7, " ")
if isinstance(x, float)
else f"{x}"
if not isna(x)
else "--".rjust(7, " ")
)
)
trials["Objective"] = trials["Objective"].apply(
lambda x: f"{x:,.5f}".rjust(8, " ") if x != 100000 else "N/A".rjust(8, " ")
)
stake_currency = config["stake_currency"]
trials["Max Drawdown (Acct)"] = trials.apply(
lambda x: (
"{} {}".format(
fmt_coin(x["max_drawdown_abs"], stake_currency, keep_trailing_zeros=True),
(f"({x['max_drawdown_account']:,.2%})").rjust(10, " "),
).rjust(25 + len(stake_currency))
if x["max_drawdown_account"] != 0.0
else "--".rjust(25 + len(stake_currency))
),
axis=1,
)
trials = trials.drop(columns=["max_drawdown_abs", "max_drawdown_account"])
trials["Profit"] = trials.apply(
lambda x: (
"{} {}".format(
fmt_coin(x["Total profit"], stake_currency, keep_trailing_zeros=True),
f"({x['Profit']:,.2%})".rjust(10, " "),
).rjust(25 + len(stake_currency))
if x["Total profit"] != 0.0
else "--".rjust(25 + len(stake_currency))
),
axis=1,
)
trials = trials.drop(columns=["Total profit"])
if print_colorized:
trials2 = trials.astype(str)
for i in range(len(trials)):
if trials.loc[i]["is_profit"]:
for j in range(len(trials.loc[i]) - 3):
trials2.iat[i, j] = f"{Fore.GREEN}{str(trials.iloc[i, j])}{Fore.RESET}"
if trials.loc[i]["is_best"] and highlight_best:
for j in range(len(trials.loc[i]) - 3):
trials2.iat[i, j] = (
f"{Style.BRIGHT}{str(trials.iloc[i, j])}{Style.RESET_ALL}"
)
trials = trials2
del trials2
trials = trials.drop(columns=["is_initial_point", "is_best", "is_profit", "is_random"])
if remove_header > 0:
table = tabulate.tabulate(
trials.to_dict(orient="list"), tablefmt="orgtbl", headers="keys", stralign="right"
)
table = table.split("\n", remove_header)[remove_header]
elif remove_header < 0:
table = tabulate.tabulate(
trials.to_dict(orient="list"), tablefmt="psql", headers="keys", stralign="right"
)
table = "\n".join(table.split("\n")[0:remove_header])
else:
table = tabulate.tabulate(
trials.to_dict(orient="list"), tablefmt="psql", headers="keys", stralign="right"
)
return table
    @staticmethod
    def export_csv_file(config: Config, results: list, csv_file: str) -> None:
        """

View File

@ -1,12 +1,10 @@
import logging import logging
from typing import Any, Dict, List, Union from typing import Any, Dict, List, Literal, Union
from tabulate import tabulate
from freqtrade.constants import UNLIMITED_STAKE_AMOUNT, Config from freqtrade.constants import UNLIMITED_STAKE_AMOUNT, Config
from freqtrade.optimize.optimize_reports.optimize_reports import generate_periodic_breakdown_stats from freqtrade.optimize.optimize_reports.optimize_reports import generate_periodic_breakdown_stats
from freqtrade.types import BacktestResultType from freqtrade.types import BacktestResultType
from freqtrade.util import decimals_per_coin, fmt_coin from freqtrade.util import decimals_per_coin, fmt_coin, print_rich_table
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -46,22 +44,23 @@ def generate_wins_draws_losses(wins, draws, losses):
return f"{wins:>4} {draws:>4} {losses:>4} {wl_ratio:>4}" return f"{wins:>4} {draws:>4} {losses:>4} {wl_ratio:>4}"
def text_table_bt_results(pair_results: List[Dict[str, Any]], stake_currency: str) -> str: def text_table_bt_results(
pair_results: List[Dict[str, Any]], stake_currency: str, title: str
) -> None:
""" """
Generates and returns a text table for the given backtest data and the results dataframe Generates and returns a text table for the given backtest data and the results dataframe
:param pair_results: List of Dictionaries - one entry per pair + final TOTAL row :param pair_results: List of Dictionaries - one entry per pair + final TOTAL row
:param stake_currency: stake-currency - used to correctly name headers :param stake_currency: stake-currency - used to correctly name headers
:return: pretty printed table with tabulate as string :param title: Title of the table
""" """
headers = _get_line_header("Pair", stake_currency, "Trades") headers = _get_line_header("Pair", stake_currency, "Trades")
floatfmt = _get_line_floatfmt(stake_currency)
output = [ output = [
[ [
t["key"], t["key"],
t["trades"], t["trades"],
t["profit_mean_pct"], t["profit_mean_pct"],
t["profit_total_abs"], f"{t['profit_total_abs']:.{decimals_per_coin(stake_currency)}f}",
t["profit_total_pct"], t["profit_total_pct"],
t["duration_avg"], t["duration_avg"],
generate_wins_draws_losses(t["wins"], t["draws"], t["losses"]), generate_wins_draws_losses(t["wins"], t["draws"], t["losses"]),
@ -69,26 +68,32 @@ def text_table_bt_results(pair_results: List[Dict[str, Any]], stake_currency: st
for t in pair_results for t in pair_results
] ]
# Ignore type as floatfmt does allow tuples but mypy does not know that # Ignore type as floatfmt does allow tuples but mypy does not know that
return tabulate(output, headers=headers, floatfmt=floatfmt, tablefmt="orgtbl", stralign="right") print_rich_table(output, headers, summary=title)
def text_table_tags(tag_type: str, tag_results: List[Dict[str, Any]], stake_currency: str) -> str: def text_table_tags(
tag_type: Literal["enter_tag", "exit_tag", "mix_tag"],
tag_results: List[Dict[str, Any]],
stake_currency: str,
) -> None:
""" """
Generates and returns a text table for the given backtest data and the results dataframe Generates and prints a text table for the given backtest data and the results dataframe
:param pair_results: List of Dictionaries - one entry per pair + final TOTAL row :param tag_results: List of Dictionaries - one entry per tag + final TOTAL row
:param stake_currency: stake-currency - used to correctly name headers :param stake_currency: stake-currency - used to correctly name headers
:return: pretty printed table with tabulate as string
""" """
floatfmt = _get_line_floatfmt(stake_currency) floatfmt = _get_line_floatfmt(stake_currency)
fallback: str = "" fallback: str = ""
is_list = False is_list = False
if tag_type == "enter_tag": if tag_type == "enter_tag":
headers = _get_line_header("Enter Tag", stake_currency, "Entries") title = "Enter Tag"
headers = _get_line_header(title, stake_currency, "Entries")
elif tag_type == "exit_tag": elif tag_type == "exit_tag":
headers = _get_line_header("Exit Reason", stake_currency, "Exits") title = "Exit Reason"
headers = _get_line_header(title, stake_currency, "Exits")
fallback = "exit_reason" fallback = "exit_reason"
else: else:
# Mix tag # Mix tag
title = "Mixed Tag"
headers = _get_line_header(["Enter Tag", "Exit Reason"], stake_currency, "Trades") headers = _get_line_header(["Enter Tag", "Exit Reason"], stake_currency, "Trades")
floatfmt.insert(0, "s") floatfmt.insert(0, "s")
is_list = True is_list = True
@ -106,7 +111,7 @@ def text_table_tags(tag_type: str, tag_results: List[Dict[str, Any]], stake_curr
), ),
t["trades"], t["trades"],
t["profit_mean_pct"], t["profit_mean_pct"],
t["profit_total_abs"], f"{t['profit_total_abs']:.{decimals_per_coin(stake_currency)}f}",
t["profit_total_pct"], t["profit_total_pct"],
t.get("duration_avg"), t.get("duration_avg"),
generate_wins_draws_losses(t["wins"], t["draws"], t["losses"]), generate_wins_draws_losses(t["wins"], t["draws"], t["losses"]),
@ -114,17 +119,16 @@ def text_table_tags(tag_type: str, tag_results: List[Dict[str, Any]], stake_curr
for t in tag_results for t in tag_results
] ]
# Ignore type as floatfmt does allow tuples but mypy does not know that # Ignore type as floatfmt does allow tuples but mypy does not know that
return tabulate(output, headers=headers, floatfmt=floatfmt, tablefmt="orgtbl", stralign="right") print_rich_table(output, headers, summary=f"{title.upper()} STATS")
def text_table_periodic_breakdown( def text_table_periodic_breakdown(
days_breakdown_stats: List[Dict[str, Any]], stake_currency: str, period: str days_breakdown_stats: List[Dict[str, Any]], stake_currency: str, period: str
) -> str: ) -> None:
""" """
Generate small table with Backtest results by days Generate small table with Backtest results by period
:param days_breakdown_stats: Days breakdown metrics :param days_breakdown_stats: Days breakdown metrics
:param stake_currency: Stakecurrency used :param stake_currency: Stake currency used
:return: pretty printed table with tabulate as string
""" """
headers = [ headers = [
period.capitalize(), period.capitalize(),
@ -143,17 +147,15 @@ def text_table_periodic_breakdown(
] ]
for d in days_breakdown_stats for d in days_breakdown_stats
] ]
return tabulate(output, headers=headers, tablefmt="orgtbl", stralign="right") print_rich_table(output, headers, summary=f"{period.upper()} BREAKDOWN")
def text_table_strategy(strategy_results, stake_currency: str) -> str: def text_table_strategy(strategy_results, stake_currency: str, title: str):
""" """
Generate summary table per strategy Generate summary table per strategy
:param strategy_results: Dict of <Strategyname: DataFrame> containing results for all strategies :param strategy_results: Dict of <Strategyname: DataFrame> containing results for all strategies
:param stake_currency: stake-currency - used to correctly name headers :param stake_currency: stake-currency - used to correctly name headers
:return: pretty printed table with tabulate as string
""" """
floatfmt = _get_line_floatfmt(stake_currency)
headers = _get_line_header("Strategy", stake_currency, "Trades") headers = _get_line_header("Strategy", stake_currency, "Trades")
# _get_line_header() is also used for per-pair summary. Per-pair drawdown is mostly useless # _get_line_header() is also used for per-pair summary. Per-pair drawdown is mostly useless
# therefore we slip this column in only for strategy summary here. # therefore we slip this column in only for strategy summary here.
@ -177,8 +179,8 @@ def text_table_strategy(strategy_results, stake_currency: str) -> str:
[ [
t["key"], t["key"],
t["trades"], t["trades"],
t["profit_mean_pct"], f"{t['profit_mean_pct']:.2f}",
t["profit_total_abs"], f"{t['profit_total_abs']:.{decimals_per_coin(stake_currency)}f}",
t["profit_total_pct"], t["profit_total_pct"],
t["duration_avg"], t["duration_avg"],
generate_wins_draws_losses(t["wins"], t["draws"], t["losses"]), generate_wins_draws_losses(t["wins"], t["draws"], t["losses"]),
@ -186,11 +188,10 @@ def text_table_strategy(strategy_results, stake_currency: str) -> str:
] ]
for t, drawdown in zip(strategy_results, drawdown) for t, drawdown in zip(strategy_results, drawdown)
] ]
# Ignore type as floatfmt does allow tuples but mypy does not know that print_rich_table(output, headers, summary=title)
return tabulate(output, headers=headers, floatfmt=floatfmt, tablefmt="orgtbl", stralign="right")
def text_table_add_metrics(strat_results: Dict) -> str: def text_table_add_metrics(strat_results: Dict) -> None:
if len(strat_results["trades"]) > 0: if len(strat_results["trades"]) > 0:
best_trade = max(strat_results["trades"], key=lambda x: x["profit_ratio"]) best_trade = max(strat_results["trades"], key=lambda x: x["profit_ratio"])
worst_trade = min(strat_results["trades"], key=lambda x: x["profit_ratio"]) worst_trade = min(strat_results["trades"], key=lambda x: x["profit_ratio"])
@ -372,8 +373,8 @@ def text_table_add_metrics(strat_results: Dict) -> str:
*drawdown_metrics, *drawdown_metrics,
("Market change", f"{strat_results['market_change']:.2%}"), ("Market change", f"{strat_results['market_change']:.2%}"),
] ]
print_rich_table(metrics, ["Metric", "Value"], summary="SUMMARY METRICS", justify="left")
return tabulate(metrics, headers=["Metric", "Value"], tablefmt="orgtbl")
else: else:
start_balance = fmt_coin(strat_results["starting_balance"], strat_results["stake_currency"]) start_balance = fmt_coin(strat_results["starting_balance"], strat_results["stake_currency"])
stake_amount = ( stake_amount = (
@ -387,7 +388,7 @@ def text_table_add_metrics(strat_results: Dict) -> str:
f"Your starting balance was {start_balance}, " f"Your starting balance was {start_balance}, "
f"and your stake was {stake_amount}." f"and your stake was {stake_amount}."
) )
return message print(message)
def _show_tag_subresults(results: Dict[str, Any], stake_currency: str): def _show_tag_subresults(results: Dict[str, Any], stake_currency: str):
@ -395,25 +396,13 @@ def _show_tag_subresults(results: Dict[str, Any], stake_currency: str):
Print tag subresults (enter_tag, exit_reason_summary, mix_tag_stats) Print tag subresults (enter_tag, exit_reason_summary, mix_tag_stats)
""" """
if (enter_tags := results.get("results_per_enter_tag")) is not None: if (enter_tags := results.get("results_per_enter_tag")) is not None:
table = text_table_tags("enter_tag", enter_tags, stake_currency) text_table_tags("enter_tag", enter_tags, stake_currency)
if isinstance(table, str) and len(table) > 0:
print(" ENTER TAG STATS ".center(len(table.splitlines()[0]), "="))
print(table)
if (exit_reasons := results.get("exit_reason_summary")) is not None: if (exit_reasons := results.get("exit_reason_summary")) is not None:
table = text_table_tags("exit_tag", exit_reasons, stake_currency) text_table_tags("exit_tag", exit_reasons, stake_currency)
if isinstance(table, str) and len(table) > 0:
print(" EXIT REASON STATS ".center(len(table.splitlines()[0]), "="))
print(table)
if (mix_tag := results.get("mix_tag_stats")) is not None: if (mix_tag := results.get("mix_tag_stats")) is not None:
table = text_table_tags("mix_tag", mix_tag, stake_currency) text_table_tags("mix_tag", mix_tag, stake_currency)
if isinstance(table, str) and len(table) > 0:
print(" MIXED TAG STATS ".center(len(table.splitlines()[0]), "="))
print(table)
def show_backtest_result( def show_backtest_result(
@ -424,15 +413,12 @@ def show_backtest_result(
""" """
# Print results # Print results
print(f"Result for strategy {strategy}") print(f"Result for strategy {strategy}")
table = text_table_bt_results(results["results_per_pair"], stake_currency=stake_currency) text_table_bt_results(
if isinstance(table, str): results["results_per_pair"], stake_currency=stake_currency, title="BACKTESTING REPORT"
print(" BACKTESTING REPORT ".center(len(table.splitlines()[0]), "=")) )
print(table) text_table_bt_results(
results["left_open_trades"], stake_currency=stake_currency, title="LEFT OPEN TRADES REPORT"
table = text_table_bt_results(results["left_open_trades"], stake_currency=stake_currency) )
if isinstance(table, str) and len(table) > 0:
print(" LEFT OPEN TRADES REPORT ".center(len(table.splitlines()[0]), "="))
print(table)
_show_tag_subresults(results, stake_currency) _show_tag_subresults(results, stake_currency)
@ -443,20 +429,11 @@ def show_backtest_result(
days_breakdown_stats = generate_periodic_breakdown_stats( days_breakdown_stats = generate_periodic_breakdown_stats(
trade_list=results["trades"], period=period trade_list=results["trades"], period=period
) )
table = text_table_periodic_breakdown( text_table_periodic_breakdown(
days_breakdown_stats=days_breakdown_stats, stake_currency=stake_currency, period=period days_breakdown_stats=days_breakdown_stats, stake_currency=stake_currency, period=period
) )
if isinstance(table, str) and len(table) > 0:
print(f" {period.upper()} BREAKDOWN ".center(len(table.splitlines()[0]), "="))
print(table)
table = text_table_add_metrics(results) text_table_add_metrics(results)
if isinstance(table, str) and len(table) > 0:
print(" SUMMARY METRICS ".center(len(table.splitlines()[0]), "="))
print(table)
if isinstance(table, str) and len(table) > 0:
print("=" * len(table.splitlines()[0]))
print() print()
@ -472,15 +449,13 @@ def show_backtest_results(config: Config, backtest_stats: BacktestResultType):
if len(backtest_stats["strategy"]) > 0: if len(backtest_stats["strategy"]) > 0:
# Print Strategy summary table # Print Strategy summary table
table = text_table_strategy(backtest_stats["strategy_comparison"], stake_currency)
print( print(
f"Backtested {results['backtest_start']} -> {results['backtest_end']} |" f"Backtested {results['backtest_start']} -> {results['backtest_end']} |"
f" Max open trades : {results['max_open_trades']}" f" Max open trades : {results['max_open_trades']}"
) )
print(" STRATEGY SUMMARY ".center(len(table.splitlines()[0]), "=")) text_table_strategy(
print(table) backtest_stats["strategy_comparison"], stake_currency, "STRATEGY SUMMARY"
print("=" * len(table.splitlines()[0])) )
print("\nFor more details, please look at the detail tables above")
def show_sorted_pairlist(config: Config, backtest_stats: BacktestResultType): def show_sorted_pairlist(config: Config, backtest_stats: BacktestResultType):
@ -493,8 +468,7 @@ def show_sorted_pairlist(config: Config, backtest_stats: BacktestResultType):
print("]") print("]")
def generate_edge_table(results: dict) -> str: def generate_edge_table(results: dict) -> None:
floatfmt = ("s", ".10g", ".2f", ".2f", ".2f", ".2f", "d", "d", "d")
tabular_data = [] tabular_data = []
headers = [ headers = [
"Pair", "Pair",
@ -512,17 +486,13 @@ def generate_edge_table(results: dict) -> str:
tabular_data.append( tabular_data.append(
[ [
result[0], result[0],
result[1].stoploss, f"{result[1].stoploss:.10g}",
result[1].winrate, f"{result[1].winrate:.2f}",
result[1].risk_reward_ratio, f"{result[1].risk_reward_ratio:.2f}",
result[1].required_risk_reward, f"{result[1].required_risk_reward:.2f}",
result[1].expectancy, f"{result[1].expectancy:.2f}",
result[1].nb_trades, result[1].nb_trades,
round(result[1].avg_trade_duration), round(result[1].avg_trade_duration),
] ]
) )
print_rich_table(tabular_data, headers, summary="EDGE TABLE")
# Ignore type as floatfmt does allow tuples but mypy does not know that
return tabulate(
tabular_data, headers=headers, floatfmt=floatfmt, tablefmt="orgtbl", stralign="right"
)
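The pattern repeated throughout this file: tabulate's `floatfmt` tuples are gone, and each numeric cell is pre-formatted with an f-string before the row reaches `print_rich_table`, which only renders strings. A small sketch of the conversion, with hypothetical values:

```python
# Hypothetical cell values; the f-string specs mirror the removed floatfmt
# tuple ("s", ".10g", ".2f", ...) that tabulate used to apply.
stoploss, winrate = -0.0315, 0.6666666
row = [
    "XRP/USDT",          # "s"   - left as a plain string
    f"{stoploss:.10g}",  # ".10g"
    f"{winrate:.2f}",    # ".2f"
]
assert row == ["XRP/USDT", "-0.0315", "0.67"]
```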
View File
@ -1,4 +1,3 @@
import contextlib
import threading import threading
import time import time
@ -53,7 +52,6 @@ class UvicornServer(uvicorn.Server):
loop = asyncio.new_event_loop() loop = asyncio.new_event_loop()
loop.run_until_complete(self.serve(sockets=sockets)) loop.run_until_complete(self.serve(sockets=sockets))
@contextlib.contextmanager
def run_in_thread(self): def run_in_thread(self):
self.thread = threading.Thread(target=self.run, name="FTUvicorn") self.thread = threading.Thread(target=self.run, name="FTUvicorn")
self.thread.start() self.thread.start()
View File
@ -1401,19 +1401,21 @@ class Telegram(RPCHandler):
nrecent = int(context.args[0]) if context.args else 10 nrecent = int(context.args[0]) if context.args else 10
except (TypeError, ValueError, IndexError): except (TypeError, ValueError, IndexError):
nrecent = 10 nrecent = 10
nonspot = self._config.get("trading_mode", TradingMode.SPOT) != TradingMode.SPOT
trades = self._rpc._rpc_trade_history(nrecent) trades = self._rpc._rpc_trade_history(nrecent)
trades_tab = tabulate( trades_tab = tabulate(
[ [
[ [
dt_humanize_delta(dt_from_ts(trade["close_timestamp"])), dt_humanize_delta(dt_from_ts(trade["close_timestamp"])),
trade["pair"] + " (#" + str(trade["trade_id"]) + ")", f"{trade['pair']} (#{trade['trade_id']}"
f"{(' ' + ('S' if trade['is_short'] else 'L')) if nonspot else ''})",
f"{(trade['close_profit']):.2%} ({trade['close_profit_abs']})", f"{(trade['close_profit']):.2%} ({trade['close_profit_abs']})",
] ]
for trade in trades["trades"] for trade in trades["trades"]
], ],
headers=[ headers=[
"Close Date", "Close Date",
"Pair (ID)", "Pair (ID L/S)" if nonspot else "Pair (ID)",
f"Profit ({stake_cur})", f"Profit ({stake_cur})",
], ],
tablefmt="simple", tablefmt="simple",
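The pair cell above is assembled from two adjacent f-strings; on non-spot accounts it appends an L/S marker inside the parentheses. A small sketch of the resulting label, using a made-up trade dict:

```python
# Made-up trade dict, mirroring the two adjacent f-strings above.
trade = {"pair": "ETH/USDT:USDT", "trade_id": 7, "is_short": True}
nonspot = True

label = (
    f"{trade['pair']} (#{trade['trade_id']}"
    f"{(' ' + ('S' if trade['is_short'] else 'L')) if nonspot else ''})"
)
assert label == "ETH/USDT:USDT (#7 S)"  # spot accounts keep the plain "(#7)"
```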
View File
@ -5,6 +5,7 @@ This module defines the interface to apply for strategies
import logging import logging
from abc import ABC, abstractmethod from abc import ABC, abstractmethod
from collections import OrderedDict
from datetime import datetime, timedelta, timezone from datetime import datetime, timedelta, timezone
from math import isinf, isnan from math import isinf, isnan
from typing import Dict, List, Optional, Tuple, Union from typing import Dict, List, Optional, Tuple, Union
@ -12,6 +13,7 @@ from typing import Dict, List, Optional, Tuple, Union
from pandas import DataFrame from pandas import DataFrame
from freqtrade.constants import CUSTOM_TAG_MAX_LENGTH, Config, IntOrInf, ListPairsWithTimeframes from freqtrade.constants import CUSTOM_TAG_MAX_LENGTH, Config, IntOrInf, ListPairsWithTimeframes
from freqtrade.data.converter import populate_dataframe_with_trades
from freqtrade.data.dataprovider import DataProvider from freqtrade.data.dataprovider import DataProvider
from freqtrade.enums import ( from freqtrade.enums import (
CandleType, CandleType,
@ -139,6 +141,11 @@ class IStrategy(ABC, HyperStrategyMixin):
# A self set parameter that represents the market direction. filled from configuration # A self set parameter that represents the market direction. filled from configuration
market_direction: MarketDirection = MarketDirection.NONE market_direction: MarketDirection = MarketDirection.NONE
# Global cache dictionary
_cached_grouped_trades_per_pair: Dict[
str, OrderedDict[Tuple[datetime, datetime], DataFrame]
] = {}
def __init__(self, config: Config) -> None: def __init__(self, config: Config) -> None:
self.config = config self.config = config
# Dict to determine if analysis is necessary # Dict to determine if analysis is necessary
@ -1040,6 +1047,7 @@ class IStrategy(ABC, HyperStrategyMixin):
dataframe = self.advise_indicators(dataframe, metadata) dataframe = self.advise_indicators(dataframe, metadata)
dataframe = self.advise_entry(dataframe, metadata) dataframe = self.advise_entry(dataframe, metadata)
dataframe = self.advise_exit(dataframe, metadata) dataframe = self.advise_exit(dataframe, metadata)
logger.debug("TA Analysis Ended")
return dataframe return dataframe
def _analyze_ticker_internal(self, dataframe: DataFrame, metadata: dict) -> DataFrame: def _analyze_ticker_internal(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
@ -1594,6 +1602,29 @@ class IStrategy(ABC, HyperStrategyMixin):
dataframe = self.advise_exit(dataframe, metadata) dataframe = self.advise_exit(dataframe, metadata)
return dataframe return dataframe
def _if_enabled_populate_trades(self, dataframe: DataFrame, metadata: dict):
use_public_trades = self.config.get("exchange", {}).get("use_public_trades", False)
if use_public_trades:
trades = self.dp.trades(pair=metadata["pair"], copy=False)
config = self.config
config["timeframe"] = self.timeframe
pair = metadata["pair"]
# TODO: slice trades to size of dataframe for faster backtesting
cached_grouped_trades: OrderedDict[Tuple[datetime, datetime], DataFrame] = (
self._cached_grouped_trades_per_pair.get(pair, OrderedDict())
)
dataframe, cached_grouped_trades = populate_dataframe_with_trades(
cached_grouped_trades, config, dataframe, trades
)
# dereference old cache
if pair in self._cached_grouped_trades_per_pair:
del self._cached_grouped_trades_per_pair[pair]
self._cached_grouped_trades_per_pair[pair] = cached_grouped_trades
logger.debug("Populated dataframe with trades.")
def advise_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame: def advise_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
""" """
Populate indicators that will be used in the Buy, Sell, short, exit_short strategy Populate indicators that will be used in the Buy, Sell, short, exit_short strategy
@ -1610,6 +1641,7 @@ class IStrategy(ABC, HyperStrategyMixin):
self, dataframe, metadata, inf_data, populate_fn self, dataframe, metadata, inf_data, populate_fn
) )
self._if_enabled_populate_trades(dataframe, metadata)
return self.populate_indicators(dataframe, metadata) return self.populate_indicators(dataframe, metadata)
def advise_entry(self, dataframe: DataFrame, metadata: dict) -> DataFrame: def advise_entry(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
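Once `_if_enabled_populate_trades` has run, `populate_indicators` sees the extra orderflow columns (the tests further down list them: `trades`, `orderflow`, `imbalances`, `stacked_imbalances_bid`/`_ask`, `max_delta`, `min_delta`, `bid`, `ask`, `delta`, `total_trades`). A minimal strategy sketch consuming them; the entry condition is purely illustrative and assumes `use_public_trades` is enabled in the exchange config:

```python
# Illustrative sketch only - reads the columns added by _if_enabled_populate_trades.
from pandas import DataFrame

from freqtrade.strategy import IStrategy


class OrderflowSketch(IStrategy):
    timeframe = "5m"
    minimal_roi = {"0": 0.1}
    stoploss = -0.10

    def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        # delta / min_delta / max_delta are per-candle aggregates of public trades.
        dataframe["delta_positive"] = dataframe["delta"] > 0
        return dataframe

    def populate_entry_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        dataframe.loc[dataframe["delta_positive"], "enter_long"] = 1
        return dataframe

    def populate_exit_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        return dataframe
```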
View File
@ -15,6 +15,9 @@ from freqtrade.util.formatters import decimals_per_coin, fmt_coin, round_value
from freqtrade.util.ft_precise import FtPrecise from freqtrade.util.ft_precise import FtPrecise
from freqtrade.util.measure_time import MeasureTime from freqtrade.util.measure_time import MeasureTime
from freqtrade.util.periodic_cache import PeriodicCache from freqtrade.util.periodic_cache import PeriodicCache
from freqtrade.util.progress_tracker import get_progress_tracker # noqa F401
from freqtrade.util.rich_progress import CustomProgress
from freqtrade.util.rich_tables import print_df_rich_table, print_rich_table
from freqtrade.util.template_renderer import render_template, render_template_with_fallback # noqa from freqtrade.util.template_renderer import render_template, render_template_with_fallback # noqa
@ -36,4 +39,7 @@ __all__ = [
"round_value", "round_value",
"fmt_coin", "fmt_coin",
"MeasureTime", "MeasureTime",
"print_rich_table",
"print_df_rich_table",
"CustomProgress",
] ]
View File
@ -0,0 +1,28 @@
from rich.progress import (
BarColumn,
MofNCompleteColumn,
TaskProgressColumn,
TextColumn,
TimeElapsedColumn,
TimeRemainingColumn,
)
from freqtrade.util.rich_progress import CustomProgress
def get_progress_tracker(**kwargs):
"""
Get progress Bar with custom columns.
"""
return CustomProgress(
TextColumn("[progress.description]{task.description}"),
BarColumn(bar_width=None),
MofNCompleteColumn(),
TaskProgressColumn(),
"",
TimeElapsedColumn(),
"",
TimeRemainingColumn(),
expand=True,
**kwargs,
)
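Since `CustomProgress` subclasses rich's `Progress`, the tracker drops into the usual context-manager workflow; a minimal usage sketch:

```python
import time

from freqtrade.util import get_progress_tracker

# The standard rich Progress API applies: add_task / update inside a context.
with get_progress_tracker() as pbar:
    task = pbar.add_task("Backtesting...", total=100)
    for _ in range(100):
        time.sleep(0.01)  # stand-in for real work
        pbar.update(task, advance=1)
```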
View File
@ -0,0 +1,14 @@
from typing import Union
from rich.console import ConsoleRenderable, Group, RichCast
from rich.progress import Progress
class CustomProgress(Progress):
def __init__(self, *args, cust_objs=[], **kwargs) -> None:
self._cust_objs = cust_objs
super().__init__(*args, **kwargs)
def get_renderable(self) -> Union[ConsoleRenderable, RichCast, str]:
renderable = Group(*self._cust_objs, *self.get_renderables())
return renderable
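`get_renderable` prepends `cust_objs` to the live display, so any rich renderable (a running results table, say) can sit above the progress bars. A sketch:

```python
from rich.table import Table

from freqtrade.util import CustomProgress

# Renderables passed via cust_objs are drawn above the progress bars.
results = Table(title="Results")
results.add_column("Epoch")
results.add_column("Objective")

with CustomProgress(cust_objs=[results]) as progress:
    task = progress.add_task("epochs", total=10)
    for i in range(10):
        results.add_row(str(i), f"{1.0 / (i + 1):.4f}")
        progress.update(task, advance=1)
```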
View File
@ -0,0 +1,77 @@
import sys
from typing import Any, Dict, List, Optional, Sequence, Union
from pandas import DataFrame
from rich.console import Console
from rich.table import Column, Table
from rich.text import Text
TextOrString = Union[str, Text]
def print_rich_table(
tabular_data: Sequence[Union[Dict[str, Any], Sequence[TextOrString]]],
headers: Sequence[str],
summary: Optional[str] = None,
*,
justify="right",
table_kwargs: Optional[Dict[str, Any]] = None,
) -> None:
table = Table(
*[c if isinstance(c, Column) else Column(c, justify=justify) for c in headers],
title=summary,
**(table_kwargs or {}),
)
for row in tabular_data:
if isinstance(row, dict):
table.add_row(
*[
row[header] if isinstance(row[header], Text) else str(row[header])
for header in headers
]
)
else:
row_to_add: List[Union[str, Text]] = [r if isinstance(r, Text) else str(r) for r in row]
table.add_row(*row_to_add)
console = Console(
width=200 if "pytest" in sys.modules else None,
)
console.print(table)
def _format_value(value: Any, *, floatfmt: str) -> str:
if isinstance(value, float):
return f"{value:{floatfmt}}"
return str(value)
def print_df_rich_table(
tabular_data: DataFrame,
headers: Sequence[str],
summary: Optional[str] = None,
*,
show_index=False,
index_name: Optional[str] = None,
table_kwargs: Optional[Dict[str, Any]] = None,
) -> None:
table = Table(title=summary, **(table_kwargs or {}))
if show_index:
index_name = str(index_name) if index_name else tabular_data.index.name
table.add_column(index_name)
for header in headers:
table.add_column(header, justify="right")
for value_list in tabular_data.itertuples(index=show_index):
row = [_format_value(x, floatfmt=".3f") for x in value_list]
table.add_row(*row)
console = Console(
width=200 if "pytest" in sys.modules else None,
)
console.print(table)
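Both helpers print directly to a rich `Console` rather than returning a string; a usage sketch with made-up rows:

```python
from freqtrade.util import print_rich_table

headers = ["Pair", "Trades", "Profit %"]
rows = [
    ["BTC/USDT", 12, "1.35"],   # sequences are matched to headers positionally
    ["ETH/USDT", 8, "-0.42"],
]
print_rich_table(rows, headers, summary="BACKTESTING REPORT")

# Dict rows are looked up by header name instead.
print_rich_table([{"Pair": "XRP/USDT", "Trades": 3, "Profit %": "0.10"}], headers)
```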
View File
@ -48,6 +48,7 @@ nav:
- Recursive analysis: recursive-analysis.md - Recursive analysis: recursive-analysis.md
- Advanced Strategy: strategy-advanced.md - Advanced Strategy: strategy-advanced.md
- Advanced Hyperopt: advanced-hyperopt.md - Advanced Hyperopt: advanced-hyperopt.md
- Orderflow: advanced-orderflow.md
- Producer/Consumer mode: producer-consumer.md - Producer/Consumer mode: producer-consumer.md
- SQL Cheat-sheet: sql_cheatsheet.md - SQL Cheat-sheet: sql_cheatsheet.md
- Edge Positioning: edge.md - Edge Positioning: edge.md
View File
@ -7,11 +7,11 @@
-r docs/requirements-docs.txt -r docs/requirements-docs.txt
coveralls==4.0.1 coveralls==4.0.1
ruff==0.5.1 ruff==0.5.4
mypy==1.10.1 mypy==1.11.0
pre-commit==3.7.1 pre-commit==3.7.1
pytest==8.2.2 pytest==8.3.1
pytest-asyncio==0.23.7 pytest-asyncio==0.23.8
pytest-cov==5.0.0 pytest-cov==5.0.0
pytest-mock==3.14.0 pytest-mock==3.14.0
pytest-random-order==1.1.1 pytest-random-order==1.1.1
@ -25,8 +25,8 @@ time-machine==2.14.2
nbconvert==7.16.4 nbconvert==7.16.4
# mypy types # mypy types
types-cachetools==5.3.0.7 types-cachetools==5.4.0.20240717
types-filelock==3.2.7 types-filelock==3.2.7
types-requests==2.32.0.20240622 types-requests==2.32.0.20240712
types-tabulate==0.9.0.20240106 types-tabulate==0.9.0.20240106
types-python-dateutil==2.9.0.20240316 types-python-dateutil==2.9.0.20240316
View File
@ -4,18 +4,18 @@ bottleneck==1.4.0
numexpr==2.10.1 numexpr==2.10.1
pandas-ta==0.3.14b pandas-ta==0.3.14b
ccxt==4.3.58 ccxt==4.3.65
cryptography==42.0.8 cryptography==43.0.0
aiohttp==3.9.5 aiohttp==3.9.5
SQLAlchemy==2.0.31 SQLAlchemy==2.0.31
python-telegram-bot==21.3 python-telegram-bot==21.4
# can't be hard-pinned due to telegram-bot pinning httpx with ~ # can't be hard-pinned due to telegram-bot pinning httpx with ~
httpx>=0.24.1 httpx>=0.24.1
humanize==4.9.0 humanize==4.10.0
cachetools==5.3.3 cachetools==5.4.0
requests==2.32.3 requests==2.32.3
urllib3==2.2.2 urllib3==2.2.2
jsonschema==4.22.0 jsonschema==4.23.0
TA-Lib==0.4.32 TA-Lib==0.4.32
technical==1.4.3 technical==1.4.3
tabulate==0.9.0 tabulate==0.9.0
@ -24,7 +24,7 @@ jinja2==3.1.4
tables==3.9.1 tables==3.9.1
joblib==1.4.2 joblib==1.4.2
rich==13.7.1 rich==13.7.1
pyarrow==16.1.0; platform_machine != 'armv7l' pyarrow==17.0.0; platform_machine != 'armv7l'
# find first, C search in arrays # find first, C search in arrays
py_find_1st==1.1.6 py_find_1st==1.1.6
@ -38,15 +38,13 @@ orjson==3.10.6
sdnotify==0.3.2 sdnotify==0.3.2
# API Server # API Server
fastapi==0.111.0 fastapi==0.111.1
pydantic==2.8.2 pydantic==2.8.2
uvicorn==0.30.1 uvicorn==0.30.3
pyjwt==2.8.0 pyjwt==2.8.0
aiofiles==24.1.0 aiofiles==24.1.0
psutil==6.0.0 psutil==6.0.0
# Support for colorized terminal output
colorama==0.4.6
# Building config files interactively # Building config files interactively
questionary==2.0.1 questionary==2.0.1
prompt-toolkit==3.0.36 prompt-toolkit==3.0.36
View File
@ -88,7 +88,6 @@ setup(
"py_find_1st", "py_find_1st",
"python-rapidjson", "python-rapidjson",
"orjson", "orjson",
"colorama",
"jinja2", "jinja2",
"questionary", "questionary",
"prompt-toolkit", "prompt-toolkit",
View File
@ -116,7 +116,7 @@ def test_list_exchanges(capsys):
start_list_exchanges(get_args(args)) start_list_exchanges(get_args(args))
captured = capsys.readouterr() captured = capsys.readouterr()
assert re.match(r"Exchanges available for Freqtrade.*", captured.out) assert re.search(r".*Exchanges available for Freqtrade.*", captured.out)
assert re.search(r".*binance.*", captured.out) assert re.search(r".*binance.*", captured.out)
assert re.search(r".*bybit.*", captured.out) assert re.search(r".*bybit.*", captured.out)
@ -139,7 +139,7 @@ def test_list_exchanges(capsys):
start_list_exchanges(get_args(args)) start_list_exchanges(get_args(args))
captured = capsys.readouterr() captured = capsys.readouterr()
assert re.match(r"All exchanges supported by the ccxt library.*", captured.out) assert re.search(r"All exchanges supported by the ccxt library.*", captured.out)
assert re.search(r".*binance.*", captured.out) assert re.search(r".*binance.*", captured.out)
assert re.search(r".*bingx.*", captured.out) assert re.search(r".*bingx.*", captured.out)
assert re.search(r".*bitmex.*", captured.out) assert re.search(r".*bitmex.*", captured.out)
@ -293,7 +293,7 @@ def test_list_markets(mocker, markets_static, capsys):
pargs["config"] = None pargs["config"] = None
start_list_markets(pargs, False) start_list_markets(pargs, False)
captured = capsys.readouterr() captured = capsys.readouterr()
assert re.match("\nExchange Binance has 12 active markets:\n", captured.out) assert re.search(r".*Exchange Binance has 12 active markets.*", captured.out)
patch_exchange(mocker, api_mock=api_mock, exchange="binance", mock_markets=markets_static) patch_exchange(mocker, api_mock=api_mock, exchange="binance", mock_markets=markets_static)
# Test with --all: all markets # Test with --all: all markets
@ -491,7 +491,7 @@ def test_list_markets(mocker, markets_static, capsys):
] ]
start_list_markets(get_args(args), False) start_list_markets(get_args(args), False)
captured = capsys.readouterr() captured = capsys.readouterr()
assert "Exchange Binance has 12 active markets:\n" in captured.out assert "Exchange Binance has 12 active markets" in captured.out
# Test tabular output, no markets found # Test tabular output, no markets found
args = [ args = [
@ -1633,8 +1633,8 @@ def test_start_list_data(testdatadir, capsys):
start_list_data(pargs) start_list_data(pargs)
captured = capsys.readouterr() captured = capsys.readouterr()
assert "Found 16 pair / timeframe combinations." in captured.out assert "Found 16 pair / timeframe combinations." in captured.out
assert "\n| Pair | Timeframe | Type |\n" in captured.out assert re.search(r".*Pair.*Timeframe.*Type.*\n", captured.out)
assert "\n| UNITTEST/BTC | 1m, 5m, 8m, 30m | spot |\n" in captured.out assert re.search(r"\n.* UNITTEST/BTC .* 1m, 5m, 8m, 30m .* spot |\n", captured.out)
args = [ args = [
"list-data", "list-data",
@ -1650,9 +1650,9 @@ def test_start_list_data(testdatadir, capsys):
start_list_data(pargs) start_list_data(pargs)
captured = capsys.readouterr() captured = capsys.readouterr()
assert "Found 2 pair / timeframe combinations." in captured.out assert "Found 2 pair / timeframe combinations." in captured.out
assert "\n| Pair | Timeframe | Type |\n" in captured.out assert re.search(r".*Pair.*Timeframe.*Type.*\n", captured.out)
assert "UNITTEST/BTC" not in captured.out assert "UNITTEST/BTC" not in captured.out
assert "\n| XRP/ETH | 1m, 5m | spot |\n" in captured.out assert re.search(r"\n.* XRP/ETH .* 1m, 5m .* spot |\n", captured.out)
args = [ args = [
"list-data", "list-data",
@ -1667,9 +1667,9 @@ def test_start_list_data(testdatadir, capsys):
captured = capsys.readouterr() captured = capsys.readouterr()
assert "Found 6 pair / timeframe combinations." in captured.out assert "Found 6 pair / timeframe combinations." in captured.out
assert "\n| Pair | Timeframe | Type |\n" in captured.out assert re.search(r".*Pair.*Timeframe.*Type.*\n", captured.out)
assert "\n| XRP/USDT:USDT | 5m, 1h | futures |\n" in captured.out assert re.search(r"\n.* XRP/USDT:USDT .* 5m, 1h .* futures |\n", captured.out)
assert "\n| XRP/USDT:USDT | 1h, 8h | mark |\n" in captured.out assert re.search(r"\n.* XRP/USDT:USDT .* 1h, 8h .* mark |\n", captured.out)
args = [ args = [
"list-data", "list-data",
@ -1684,15 +1684,12 @@ def test_start_list_data(testdatadir, capsys):
start_list_data(pargs) start_list_data(pargs)
captured = capsys.readouterr() captured = capsys.readouterr()
assert "Found 2 pair / timeframe combinations." in captured.out assert "Found 2 pair / timeframe combinations." in captured.out
assert ( assert re.search(r".*Pair.*Timeframe.*Type.*From .* To .* Candles .*\n", captured.out)
"\n| Pair | Timeframe | Type "
"| From | To | Candles |\n"
) in captured.out
assert "UNITTEST/BTC" not in captured.out assert "UNITTEST/BTC" not in captured.out
assert ( assert re.search(
"\n| XRP/ETH | 1m | spot | " r"\n.* XRP/USDT .* 1m .* spot .* 2019-10-11 00:00:00 .* 2019-10-13 11:19:00 .* 2469 |\n",
"2019-10-11 00:00:00 | 2019-10-13 11:19:00 | 2469 |\n" captured.out,
) in captured.out )
@pytest.mark.usefixtures("init_persistence") @pytest.mark.usefixtures("init_persistence")
View File
@ -614,6 +614,7 @@ def get_default_conf(testdatadir):
"internals": {}, "internals": {},
"export": "none", "export": "none",
"dataformat_ohlcv": "feather", "dataformat_ohlcv": "feather",
"dataformat_trades": "feather",
"runmode": "dry_run", "runmode": "dry_run",
"candle_type_def": CandleType.SPOT, "candle_type_def": CandleType.SPOT,
} }
View File
@ -324,7 +324,8 @@ def hyperopt_test_result():
"profit_mean": None, "profit_mean": None,
"profit_median": None, "profit_median": None,
"profit_total": 0, "profit_total": 0,
"profit": 0.0, "max_drawdown_account": 0.0,
"max_drawdown_abs": 0.0,
"holding_avg": timedelta(), "holding_avg": timedelta(),
}, # noqa: E501 }, # noqa: E501
"results_explanation": " 0 trades. Avg profit nan%. Total profit 0.00000000 BTC ( 0.00Σ%). Avg duration nan min.", # noqa: E501 "results_explanation": " 0 trades. Avg profit nan%. Total profit 0.00000000 BTC ( 0.00Σ%). Avg duration nan min.", # noqa: E501
View File
@ -0,0 +1,483 @@
from collections import OrderedDict
import numpy as np
import pandas as pd
import pytest
from freqtrade.constants import DEFAULT_TRADES_COLUMNS
from freqtrade.data.converter import populate_dataframe_with_trades
from freqtrade.data.converter.orderflow import trades_to_volumeprofile_with_total_delta_bid_ask
from freqtrade.data.converter.trade_converter import trades_list_to_df
BIN_SIZE_SCALE = 0.5
def read_csv(filename, converter_columns: list = ["side", "type"]):
return pd.read_csv(
filename,
skipinitialspace=True,
index_col=0,
parse_dates=True,
date_format="ISO8601",
converters={col: str.strip for col in converter_columns},
)
@pytest.fixture
def populate_dataframe_with_trades_dataframe(testdatadir):
return pd.read_feather(testdatadir / "orderflow/populate_dataframe_with_trades_DF.feather")
@pytest.fixture
def populate_dataframe_with_trades_trades(testdatadir):
return pd.read_feather(testdatadir / "orderflow/populate_dataframe_with_trades_TRADES.feather")
@pytest.fixture
def candles(testdatadir):
return pd.read_json(testdatadir / "orderflow/candles.json").copy()
@pytest.fixture
def public_trades_list(testdatadir):
return read_csv(testdatadir / "orderflow/public_trades_list.csv").copy()
@pytest.fixture
def public_trades_list_simple(testdatadir):
return read_csv(testdatadir / "orderflow/public_trades_list_simple_example.csv").copy()
def test_public_trades_columns_before_change(
populate_dataframe_with_trades_dataframe, populate_dataframe_with_trades_trades
):
assert populate_dataframe_with_trades_dataframe.columns.tolist() == [
"date",
"open",
"high",
"low",
"close",
"volume",
]
assert populate_dataframe_with_trades_trades.columns.tolist() == [
"timestamp",
"id",
"type",
"side",
"price",
"amount",
"cost",
"date",
]
def test_public_trades_mock_populate_dataframe_with_trades__check_orderflow(
populate_dataframe_with_trades_dataframe, populate_dataframe_with_trades_trades
):
"""
Tests the `populate_dataframe_with_trades` function's order flow calculation.
This test checks the generated data frame and order flow for specific properties
based on the provided configuration and sample data.
"""
# Create copies of the input data to avoid modifying the originals
dataframe = populate_dataframe_with_trades_dataframe.copy()
trades = populate_dataframe_with_trades_trades.copy()
# Convert the 'date' column to datetime format with milliseconds
dataframe["date"] = pd.to_datetime(dataframe["date"], unit="ms")
# Select the last rows and reset the index (optional, depends on usage)
dataframe = dataframe.copy().tail().reset_index(drop=True)
# Define the configuration for order flow calculation
config = {
"timeframe": "5m",
"orderflow": {
"cache_size": 1000,
"max_candles": 1500,
"scale": 0.005,
"imbalance_volume": 0,
"imbalance_ratio": 3,
"stacked_imbalance_range": 3,
},
}
# Apply the function to populate the data frame with order flow data
df, _ = populate_dataframe_with_trades(OrderedDict(), config, dataframe, trades)
# Extract results from the first row of the DataFrame
results = df.iloc[0]
t = results["trades"]
of = results["orderflow"]
# Assert basic properties of the results
assert 0 != len(results)
assert 151 == len(t)
# --- Order Flow Analysis ---
# Assert number of order flow data points
assert 23 == len(of) # Assert expected number of data points
assert isinstance(of, dict)
of_values = list(of.values())
# Assert specific order flow values at the beginning of the DataFrame
assert of_values[0] == {
"bid": 0.0,
"ask": 1.0,
"delta": 4.999,
"bid_amount": 0.0,
"ask_amount": 4.999,
"total_volume": 4.999,
"total_trades": 1,
}
# Assert specific order flow values at the end of the DataFrame (excluding last row)
assert of_values[-1] == {
"bid": 0.0,
"ask": 1.0,
"delta": 0.103,
"bid_amount": 0.0,
"ask_amount": 0.103,
"total_volume": 0.103,
"total_trades": 1,
}
# Extract order flow from the last row of the DataFrame
of = df.iloc[-1]["orderflow"]
# Assert number of order flow data points in the last row
assert 19 == len(of) # Assert expected number of data points
of_values1 = list(of.values())
# Assert specific order flow values at the beginning of the last row
assert of_values1[0] == {
"bid": 1.0,
"ask": 0.0,
"delta": -12.536,
"bid_amount": 12.536,
"ask_amount": 0.0,
"total_volume": 12.536,
"total_trades": 1,
}
# Assert specific order flow values at the end of the last row
assert pytest.approx(of_values1[-1]) == {
"bid": 4.0,
"ask": 3.0,
"delta": -40.948,
"bid_amount": 59.182,
"ask_amount": 18.23399,
"total_volume": 77.416,
"total_trades": 7,
}
# --- Delta and Other Results ---
# Assert delta value from the first row
assert pytest.approx(results["delta"]) == -50.519
# Assert min and max delta values from the first row
assert results["min_delta"] == -79.469
assert results["max_delta"] == 17.298
# Assert that stacked imbalances are NaN (not applicable in this test)
assert np.isnan(results["stacked_imbalances_bid"])
assert np.isnan(results["stacked_imbalances_ask"])
# Repeat assertions for the second from last row
results = df.iloc[-2]
assert pytest.approx(results["delta"]) == -20.862
assert pytest.approx(results["min_delta"]) == -54.559999
assert 82.842 == results["max_delta"]
assert 234.99 == results["stacked_imbalances_bid"]
assert 234.96 == results["stacked_imbalances_ask"]
# Repeat assertions for the last row
results = df.iloc[-1]
assert pytest.approx(results["delta"]) == -49.302
assert results["min_delta"] == -70.222
assert pytest.approx(results["max_delta"]) == 11.213
assert np.isnan(results["stacked_imbalances_bid"])
assert np.isnan(results["stacked_imbalances_ask"])
def test_public_trades_trades_mock_populate_dataframe_with_trades__check_trades(
populate_dataframe_with_trades_dataframe, populate_dataframe_with_trades_trades
):
"""
Tests the `populate_dataframe_with_trades` function's handling of trades,
ensuring correct integration of trades data into the generated DataFrame.
"""
# Create copies of the input data to avoid modifying the originals
dataframe = populate_dataframe_with_trades_dataframe.copy()
trades = populate_dataframe_with_trades_trades.copy()
# --- Data Preparation ---
# Convert the 'date' column to datetime format with milliseconds
dataframe["date"] = pd.to_datetime(dataframe["date"], unit="ms")
# Select the last rows and reset the index
dataframe = dataframe.tail().reset_index(drop=True)
# Filter trades to those occurring after or at the same time as the first DataFrame date
trades = trades.loc[trades.date >= dataframe.date[0]]
trades.reset_index(inplace=True, drop=True) # Reset index for clarity
# Assert the first trade ID to ensure filtering worked correctly
assert trades["id"][0] == "313881442"
# --- Configuration and Function Call ---
# Define configuration for order flow calculation (used for context)
config = {
"timeframe": "5m",
"orderflow": {
"cache_size": 1000,
"max_candles": 1500,
"scale": 0.5,
"imbalance_volume": 0,
"imbalance_ratio": 3,
"stacked_imbalance_range": 3,
},
}
# Populate the DataFrame with trades and order flow data
df, _ = populate_dataframe_with_trades(OrderedDict(), config, dataframe, trades)
# --- DataFrame and Trade Data Validation ---
row = df.iloc[0] # Extract the first row for assertions
# Assert DataFrame structure
assert list(df.columns) == [
# ... (list of expected column names)
"date",
"open",
"high",
"low",
"close",
"volume",
"trades",
"orderflow",
"imbalances",
"stacked_imbalances_bid",
"stacked_imbalances_ask",
"max_delta",
"min_delta",
"bid",
"ask",
"delta",
"total_trades",
]
# Assert delta, bid, and ask values
assert pytest.approx(row["delta"]) == -50.519
assert row["bid"] == 219.961
assert row["ask"] == 169.442
# Assert the number of trades
assert len(row["trades"]) == 151
# Assert specific details of the first trade
t = row["trades"][0]
assert list(t.keys()) == ["timestamp", "id", "type", "side", "price", "amount", "cost", "date"]
assert trades["id"][0] == t["id"]
assert int(trades["timestamp"][0]) == int(t["timestamp"])
assert t["side"] == "sell"
assert t["id"] == "313881442"
assert t["price"] == 234.72
def test_public_trades_put_volume_profile_into_ohlcv_candles(public_trades_list_simple, candles):
"""
Tests the integration of volume profile data into OHLCV candles.
This test verifies that `trades_to_volumeprofile_with_total_delta_bid_ask`
correctly calculates the volume profile and assigns the delta value from
the volume profile to the corresponding candle in the `candles` DataFrame.
"""
# Convert the trade list to a DataFrame
trades_df = trades_list_to_df(public_trades_list_simple[DEFAULT_TRADES_COLUMNS].values.tolist())
# Generate the volume profile with the specified bin size
df = trades_to_volumeprofile_with_total_delta_bid_ask(trades_df, scale=BIN_SIZE_SCALE)
# Assert the delta value in the second price bin of the volume profile
assert 0.14 == df.values.tolist()[1][2]
# Alternative assertion using `.iat` accessor (assuming correct assignment logic)
assert 0.14 == df["delta"].iat[1]
def test_public_trades_binned_big_sample_list(public_trades_list):
"""
Tests the `trades_to_volumeprofile_with_total_delta_bid_ask` function
with different bin sizes and verifies the generated DataFrame's structure and values.
"""
# Define the bin size for the first test
BIN_SIZE_SCALE = 0.05
# Convert the trade list to a DataFrame
trades = trades_list_to_df(public_trades_list[DEFAULT_TRADES_COLUMNS].values.tolist())
# Generate the volume profile with the specified bin size
df = trades_to_volumeprofile_with_total_delta_bid_ask(trades, scale=BIN_SIZE_SCALE)
# Assert that the DataFrame has the expected columns
assert df.columns.tolist() == [
"bid",
"ask",
"delta",
"bid_amount",
"ask_amount",
"total_volume",
"total_trades",
]
# Assert the number of rows in the DataFrame (expected 23 for this bin size)
assert len(df) == 23
# Assert that the index values are in ascending order and spaced correctly
assert all(df.index[i] < df.index[i + 1] for i in range(len(df) - 1))
assert df.index[0] + BIN_SIZE_SCALE == df.index[1]
assert (trades["price"].min() - BIN_SIZE_SCALE) < df.index[0] < trades["price"].max()
assert (df.index[0] + BIN_SIZE_SCALE) >= df.index[1]
assert (trades["price"].max() - BIN_SIZE_SCALE) < df.index[-1] < trades["price"].max()
# Assert specific values in the first and last rows of the DataFrame
assert 32 == df["bid"].iloc[0] # number of bid trades
assert 197.512 == df["bid_amount"].iloc[0] # total bid amount
assert 88.98 == df["ask_amount"].iloc[0] # total ask amount
assert 26 == df["ask"].iloc[0] # number of ask trades
assert -108.532 == pytest.approx(df["delta"].iloc[0]) # delta (ask amount - bid amount)
assert 3 == df["bid"].iloc[-1] # number of bid trades
assert 50.659 == df["bid_amount"].iloc[-1] # total bid amount
assert 108.21 == df["ask_amount"].iloc[-1] # total ask amount
assert 44 == df["ask"].iloc[-1] # number of ask trades
assert 57.551 == df["delta"].iloc[-1] # delta (ask amount - bid amount)
# Repeat the process with a larger bin size
BIN_SIZE_SCALE = 1
# Generate the volume profile with the larger bin size
df = trades_to_volumeprofile_with_total_delta_bid_ask(trades, scale=BIN_SIZE_SCALE)
# Assert the number of rows in the DataFrame (expected 2 for this bin size)
assert len(df) == 2
# Repeat similar assertions for index ordering and spacing
assert all(df.index[i] < df.index[i + 1] for i in range(len(df) - 1))
assert (trades["price"].min() - BIN_SIZE_SCALE) < df.index[0] < trades["price"].max()
assert (df.index[0] + BIN_SIZE_SCALE) >= df.index[1]
assert (trades["price"].max() - BIN_SIZE_SCALE) < df.index[-1] < trades["price"].max()
# Assert values in the first and last rows of the DataFrame with the larger bin size
assert 1667.0 == df.index[-1]
assert 710.98 == df["bid_amount"].iat[0]
assert 111 == df["bid"].iat[0]
assert 52.7199999 == pytest.approx(df["delta"].iat[0]) # delta
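The index assertions in this test imply that prices are floored to multiples of `scale` to form the bins. A toy illustration of that binning; this is an assumption about what `trades_to_volumeprofile_with_total_delta_bid_ask` does internally, not the actual implementation:

```python
import pandas as pd

# Hypothetical trades; assumed rule: floor each price to a multiple of `scale`.
trades = pd.DataFrame({
    "price": [234.72, 234.74, 235.10],
    "amount": [1.0, 2.0, 0.5],
})
scale = 0.5
trades["bin"] = (trades["price"] // scale) * scale
profile = trades.groupby("bin")["amount"].sum()
# bin 234.5 collects the first two trades (3.0), bin 235.0 the third (0.5) -
# an ascending, scale-spaced index like the one asserted above.
```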
def test_public_trades_config_max_trades(
default_conf, populate_dataframe_with_trades_dataframe, populate_dataframe_with_trades_trades
):
dataframe = populate_dataframe_with_trades_dataframe.copy()
trades = populate_dataframe_with_trades_trades.copy()
default_conf["exchange"]["use_public_trades"] = True
orderflow_config = {
"timeframe": "5m",
"orderflow": {
"cache_size": 1000,
"max_candles": 1,
"scale": 0.005,
"imbalance_volume": 0,
"imbalance_ratio": 3,
"stacked_imbalance_range": 3,
},
}
df, _ = populate_dataframe_with_trades(
OrderedDict(), default_conf | orderflow_config, dataframe, trades
)
assert df.delta.count() == 1
def test_public_trades_testdata_sanity(
candles,
public_trades_list,
public_trades_list_simple,
populate_dataframe_with_trades_dataframe,
populate_dataframe_with_trades_trades,
):
assert 10999 == len(candles)
assert 1000 == len(public_trades_list)
assert 999 == len(populate_dataframe_with_trades_dataframe)
assert 293532 == len(populate_dataframe_with_trades_trades)
assert 7 == len(public_trades_list_simple)
assert (
5
== public_trades_list_simple.loc[
public_trades_list_simple["side"].str.contains("sell"), "id"
].count()
)
assert (
2
== public_trades_list_simple.loc[
public_trades_list_simple["side"].str.contains("buy"), "id"
].count()
)
assert public_trades_list.columns.tolist() == [
"timestamp",
"id",
"type",
"side",
"price",
"amount",
"cost",
"date",
]
assert public_trades_list_simple.columns.tolist() == [
"timestamp",
"id",
"type",
"side",
"price",
"amount",
"cost",
"date",
]
assert populate_dataframe_with_trades_dataframe.columns.tolist() == [
"date",
"open",
"high",
"low",
"close",
"volume",
]
assert populate_dataframe_with_trades_trades.columns.tolist() == [
"timestamp",
"id",
"type",
"side",
"price",
"amount",
"cost",
"date",
]
View File
@ -62,6 +62,42 @@ def test_historic_ohlcv(mocker, default_conf, ohlcv_history):
assert historymock.call_args_list[0][1]["timeframe"] == "5m" assert historymock.call_args_list[0][1]["timeframe"] == "5m"
def test_historic_trades(mocker, default_conf, trades_history_df):
historymock = MagicMock(return_value=trades_history_df)
mocker.patch(
"freqtrade.data.history.datahandlers.featherdatahandler.FeatherDataHandler._trades_load",
historymock,
)
dp = DataProvider(default_conf, None)
# Live mode..
with pytest.raises(OperationalException, match=r"Exchange is not available to DataProvider\."):
dp.trades("UNITTEST/BTC", "5m")
exchange = get_patched_exchange(mocker, default_conf)
dp = DataProvider(default_conf, exchange)
data = dp.trades("UNITTEST/BTC", "5m")
assert isinstance(data, DataFrame)
assert len(data) == 0
# Switch to backtest mode
default_conf["runmode"] = RunMode.BACKTEST
default_conf["dataformat_trades"] = "feather"
exchange = get_patched_exchange(mocker, default_conf)
dp = DataProvider(default_conf, exchange)
data = dp.trades("UNITTEST/BTC", "5m")
assert isinstance(data, DataFrame)
assert len(data) == len(trades_history_df)
# Random other runmode
default_conf["runmode"] = RunMode.UTIL_EXCHANGE
dp = DataProvider(default_conf, None)
data = dp.trades("UNITTEST/BTC", "5m")
assert isinstance(data, DataFrame)
assert len(data) == 0
def test_historic_ohlcv_dataformat(mocker, default_conf, ohlcv_history): def test_historic_ohlcv_dataformat(mocker, default_conf, ohlcv_history):
hdf5loadmock = MagicMock(return_value=ohlcv_history) hdf5loadmock = MagicMock(return_value=ohlcv_history)
featherloadmock = MagicMock(return_value=ohlcv_history) featherloadmock = MagicMock(return_value=ohlcv_history)
@ -247,8 +283,8 @@ def test_emit_df(mocker, default_conf, ohlcv_history):
def test_refresh(mocker, default_conf): def test_refresh(mocker, default_conf):
refresh_mock = MagicMock() refresh_mock = mocker.patch(f"{EXMS}.refresh_latest_ohlcv")
mocker.patch(f"{EXMS}.refresh_latest_ohlcv", refresh_mock) mock_refresh_trades = mocker.patch(f"{EXMS}.refresh_latest_trades")
exchange = get_patched_exchange(mocker, default_conf, exchange="binance") exchange = get_patched_exchange(mocker, default_conf, exchange="binance")
timeframe = default_conf["timeframe"] timeframe = default_conf["timeframe"]
@ -258,7 +294,7 @@ def test_refresh(mocker, default_conf):
dp = DataProvider(default_conf, exchange) dp = DataProvider(default_conf, exchange)
dp.refresh(pairs) dp.refresh(pairs)
assert mock_refresh_trades.call_count == 0
assert refresh_mock.call_count == 1 assert refresh_mock.call_count == 1
assert len(refresh_mock.call_args[0]) == 1 assert len(refresh_mock.call_args[0]) == 1
assert len(refresh_mock.call_args[0][0]) == len(pairs) assert len(refresh_mock.call_args[0][0]) == len(pairs)
@ -266,11 +302,20 @@ def test_refresh(mocker, default_conf):
refresh_mock.reset_mock() refresh_mock.reset_mock()
dp.refresh(pairs, pairs_non_trad) dp.refresh(pairs, pairs_non_trad)
assert mock_refresh_trades.call_count == 0
assert refresh_mock.call_count == 1 assert refresh_mock.call_count == 1
assert len(refresh_mock.call_args[0]) == 1 assert len(refresh_mock.call_args[0]) == 1
assert len(refresh_mock.call_args[0][0]) == len(pairs) + len(pairs_non_trad) assert len(refresh_mock.call_args[0][0]) == len(pairs) + len(pairs_non_trad)
assert refresh_mock.call_args[0][0] == pairs + pairs_non_trad assert refresh_mock.call_args[0][0] == pairs + pairs_non_trad
# Test with public trades
refresh_mock.reset_mock()
default_conf["exchange"]["use_public_trades"] = True
dp.refresh(pairs, pairs_non_trad)
assert mock_refresh_trades.call_count == 1
assert refresh_mock.call_count == 1
def test_orderbook(mocker, default_conf, order_book_l2): def test_orderbook(mocker, default_conf, order_book_l2):
api_mock = MagicMock() api_mock = MagicMock()
View File
@ -154,10 +154,10 @@ def test_backtest_analysis_nomock(default_conf, mocker, caplog, testdatadir, use
assert "-3.5" in captured.out assert "-3.5" in captured.out
assert "50" in captured.out assert "50" in captured.out
assert "0" in captured.out assert "0" in captured.out
assert "0.01616" in captured.out assert "0.016" in captured.out
assert "34.049" in captured.out assert "34.049" in captured.out
assert "0.104411" in captured.out assert "0.104" in captured.out
assert "52.8292" in captured.out assert "52.829" in captured.out
# test group 1 # test group 1
args = get_args(base_args + ["--analysis-groups", "1"]) args = get_args(base_args + ["--analysis-groups", "1"])
View File
@ -151,9 +151,7 @@ def test_load_data_with_new_pair_1min(
) )
load_pair_history(datadir=tmp_path, timeframe="1m", pair="MEME/BTC", candle_type=candle_type) load_pair_history(datadir=tmp_path, timeframe="1m", pair="MEME/BTC", candle_type=candle_type)
assert file.is_file() assert file.is_file()
assert log_has_re( assert log_has_re(r'Download history data for "MEME/BTC", 1m, ' r"spot and store in .*", caplog)
r'\(0/1\) - Download history data for "MEME/BTC", 1m, ' r"spot and store in .*", caplog
)
def test_testdata_path(testdatadir) -> None: def test_testdata_path(testdatadir) -> None:
@ -677,7 +675,7 @@ def test_download_trades_history(
assert not _download_trades_history( assert not _download_trades_history(
data_handler=data_handler, exchange=exchange, pair="ETH/BTC", trading_mode=TradingMode.SPOT data_handler=data_handler, exchange=exchange, pair="ETH/BTC", trading_mode=TradingMode.SPOT
) )
assert log_has_re('Failed to download historic trades for pair: "ETH/BTC".*', caplog) assert log_has_re('Failed to download and store historic trades for pair: "ETH/BTC".*', caplog)
file2 = tmp_path / "XRP_ETH-trades.json.gz" file2 = tmp_path / "XRP_ETH-trades.json.gz"
copyfile(testdatadir / file2.name, file2) copyfile(testdatadir / file2.name, file2)
View File
@ -600,7 +600,7 @@ async def test__async_get_historic_ohlcv_binance(default_conf, mocker, caplog, c
@pytest.mark.parametrize( @pytest.mark.parametrize(
"pair,nominal_value,mm_ratio,amt", "pair,notional_value,mm_ratio,amt",
[ [
("XRP/USDT:USDT", 0.0, 0.025, 0), ("XRP/USDT:USDT", 0.0, 0.025, 0),
("BNB/USDT:USDT", 100.0, 0.0065, 0), ("BNB/USDT:USDT", 100.0, 0.0065, 0),
@ -615,12 +615,12 @@ def test_get_maintenance_ratio_and_amt_binance(
mocker, mocker,
leverage_tiers, leverage_tiers,
pair, pair,
nominal_value, notional_value,
mm_ratio, mm_ratio,
amt, amt,
): ):
mocker.patch(f"{EXMS}.exchange_has", return_value=True) mocker.patch(f"{EXMS}.exchange_has", return_value=True)
exchange = get_patched_exchange(mocker, default_conf, exchange="binance") exchange = get_patched_exchange(mocker, default_conf, exchange="binance")
exchange._leverage_tiers = leverage_tiers exchange._leverage_tiers = leverage_tiers
(result_ratio, result_amt) = exchange.get_maintenance_ratio_and_amt(pair, nominal_value) (result_ratio, result_amt) = exchange.get_maintenance_ratio_and_amt(pair, notional_value)
assert (round(result_ratio, 8), round(result_amt, 8)) == (mm_ratio, amt) assert (round(result_ratio, 8), round(result_amt, 8)) == (mm_ratio, amt)
View File
@ -8,8 +8,9 @@ from unittest.mock import MagicMock, Mock, PropertyMock, patch
import ccxt import ccxt
import pytest import pytest
from numpy import nan from numpy import nan
from pandas import DataFrame from pandas import DataFrame, to_datetime
from freqtrade.constants import DEFAULT_DATAFRAME_COLUMNS
from freqtrade.enums import CandleType, MarginMode, RunMode, TradingMode from freqtrade.enums import CandleType, MarginMode, RunMode, TradingMode
from freqtrade.exceptions import ( from freqtrade.exceptions import (
ConfigurationError, ConfigurationError,
@ -325,6 +326,22 @@ def test_validate_order_time_in_force(default_conf, mocker, caplog):
ex.validate_order_time_in_force(tif2) ex.validate_order_time_in_force(tif2)
def test_validate_orderflow(default_conf, mocker, caplog):
caplog.set_level(logging.INFO)
# Test bybit - as it doesn't support historic trades data.
ex = get_patched_exchange(mocker, default_conf, exchange="bybit")
mocker.patch(f"{EXMS}.exchange_has", return_value=True)
ex.validate_orderflow({"use_public_trades": False})
with pytest.raises(ConfigurationError, match=r"Trade data not available for.*"):
ex.validate_orderflow({"use_public_trades": True})
# Binance supports orderflow.
ex = get_patched_exchange(mocker, default_conf, exchange="binance")
ex.validate_orderflow({"use_public_trades": False})
ex.validate_orderflow({"use_public_trades": True})
@pytest.mark.parametrize( @pytest.mark.parametrize(
"price,precision_mode,precision,expected", "price,precision_mode,precision,expected",
[ [
@ -2371,6 +2388,163 @@ def test_refresh_latest_ohlcv(mocker, default_conf, caplog, candle_type) -> None
assert len(res) == 1 assert len(res) == 1
@pytest.mark.parametrize("candle_type", [CandleType.FUTURES, CandleType.SPOT])
def test_refresh_latest_trades(
mocker, default_conf, caplog, candle_type, tmp_path, time_machine
) -> None:
time_machine.move_to(dt_now(), tick=False)
trades = [
{
# unix timestamp ms
"timestamp": dt_ts(dt_now() - timedelta(minutes=5)),
"amount": 16.512,
"cost": 10134.07488,
"fee": None,
"fees": [],
"id": "354669639",
"order": None,
"price": 613.74,
"side": "sell",
"takerOrMaker": None,
"type": None,
},
{
"timestamp": dt_ts(), # unix timestamp ms
"amount": 12.512,
"cost": 1000,
"fee": None,
"fees": [],
"id": "354669640",
"order": None,
"price": 613.84,
"side": "buy",
"takerOrMaker": None,
"type": None,
},
]
caplog.set_level(logging.DEBUG)
use_trades_conf = default_conf
use_trades_conf["exchange"]["use_public_trades"] = True
use_trades_conf["datadir"] = tmp_path
use_trades_conf["orderflow"] = {"max_candles": 1500}
exchange = get_patched_exchange(mocker, use_trades_conf)
exchange._api_async.fetch_trades = get_mock_coro(trades)
exchange._ft_has["exchange_has_overrides"]["fetchTrades"] = True
pairs = [("IOTA/USDT:USDT", "5m", candle_type), ("XRP/USDT:USDT", "5m", candle_type)]
# empty dicts
assert not exchange._trades
res = exchange.refresh_latest_trades(pairs, cache=False)
# No caching
assert not exchange._trades
assert len(res) == len(pairs)
assert exchange._api_async.fetch_trades.call_count == 4
exchange._api_async.fetch_trades.reset_mock()
exchange.required_candle_call_count = 2
res = exchange.refresh_latest_trades(pairs)
assert len(res) == len(pairs)
assert log_has(f"Refreshing TRADES data for {len(pairs)} pairs", caplog)
assert exchange._trades
assert exchange._api_async.fetch_trades.call_count == 4
exchange._api_async.fetch_trades.reset_mock()
for pair in pairs:
assert isinstance(exchange.trades(pair), DataFrame)
assert len(exchange.trades(pair)) > 0
# trades function should return a different object on each call
# if copy is "True"
assert exchange.trades(pair) is not exchange.trades(pair)
assert exchange.trades(pair) is not exchange.trades(pair, copy=True)
assert exchange.trades(pair, copy=True) is not exchange.trades(pair, copy=True)
assert exchange.trades(pair, copy=False) is exchange.trades(pair, copy=False)
# test caching
ohlcv = [
[
dt_ts(dt_now() - timedelta(minutes=5)), # unix timestamp ms
1, # open
2, # high
3, # low
4, # close
5, # volume (in quote currency)
],
[
dt_ts(), # unix timestamp ms
3, # open
1, # high
4, # low
6, # close
5, # volume (in quote currency)
],
]
cols = DEFAULT_DATAFRAME_COLUMNS
trades_df = DataFrame(ohlcv, columns=cols)
trades_df["date"] = to_datetime(trades_df["date"], unit="ms", utc=True)
trades_df["date"] = trades_df["date"].apply(lambda date: timeframe_to_prev_date("5m", date))
exchange._klines[pair] = trades_df
res = exchange.refresh_latest_trades(
[("IOTA/USDT:USDT", "5m", candle_type), ("XRP/USDT:USDT", "5m", candle_type)]
)
assert len(res) == 0
assert exchange._api_async.fetch_trades.call_count == 0
caplog.clear()
# Reset refresh times
for pair in pairs:
# test caching with "expired" candle
trades = [
{
# unix timestamp ms
"timestamp": dt_ts(exchange._klines[pair].iloc[-1].date - timedelta(minutes=5)),
"amount": 16.512,
"cost": 10134.07488,
"fee": None,
"fees": [],
"id": "354669639",
"order": None,
"price": 613.74,
"side": "sell",
"takerOrMaker": None,
"type": None,
}
]
trades_df = DataFrame(trades)
trades_df["date"] = to_datetime(trades_df["timestamp"], unit="ms", utc=True)
exchange._trades[pair] = trades_df
res = exchange.refresh_latest_trades(
[("IOTA/USDT:USDT", "5m", candle_type), ("XRP/USDT:USDT", "5m", candle_type)]
)
assert len(res) == len(pairs)
assert exchange._api_async.fetch_trades.call_count == 4
# cached data present - but caching disabled
exchange._api_async.fetch_trades.reset_mock()
exchange.required_candle_call_count = 1
pairlist = [
("IOTA/ETH", "5m", candle_type),
("XRP/ETH", "5m", candle_type),
("XRP/ETH", "1d", candle_type),
]
res = exchange.refresh_latest_trades(pairlist, cache=False)
assert len(res) == 3
assert exchange._api_async.fetch_trades.call_count == 6
# Test the same again, should NOT return from cache!
exchange._api_async.fetch_trades.reset_mock()
res = exchange.refresh_latest_trades(pairlist, cache=False)
assert len(res) == 3
assert exchange._api_async.fetch_trades.call_count == 6
exchange._api_async.fetch_trades.reset_mock()
caplog.clear()
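The caching assertions above all reduce to one rule: cached trades are refreshed only once the current candle has closed. A hedged sketch of that expiry check, reusing the same `timeframe_to_prev_date` and `dt_now` helpers the test itself calls (the function name and wiring are assumptions):

```python
from pandas import DataFrame

from freqtrade.exchange import timeframe_to_prev_date
from freqtrade.util import dt_now


def trades_need_refresh(cached_trades: DataFrame, timeframe: str) -> bool:
    """True when the newest cached trade predates the currently open candle."""
    current_candle_start = timeframe_to_prev_date(timeframe, dt_now())
    # A trade inside the current candle means the cache is still fresh.
    return cached_trades["date"].iloc[-1] < current_candle_start
```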
@pytest.mark.parametrize("candle_type", [CandleType.FUTURES, CandleType.MARK, CandleType.SPOT]) @pytest.mark.parametrize("candle_type", [CandleType.FUTURES, CandleType.MARK, CandleType.SPOT])
def test_refresh_latest_ohlcv_cache(mocker, default_conf, candle_type, time_machine) -> None: def test_refresh_latest_ohlcv_cache(mocker, default_conf, candle_type, time_machine) -> None:
start = datetime(2021, 8, 1, 0, 0, 0, 0, tzinfo=timezone.utc) start = datetime(2021, 8, 1, 0, 0, 0, 0, tzinfo=timezone.utc)

View File

@ -291,9 +291,10 @@ def test_log_results_if_loss_improves(hyperopt, capsys) -> None:
"is_best": True, "is_best": True,
} }
) )
hyperopt._hyper_out.print()
out, _err = capsys.readouterr() out, _err = capsys.readouterr()
assert all( assert all(
x in out for x in ["Best", "2/2", " 1", "0.10%", "0.00100000 BTC (1.00%)", "00:20:00"] x in out for x in ["Best", "2/2", "1", "0.10%", "0.00100000 BTC (1.00%)", "0:20:00"]
) )

View File

@ -147,7 +147,7 @@ def test_lookahead_helper_text_table_lookahead_analysis_instances(lookahead_conf
instance = LookaheadAnalysis(lookahead_conf, strategy_obj) instance = LookaheadAnalysis(lookahead_conf, strategy_obj)
instance.current_analysis = analysis instance.current_analysis = analysis
_table, _headers, data = LookaheadAnalysisSubFunctions.text_table_lookahead_analysis_instances( data = LookaheadAnalysisSubFunctions.text_table_lookahead_analysis_instances(
lookahead_conf, [instance] lookahead_conf, [instance]
) )
@ -163,14 +163,14 @@ def test_lookahead_helper_text_table_lookahead_analysis_instances(lookahead_conf
analysis.false_exit_signals = 10 analysis.false_exit_signals = 10
instance = LookaheadAnalysis(lookahead_conf, strategy_obj) instance = LookaheadAnalysis(lookahead_conf, strategy_obj)
instance.current_analysis = analysis instance.current_analysis = analysis
_table, _headers, data = LookaheadAnalysisSubFunctions.text_table_lookahead_analysis_instances( data = LookaheadAnalysisSubFunctions.text_table_lookahead_analysis_instances(
lookahead_conf, [instance] lookahead_conf, [instance]
) )
assert data[0][2].__contains__("error") assert data[0][2].__contains__("error")
# edit it into not showing an error # edit it into not showing an error
instance.failed_bias_check = False instance.failed_bias_check = False
_table, _headers, data = LookaheadAnalysisSubFunctions.text_table_lookahead_analysis_instances( data = LookaheadAnalysisSubFunctions.text_table_lookahead_analysis_instances(
lookahead_conf, [instance] lookahead_conf, [instance]
) )
assert data[0][0] == "strategy_test_v3_with_lookahead_bias.py" assert data[0][0] == "strategy_test_v3_with_lookahead_bias.py"
@ -183,7 +183,7 @@ def test_lookahead_helper_text_table_lookahead_analysis_instances(lookahead_conf
analysis.false_indicators.append("falseIndicator1") analysis.false_indicators.append("falseIndicator1")
analysis.false_indicators.append("falseIndicator2") analysis.false_indicators.append("falseIndicator2")
_table, _headers, data = LookaheadAnalysisSubFunctions.text_table_lookahead_analysis_instances( data = LookaheadAnalysisSubFunctions.text_table_lookahead_analysis_instances(
lookahead_conf, [instance] lookahead_conf, [instance]
) )
@ -193,7 +193,7 @@ def test_lookahead_helper_text_table_lookahead_analysis_instances(lookahead_conf
assert len(data) == 1 assert len(data) == 1
# check amount of multiple rows # check amount of multiple rows
_table, _headers, data = LookaheadAnalysisSubFunctions.text_table_lookahead_analysis_instances( data = LookaheadAnalysisSubFunctions.text_table_lookahead_analysis_instances(
lookahead_conf, [instance, instance, instance] lookahead_conf, [instance, instance, instance]
) )
assert len(data) == 3 assert len(data) == 3

View File

@ -59,7 +59,7 @@ def _backup_file(file: Path, copy_file: bool = False) -> None:
copyfile(file_swp, file) copyfile(file_swp, file)
def test_text_table_bt_results(): def test_text_table_bt_results(capsys):
results = pd.DataFrame( results = pd.DataFrame(
{ {
"pair": ["ETH/BTC", "ETH/BTC", "ETH/BTC"], "pair": ["ETH/BTC", "ETH/BTC", "ETH/BTC"],
@ -72,7 +72,8 @@ def test_text_table_bt_results():
pair_results = generate_pair_metrics( pair_results = generate_pair_metrics(
["ETH/BTC"], stake_currency="BTC", starting_balance=4, results=results ["ETH/BTC"], stake_currency="BTC", starting_balance=4, results=results
) )
text = text_table_bt_results(pair_results, stake_currency="BTC") text_table_bt_results(pair_results, stake_currency="BTC", title="title")
text = capsys.readouterr().out
re.search( re.search(
r".* Pair .* Trades .* Avg Profit % .* Tot Profit BTC .* Tot Profit % .* " r".* Pair .* Trades .* Avg Profit % .* Tot Profit BTC .* Tot Profit % .* "
r"Avg Duration .* Win Draw Loss Win% .*", r"Avg Duration .* Win Draw Loss Win% .*",
@ -435,7 +436,7 @@ def test_calc_streak(testdatadir):
assert calc_streak(bt_data) == (7, 18) assert calc_streak(bt_data) == (7, 18)
def test_text_table_exit_reason(): def test_text_table_exit_reason(capsys):
results = pd.DataFrame( results = pd.DataFrame(
{ {
"pair": ["ETH/BTC", "ETH/BTC", "ETH/BTC"], "pair": ["ETH/BTC", "ETH/BTC", "ETH/BTC"],
@ -452,7 +453,8 @@ def test_text_table_exit_reason():
exit_reason_stats = generate_tag_metrics( exit_reason_stats = generate_tag_metrics(
"exit_reason", starting_balance=22, results=results, skip_nan=False "exit_reason", starting_balance=22, results=results, skip_nan=False
) )
text = text_table_tags("exit_tag", exit_reason_stats, "BTC") text_table_tags("exit_tag", exit_reason_stats, "BTC")
text = capsys.readouterr().out
assert re.search( assert re.search(
r".* Exit Reason .* Exits .* Avg Profit % .* Tot Profit BTC .* Tot Profit % .* " r".* Exit Reason .* Exits .* Avg Profit % .* Tot Profit BTC .* Tot Profit % .* "
@ -460,11 +462,11 @@ def test_text_table_exit_reason():
text, text,
) )
assert re.search( assert re.search(
r".* roi .* 2 .* 15.00 .* 0.60000000 .* 2.73 .* 0:20:00 .* 2 0 0 100 .*", r".* roi .* 2 .* 15.0 .* 0.60000000 .* 2.73 .* 0:20:00 .* 2 0 0 100 .*",
text, text,
) )
assert re.search( assert re.search(
r".* stop_loss .* 1 .* -10.00 .* -0.20000000 .* -0.91 .* 0:10:00 .* 0 0 1 0 .*", r".* stop_loss .* 1 .* -10.0 .* -0.20000000 .* -0.91 .* 0:10:00 .* 0 0 1 0 .*",
text, text,
) )
assert re.search( assert re.search(
@ -507,7 +509,7 @@ def test_generate_sell_reason_stats():
assert stop_result["profit_mean_pct"] == round(stop_result["profit_mean"] * 100, 2) assert stop_result["profit_mean_pct"] == round(stop_result["profit_mean"] * 100, 2)
def test_text_table_strategy(testdatadir): def test_text_table_strategy(testdatadir, capsys):
filename = testdatadir / "backtest_results/backtest-result_multistrat.json" filename = testdatadir / "backtest_results/backtest-result_multistrat.json"
bt_res_data = load_backtest_stats(filename) bt_res_data = load_backtest_stats(filename)
@ -515,8 +517,10 @@ def test_text_table_strategy(testdatadir):
strategy_results = generate_strategy_comparison(bt_stats=bt_res_data["strategy"]) strategy_results = generate_strategy_comparison(bt_stats=bt_res_data["strategy"])
assert strategy_results == bt_res_data_comparison assert strategy_results == bt_res_data_comparison
text = text_table_strategy(strategy_results, "BTC") text_table_strategy(strategy_results, "BTC", "STRATEGY SUMMARY")
captured = capsys.readouterr()
text = captured.out
assert re.search( assert re.search(
r".* Strategy .* Trades .* Avg Profit % .* Tot Profit BTC .* Tot Profit % .* " r".* Strategy .* Trades .* Avg Profit % .* Tot Profit BTC .* Tot Profit % .* "
r"Avg Duration .* Win Draw Loss Win% .* Drawdown .*", r"Avg Duration .* Win Draw Loss Win% .* Drawdown .*",
@ -534,12 +538,12 @@ def test_text_table_strategy(testdatadir):
) )
def test_generate_edge_table(): def test_generate_edge_table(capsys):
results = {} results = {}
results["ETH/BTC"] = PairInfo(-0.01, 0.60, 2, 1, 3, 10, 60) results["ETH/BTC"] = PairInfo(-0.01, 0.60, 2, 1, 3, 10, 60)
text = generate_edge_table(results) generate_edge_table(results)
assert text.count("+") == 7 text = capsys.readouterr().out
assert text.count("| ETH/BTC |") == 1 assert re.search(r".* ETH/BTC .*", text)
assert re.search(r".* Risk Reward Ratio .* Required Risk Reward .* Expectancy .*", text) assert re.search(r".* Risk Reward Ratio .* Required Risk Reward .* Expectancy .*", text)

View File

@ -105,9 +105,7 @@ def test_recursive_helper_text_table_recursive_analysis_instances(recursive_conf
instance = RecursiveAnalysis(recursive_conf, strategy_obj) instance = RecursiveAnalysis(recursive_conf, strategy_obj)
instance.dict_recursive = dict_diff instance.dict_recursive = dict_diff
_table, _headers, data = RecursiveAnalysisSubFunctions.text_table_recursive_analysis_instances( data = RecursiveAnalysisSubFunctions.text_table_recursive_analysis_instances([instance])
[instance]
)
# check row contents for a try that has too few signals # check row contents for a try that has too few signals
assert data[0][0] == "rsi" assert data[0][0] == "rsi"
@ -118,9 +116,7 @@ def test_recursive_helper_text_table_recursive_analysis_instances(recursive_conf
dict_diff = dict() dict_diff = dict()
instance = RecursiveAnalysis(recursive_conf, strategy_obj) instance = RecursiveAnalysis(recursive_conf, strategy_obj)
instance.dict_recursive = dict_diff instance.dict_recursive = dict_diff
_table, _headers, data = RecursiveAnalysisSubFunctions.text_table_recursive_analysis_instances( data = RecursiveAnalysisSubFunctions.text_table_recursive_analysis_instances([instance])
[instance]
)
assert len(data) == 0 assert len(data) == 0

View File

@ -27,7 +27,7 @@ from freqtrade.configuration.load_config import (
) )
from freqtrade.constants import DEFAULT_DB_DRYRUN_URL, DEFAULT_DB_PROD_URL, ENV_VAR_PREFIX from freqtrade.constants import DEFAULT_DB_DRYRUN_URL, DEFAULT_DB_PROD_URL, ENV_VAR_PREFIX
from freqtrade.enums import RunMode from freqtrade.enums import RunMode
from freqtrade.exceptions import OperationalException from freqtrade.exceptions import ConfigurationError, OperationalException
from tests.conftest import ( from tests.conftest import (
CURRENT_TEST_STRATEGY, CURRENT_TEST_STRATEGY,
log_has, log_has,
@ -1084,6 +1084,29 @@ def test__validate_consumers(default_conf, caplog) -> None:
assert log_has_re("To receive best performance with external data.*", caplog) assert log_has_re("To receive best performance with external data.*", caplog)
def test__validate_orderflow(default_conf) -> None:
conf = deepcopy(default_conf)
conf["exchange"]["use_public_trades"] = True
with pytest.raises(
ConfigurationError,
match="Orderflow is a required configuration key when using public trades.",
):
validate_config_consistency(conf)
conf.update(
{
"orderflow": {
"scale": 0.5,
"stacked_imbalance_range": 3,
"imbalance_volume": 100,
"imbalance_ratio": 3,
}
}
)
# Should pass.
validate_config_consistency(conf)
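The expected error message above implies a consistency check along these lines; a sketch only, with `_validate_orderflow` as an assumed helper name called from `validate_config_consistency`:

```python
from typing import Any, Dict

from freqtrade.exceptions import ConfigurationError


def _validate_orderflow(conf: Dict[str, Any]) -> None:
    # Public trades feed the orderflow calculations, so enabling the
    # feature also requires its configuration block.
    if conf.get("exchange", {}).get("use_public_trades"):
        if "orderflow" not in conf:
            raise ConfigurationError(
                "Orderflow is a required configuration key when using public trades."
            )
```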
def test_load_config_test_comments() -> None: def test_load_config_test_comments() -> None:
""" """
Load config with comments Load config with comments

1 tests/testdata/orderflow/candles.json vendored Normal file

File diff suppressed because one or more lines are too long

Binary file not shown.

File diff suppressed because it is too large

View File

@ -0,0 +1,8 @@
,timestamp,id,type,side,price,amount,cost,date
0,1675311000092,1588563957,,buy,23438.0,0.013,0,2023-02-02 04:10:00.092000+00:00
1,1675311000211,1588563958,,sell,23437.5,0.001,0,2023-02-02 04:10:00.211000+00:00
2,1675311000335,1588563959,,sell,23437.5,0.196,0,2023-02-02 04:10:00.335000+00:00
3,1675311000769,1588563960,,sell,23437.5,0.046,0,2023-02-02 04:10:00.769000+00:00
4,1675311000773,1588563961,,buy,23438.0,0.127,0,2023-02-02 04:10:00.773000+00:00
5,1675311000774,1588563959,,sell,23437.5,0.001,0,2023-02-02 04:10:00.774000+00:00
6,1675311000775,1588563960,,sell,23437.5,0.001,0,2023-02-02 04:10:00.775000+00:00
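The fixture above is an ordinary pandas-style CSV (leading index column plus the ccxt trade fields). One way to load it for inspection; the file path is a guess, since the diff view does not show this fixture's name:

```python
import pandas as pd

# Path is assumed - the diff does not name this fixture file.
trades = pd.read_csv(
    "tests/testdata/orderflow/trades.csv", index_col=0, parse_dates=["date"]
)
print(trades[["side", "price", "amount"]])
```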