Merge remote-tracking branch 'origin/develop' into pr/Axel-CH/9267

Matthias 2024-01-02 12:11:44 +01:00
commit 7ba9aa9acd
43 changed files with 348 additions and 204 deletions

View File

@ -44,7 +44,6 @@ jobs:
- name: pip cache (linux)
uses: actions/cache@v3
if: runner.os == 'Linux'
with:
path: ~/.cache/pip
key: test-${{ matrix.os }}-${{ matrix.python-version }}-pip
@ -55,7 +54,6 @@ jobs:
cd build_helpers && ./install_ta-lib.sh ${HOME}/dependencies/; cd ..
- name: Installation - *nix
if: runner.os == 'Linux'
run: |
python -m pip install --upgrade pip wheel
export LD_LIBRARY_PATH=${HOME}/dependencies/lib:$LD_LIBRARY_PATH
@ -367,7 +365,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.9"
python-version: "3.11"
- name: Cache_dependencies
uses: actions/cache@v3
@ -378,7 +376,6 @@ jobs:
- name: pip cache (linux)
uses: actions/cache@v3
if: runner.os == 'Linux'
with:
path: ~/.cache/pip
key: test-${{ matrix.os }}-${{ matrix.python-version }}-pip
@ -389,7 +386,6 @@ jobs:
cd build_helpers && ./install_ta-lib.sh ${HOME}/dependencies/; cd ..
- name: Installation - *nix
if: runner.os == 'Linux'
run: |
python -m pip install --upgrade pip wheel
export LD_LIBRARY_PATH=${HOME}/dependencies/lib:$LD_LIBRARY_PATH
@ -402,7 +398,7 @@ jobs:
env:
CI_WEB_PROXY: http://152.67.78.211:13128
run: |
pytest --random-order --cov=freqtrade --cov-config=.coveragerc --longrun
pytest --random-order --longrun --durations 20 -n auto --dist loadscope
# Notify only once - when CI completes (and after deploy) in case it's successful

View File

@ -16,7 +16,7 @@ repos:
additional_dependencies:
- types-cachetools==5.3.0.7
- types-filelock==3.2.7
- types-requests==2.31.0.10
- types-requests==2.31.0.20231231
- types-tabulate==0.9.0.3
- types-python-dateutil==2.8.19.14
- SQLAlchemy==2.0.23

View File

@ -52,7 +52,7 @@
"train_period_days": 15,
"backtest_period_days": 7,
"live_retrain_hours": 0,
"identifier": "uniqe-id",
"identifier": "unique-id",
"feature_parameters": {
"include_timeframes": [
"3m",

View File

@ -572,7 +572,7 @@ In addition to fiat currencies, a range of crypto currencies is supported.
The valid values are:
```json
"BTC", "ETH", "XRP", "LTC", "BCH", "USDT"
"BTC", "ETH", "XRP", "LTC", "BCH", "BNB"
```
## Using Dry-run mode

View File

@ -127,6 +127,8 @@ Freqtrade will not attempt to change these settings.
## Kraken
Kraken supports [time_in_force](configuration.md#understand-order_time_in_force) with the settings "GTC" (good-till-cancelled), "IOC" (immediate-or-cancel) and "PO" (post-only).
!!! Tip "Stoploss on Exchange"
Kraken supports `stoploss_on_exchange` and can use both stop-loss-market and stop-loss-limit orders. It provides great advantages, so we recommend using it.
You can use either `"limit"` or `"market"` in the `order_types.stoploss` configuration setting to decide which type to use.
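A minimal strategy-level sketch of these settings (illustrative values only; the keys follow the standard `order_types` layout):
```python
# Illustrative only: strategy-level order_types for Kraken, assuming the
# standard keys. "stoploss" may be "market" (stop-loss-market) or "limit"
# (stop-loss-limit), as described above.
order_types = {
    "entry": "limit",
    "exit": "limit",
    "stoploss": "market",
    "stoploss_on_exchange": True,
    "stoploss_on_exchange_interval": 60,
}
```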

View File

@ -68,7 +68,7 @@ Backtesting mode requires [downloading the necessary data](#downloading-data-to-
This way, you can return to using any model you wish by simply specifying the `identifier`.
!!! Note
Backtesting calls `set_freqai_targets()` one time for each backtest window (where the number of windows is the full backtest timerange divided by the `backtest_period_days` parameter). Doing this means that the targets simulate dry/live behavior without look ahead bias. However, the definition of the features in `feature_engineering_*()` is performed once on the entire backtest timerange. This means that you should be sure that features do look-ahead into the future.
Backtesting calls `set_freqai_targets()` one time for each backtest window (where the number of windows is the full backtest timerange divided by the `backtest_period_days` parameter). Doing this means that the targets simulate dry/live behavior without look-ahead bias. However, the definition of the features in `feature_engineering_*()` is performed once on the entire training timerange. This means that you should be sure that features do not look ahead into the future.
More details about look-ahead bias can be found in [Common Mistakes](strategy-customization.md#common-mistakes-when-developing-strategies).
---

View File

@ -1,6 +1,6 @@
markdown==3.5.1
mkdocs==1.5.3
mkdocs-material==9.5.2
mkdocs-material==9.5.3
mdx_truly_sane_lists==1.3
pymdown-extensions==10.5
pymdown-extensions==10.7
jinja2==3.1.2

View File

@ -489,7 +489,7 @@ The helper function `stoploss_from_absolute()` can be used to convert from an ab
dataframe, _ = self.dp.get_analyzed_dataframe(pair, self.timeframe)
trade_date = timeframe_to_prev_date(self.timeframe, trade.open_date_utc)
candle = dataframe.iloc[-1].squeeze()
sign = 1 if trade.is_short else -1
side = 1 if trade.is_short else -1
return stoploss_from_absolute(current_rate + (side * candle['atr'] * 2),
current_rate, is_short=trade.is_short,
leverage=trade.leverage)
@ -760,22 +760,31 @@ The `position_adjustment_enable` strategy property enables the usage of `adjust_
For performance reasons, it's disabled by default and freqtrade will show a warning message on startup if enabled.
`adjust_trade_position()` can be used to perform additional orders, for example to manage risk with DCA (Dollar Cost Averaging) or to increase or decrease positions.
The `max_entry_position_adjustment` property is used to limit the number of additional entries per trade (on top of the first entry order) that the bot can execute. By default, the value is -1, which means the bot has no limit on the number of adjustment entries.
The strategy is expected to return a stake_amount (in stake currency) between `min_stake` and `max_stake` if and when an additional entry order should be made (position is increased -> buy order for long trades, sell order for short trades).
If there are not enough funds in the wallet (the return value is above `max_stake`) then the signal will be ignored.
Additional orders also result in additional fees and those orders don't count towards `max_open_trades`.
This callback is **not** called when there is an open order (either buy or sell) waiting for execution.
`adjust_trade_position()` is called very frequently for the duration of a trade, so you must keep your implementation as performant as possible.
Additional entries are ignored once you have reached the maximum number of extra entries set via `max_entry_position_adjustment`, but the callback is still called to look for partial exits.
Position adjustments will always be applied in the direction of the trade, so a positive value will always increase your position (negative values will decrease your position), no matter if it's a long or short trade.
Modifications to leverage are not possible, and the stake-amount returned is assumed to be before applying leverage.
### Increase position
The strategy is expected to return a positive **stake_amount** (in stake currency) between `min_stake` and `max_stake` if and when an additional entry order should be made (position is increased -> buy order for long trades, sell order for short trades).
If there are not enough funds in the wallet (the return value is above `max_stake`) then the signal will be ignored.
The `max_entry_position_adjustment` property is used to limit the number of additional entries per trade (on top of the first entry order) that the bot can execute. By default, the value is -1, which means the bot has no limit on the number of adjustment entries.
Additional entries are ignored once you have reached the maximum number of extra entries set via `max_entry_position_adjustment`, but the callback is still called to look for partial exits.
### Decrease position
The strategy is expected to return a negative stake_amount (in stake currency) for a partial exit.
Returning the full owned stake at that point, based on the current price (`-(trade.amount / trade.leverage) * current_exit_rate`), results in a full exit.
Returning a value more than the above (so remaining stake_amount would become negative) will result in the bot ignoring the signal.
!!! Note "About stake size"
Using fixed stake size means it will be the amount used for the first order, just like without position adjustment.
If you wish to buy additional orders with DCA, then make sure to leave enough funds in the wallet for that.
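A condensed sketch of the callback described above (signature abbreviated; the thresholds and the 50-unit stake are arbitrary illustrations, not a recommendation):
```python
# Illustrative sketch of adjust_trade_position(); the real callback receives
# additional keyword arguments, which are absorbed by **kwargs here.
def adjust_trade_position(self, trade, current_time, current_rate,
                          current_profit, min_stake, max_stake, **kwargs):
    # Increase position: a positive return value triggers an additional entry
    # order (buy for longs, sell for shorts), capped by max_stake.
    if current_profit < -0.05 and trade.nr_of_successful_entries == 1:
        return 50  # stake currency
    # Decrease position: a negative return value is treated as a partial exit.
    if current_profit > 0.10 and trade.nr_of_successful_exits == 0:
        return -(trade.stake_amount / 2)
    # Return None to do nothing on this iteration.
    return None
```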

View File

@ -367,6 +367,11 @@ class AwesomeStrategy(IStrategy):
}
```
??? info "Orders that don't fill immediately"
`minimal_roi` will take the `trade.open_date` as reference, which is the time the trade was initialized / the first order for this trade was placed.
This will also hold true for limit orders that don't fill immediately (usually in combination with "off-spot" prices through `custom_entry_price()`), as well as for cases where the initial order is replaced through `adjust_entry_price()`.
The time used will still be from the initial `trade.open_date` (when the initial order was first placed), not from the newly placed order date.
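For context, an "off-spot" initial price is typically set through `custom_entry_price()`; a minimal sketch follows (parameter list shortened, the 0.5% offset is an arbitrary assumption). The resulting limit order may sit unfilled for a while, yet `minimal_roi` still counts from `trade.open_date`:
```python
# Illustrative sketch: bid 0.5% away from the proposed rate, so the initial
# limit order may not fill immediately (extra callback args go to **kwargs).
def custom_entry_price(self, pair, current_time, proposed_rate,
                       entry_tag, side, **kwargs):
    return proposed_rate * (0.995 if side == "long" else 1.005)
```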
### Stoploss
Setting a stoploss is highly recommended to protect your capital from strong moves against you.

View File

@ -1,5 +1,5 @@
""" Freqtrade bot """
__version__ = '2023.12-dev'
__version__ = '2024.1-dev'
if 'dev' in __version__:
from pathlib import Path

View File

@ -105,7 +105,7 @@ SUPPORTED_FIAT = [
"EUR", "GBP", "HKD", "HUF", "IDR", "ILS", "INR", "JPY",
"KRW", "MXN", "MYR", "NOK", "NZD", "PHP", "PKR", "PLN",
"RUB", "UAH", "SEK", "SGD", "THB", "TRY", "TWD", "ZAR",
"USD", "BTC", "ETH", "XRP", "LTC", "BCH"
"USD", "BTC", "ETH", "XRP", "LTC", "BCH", "BNB"
]
MINIMAL_CONFIG = {

View File

@ -175,36 +175,40 @@ def _get_backtest_files(dirname: Path) -> List[Path]:
return list(reversed(sorted(dirname.glob('backtest-result-*-[0-9][0-9].json'))))
def get_backtest_result(filename: Path) -> List[BacktestHistoryEntryType]:
"""
Get backtest result read from metadata file
"""
def _extract_backtest_result(filename: Path) -> List[BacktestHistoryEntryType]:
metadata = load_backtest_metadata(filename)
return [
{
'filename': filename.stem,
'strategy': s,
'notes': v.get('notes', ''),
'run_id': v['run_id'],
'notes': v.get('notes', ''),
# Backtest "run" time
'backtest_start_time': v['backtest_start_time'],
} for s, v in load_backtest_metadata(filename).items()
# Backtest timerange
'backtest_start_ts': v.get('backtest_start_ts', None),
'backtest_end_ts': v.get('backtest_end_ts', None),
'timeframe': v.get('timeframe', None),
'timeframe_detail': v.get('timeframe_detail', None),
} for s, v in metadata.items()
]
def get_backtest_result(filename: Path) -> List[BacktestHistoryEntryType]:
"""
Get backtest result read from metadata file
"""
return _extract_backtest_result(filename)
def get_backtest_resultlist(dirname: Path) -> List[BacktestHistoryEntryType]:
"""
Get list of backtest results read from metadata files
"""
return [
{
'filename': filename.stem,
'strategy': s,
'run_id': v['run_id'],
'notes': v.get('notes', ''),
'backtest_start_time': v['backtest_start_time'],
}
result
for filename in _get_backtest_files(dirname)
for s, v in load_backtest_metadata(filename).items()
if v
for result in _extract_backtest_result(filename)
]

View File

@ -311,11 +311,13 @@ class DataProvider:
timerange = TimeRange.parse_timerange(None if self._config.get(
'timerange') is None else str(self._config.get('timerange')))
# It is not necessary to add the training candles, as they
# were already added at the beginning of the backtest.
startup_candles = self.get_required_startup(str(timeframe), False)
startup_candles = self.get_required_startup(str(timeframe))
tf_seconds = timeframe_to_seconds(str(timeframe))
timerange.subtract_start(tf_seconds * startup_candles)
logger.info(f"Loading data for {pair} {timeframe} "
f"from {timerange.start_fmt} to {timerange.stop_fmt}")
self.__cached_pairs_backtesting[saved_pair] = load_pair_history(
pair=pair,
timeframe=timeframe,
@ -327,7 +329,7 @@ class DataProvider:
)
return self.__cached_pairs_backtesting[saved_pair].copy()
def get_required_startup(self, timeframe: str, add_train_candles: bool = True) -> int:
def get_required_startup(self, timeframe: str) -> int:
freqai_config = self._config.get('freqai', {})
if not freqai_config.get('enabled', False):
return self._config.get('startup_candle_count', 0)
@ -337,12 +339,11 @@ class DataProvider:
# make sure the startupcandles is at least the set maximum indicator periods
self._config['startup_candle_count'] = max(startup_candles, max(indicator_periods))
tf_seconds = timeframe_to_seconds(timeframe)
train_candles = 0
if add_train_candles:
train_candles = freqai_config['train_period_days'] * 86400 / tf_seconds
train_candles = freqai_config['train_period_days'] * 86400 / tf_seconds
total_candles = int(self._config['startup_candle_count'] + train_candles)
logger.info(f'Increasing startup_candle_count for freqai to {total_candles}')
return total_candles
logger.info(
f'Increasing startup_candle_count for freqai on {timeframe} to {total_candles}')
return total_candles
def get_pair_dataframe(
self,

View File

@ -319,10 +319,11 @@ class Exchange:
"""
pass
def _log_exchange_response(self, endpoint, response) -> None:
def _log_exchange_response(self, endpoint: str, response, *, add_info=None) -> None:
""" Log exchange responses """
if self.log_responses:
logger.info(f"API {endpoint}: {response}")
add_info_str = "" if add_info is None else f" {add_info}: "
logger.info(f"API {endpoint}: {add_info_str}{response}")
def ohlcv_candle_limit(
self, timeframe: str, candle_type: CandleType, since_ms: Optional[int] = None) -> int:
@ -1384,7 +1385,7 @@ class Exchange:
order = self.fetch_stoploss_order(order_id, pair)
except InvalidOrderException:
logger.warning(f"Could not fetch cancelled stoploss order {order_id}.")
order = {'fee': {}, 'status': 'canceled', 'amount': amount, 'info': {}}
order = {'id': order_id, 'fee': {}, 'status': 'canceled', 'amount': amount, 'info': {}}
return order
@ -2414,6 +2415,8 @@ class Exchange:
symbol=pair,
since=since
)
self._log_exchange_response('funding_history', funding_history,
add_info=f"pair: {pair}, since: {since}")
return sum(fee['amount'] for fee in funding_history)
except ccxt.DDoSProtection as e:
raise DDosProtection(e) from e

View File

@ -26,6 +26,7 @@ class Kraken(Exchange):
"stoploss_on_exchange": True,
"stop_price_param": "stopPrice",
"stop_price_prop": "stopPrice",
"order_time_in_force": ["GTC", "IOC", "PO"],
"ohlcv_candle_limit": 720,
"ohlcv_has_history": False,
"trades_pagination": "id",
@ -187,6 +188,9 @@ class Kraken(Exchange):
)
if leverage > 1.0:
params['leverage'] = round(leverage)
if time_in_force == 'PO':
params.pop('timeInForce', None)
params['postOnly'] = True
return params
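The net effect of the post-only branch above, matching the parametrized test further down: for `time_in_force == 'PO'` the ccxt `timeInForce` entry is dropped and `postOnly` is set instead. A tiny standalone sketch (the starting params are assumed to already contain Kraken's `trading_agreement`):
```python
# Illustrative: how a "PO" time-in-force ends up in the ccxt params.
params = {'trading_agreement': 'agree', 'timeInForce': 'PO'}
time_in_force = 'PO'
if time_in_force == 'PO':
    params.pop('timeInForce', None)
    params['postOnly'] = True
print(params)  # {'trading_agreement': 'agree', 'postOnly': True}
```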
def calculate_funding_fees(

View File

@ -228,7 +228,7 @@ class Okx(Exchange):
f'StoplossOrder not found (pair: {pair} id: {order_id}).')
def get_order_id_conditional(self, order: Dict[str, Any]) -> str:
if order['type'] == 'stop':
if order.get('type', '') == 'stop':
return safe_value_fallback2(order, order, 'id_stop', 'id')
return order['id']

View File

@ -1,9 +1,8 @@
import numpy as np
from joblib import Parallel
from sklearn.base import is_classifier
from sklearn.multioutput import MultiOutputClassifier, _fit_estimator
from sklearn.utils.fixes import delayed
from sklearn.utils.multiclass import check_classification_targets
from sklearn.utils.parallel import Parallel, delayed
from sklearn.utils.validation import has_fit_parameter
from freqtrade.exceptions import OperationalException

View File

@ -1,6 +1,5 @@
from joblib import Parallel
from sklearn.multioutput import MultiOutputRegressor, _fit_estimator
from sklearn.utils.fixes import delayed
from sklearn.utils.parallel import Parallel, delayed
from sklearn.utils.validation import has_fit_parameter

View File

@ -709,6 +709,8 @@ class FreqaiDataKitchen:
pair, tf, strategy, corr_dataframes, base_dataframes, is_corr_pairs)
informative_copy = informative_df.copy()
logger.debug(f"Populating features for {pair} {tf}")
for t in self.freqai_config["feature_parameters"]["indicator_periods_candles"]:
df_features = strategy.feature_engineering_expand_all(
informative_copy.copy(), t, metadata=metadata)
@ -788,6 +790,7 @@ class FreqaiDataKitchen:
if not prediction_dataframe.empty:
dataframe = prediction_dataframe.copy()
base_dataframes[self.config["timeframe"]] = dataframe.copy()
else:
dataframe = base_dataframes[self.config["timeframe"]].copy()

View File

@ -669,20 +669,13 @@ class FreqtradeBot(LoggingMixin):
amount = self.exchange.amount_to_contract_precision(
trade.pair,
abs(float(FtPrecise(stake_amount * trade.leverage) / FtPrecise(current_exit_rate))))
if amount > trade.amount:
# This is currently ineffective as remaining would become < min tradable
# Fixing this would require checking for 0.0 there -
# if we decide that this callback is allowed to "fully exit"
logger.info(
f"Adjusting amount to trade.amount as it is higher. {amount} > {trade.amount}")
amount = trade.amount
if amount == 0.0:
logger.info("Amount to exit is 0.0 due to exchange limits - not exiting.")
return
remaining = (trade.amount - amount) * current_exit_rate
if min_exit_stake and remaining < min_exit_stake:
if min_exit_stake and remaining != 0 and remaining < min_exit_stake:
logger.info(f"Remaining amount of {remaining} would be smaller "
f"than the minimum of {min_exit_stake}.")
return
@ -900,7 +893,7 @@ class FreqtradeBot(LoggingMixin):
# First cancelling stoploss on exchange ...
for oslo in trade.open_sl_orders:
try:
logger.info(f"Canceling stoploss on exchange for {trade} "
logger.info(f"Cancelling stoploss on exchange for {trade} "
f"order: {oslo.order_id}")
co = self.exchange.cancel_stoploss_order_with_result(
oslo.order_id, trade.pair, trade.amount)

View File

@ -145,13 +145,14 @@ class Backtesting:
self.required_startup = max([strat.startup_candle_count for strat in self.strategylist])
self.exchange.validate_required_startup_candles(self.required_startup, self.timeframe)
if self.config.get('freqai', {}).get('enabled', False):
# For FreqAI, increase the required_startup to include the training data
self.required_startup = self.dataprovider.get_required_startup(self.timeframe)
# Add maximum startup candle count to configuration for informative pairs support
self.config['startup_candle_count'] = self.required_startup
if self.config.get('freqai', {}).get('enabled', False):
# For FreqAI, increase the required_startup to include the training data
# This value should NOT be written to startup_candle_count
self.required_startup = self.dataprovider.get_required_startup(self.timeframe)
self.trading_mode: TradingMode = config.get('trading_mode', TradingMode.SPOT)
# strategies which define "can_short=True" will fail to load in Spot mode.
self._can_short = self.trading_mode != TradingMode.SPOT
@ -239,7 +240,7 @@ class Backtesting:
pairs=self.pairlists.whitelist,
timeframe=self.timeframe,
timerange=self.timerange,
startup_candles=self.config['startup_candle_count'],
startup_candles=self.required_startup,
fail_without_data=True,
data_format=self.config['dataformat_ohlcv'],
candle_type=self.config.get('candle_type_def', CandleType.SPOT)
@ -530,7 +531,7 @@ class Backtesting:
def _get_adjust_trade_entry_for_candle(
self, trade: LocalTrade, row: Tuple, current_time: datetime
) -> LocalTrade:
current_rate = row[OPEN_IDX]
current_rate: float = row[OPEN_IDX]
current_profit = trade.calc_profit_ratio(current_rate)
min_stake = self.exchange.get_min_pair_stake_amount(trade.pair, current_rate, -0.1)
max_stake = self.exchange.get_max_pair_stake_amount(trade.pair, current_rate)
@ -563,11 +564,8 @@ class Backtesting:
self.precision_mode, trade.contract_size)
if amount == 0.0:
return trade
if amount > trade.amount:
# This is currently ineffective as remaining would become < min tradable
amount = trade.amount
remaining = (trade.amount - amount) * current_rate
if remaining < min_stake:
if min_stake and remaining != 0 and remaining < min_stake:
# Remaining stake is too low to be sold.
return trade
exit_ = ExitCheckTuple(ExitType.PARTIAL_EXIT)

View File

@ -561,6 +561,10 @@ def generate_backtest_stats(btdata: Dict[str, DataFrame],
metadata[strategy] = {
'run_id': content['run_id'],
'backtest_start_time': content['backtest_start_time'],
'timeframe': content['config']['timeframe'],
'timeframe_detail': content['config'].get('timeframe_detail', None),
'backtest_start_ts': int(min_date.timestamp()),
'backtest_end_ts': int(max_date.timestamp()),
}
result['strategy'][strategy] = strat_stats

View File

@ -156,20 +156,20 @@ class Order(ModelBase):
if self.order_id != str(order['id']):
raise DependencyException("Order-id's don't match")
self.status = order.get('status', self.status)
self.symbol = order.get('symbol', self.symbol)
self.order_type = order.get('type', self.order_type)
self.side = order.get('side', self.side)
self.price = order.get('price', self.price)
self.amount = order.get('amount', self.amount)
self.filled = order.get('filled', self.filled)
self.average = order.get('average', self.average)
self.remaining = order.get('remaining', self.remaining)
self.cost = order.get('cost', self.cost)
self.stop_price = order.get('stopPrice', self.stop_price)
if 'timestamp' in order and order['timestamp'] is not None:
self.order_date = datetime.fromtimestamp(order['timestamp'] / 1000, tz=timezone.utc)
self.status = safe_value_fallback(order, 'status', default_value=self.status)
self.symbol = safe_value_fallback(order, 'symbol', default_value=self.symbol)
self.order_type = safe_value_fallback(order, 'type', default_value=self.order_type)
self.side = safe_value_fallback(order, 'side', default_value=self.side)
self.price = safe_value_fallback(order, 'price', default_value=self.price)
self.amount = safe_value_fallback(order, 'amount', default_value=self.amount)
self.filled = safe_value_fallback(order, 'filled', default_value=self.filled)
self.average = safe_value_fallback(order, 'average', default_value=self.average)
self.remaining = safe_value_fallback(order, 'remaining', default_value=self.remaining)
self.cost = safe_value_fallback(order, 'cost', default_value=self.cost)
self.stop_price = safe_value_fallback(order, 'stopPrice', default_value=self.stop_price)
order_date = safe_value_fallback(order, 'timestamp')
if order_date:
self.order_date = datetime.fromtimestamp(order_date / 1000, tz=timezone.utc)
self.ft_is_open = True
if self.status in NON_OPEN_EXCHANGE_STATES:
@ -1626,7 +1626,7 @@ class Trade(ModelBase, LocalTrade):
:return: unsorted query object
"""
query = Trade.get_trades_query(trade_filter, include_orders)
# this sholud remain split. if use_db is False, session is not available and the above will
# this should remain split. if use_db is False, session is not available and the above will
# raise an exception.
return Trade.session.scalars(query)
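The motivation for replacing `dict.get()` with `safe_value_fallback()` above: exchanges can return a key with an explicit `None` value, and `.get()` would copy that `None` over previously known data. A minimal self-contained sketch of the difference (a simplified stand-in, not the real helper):
```python
# Simplified stand-in: fall back to the default when the key is missing
# *or* its value is None - unlike dict.get(), which only handles "missing".
def safe_value_fallback_demo(obj: dict, key: str, default_value=None):
    value = obj.get(key)
    return value if value is not None else default_value

ccxt_order = {"status": "open", "average": None}

print(ccxt_order.get("average", 1.23))                        # None
print(safe_value_fallback_demo(ccxt_order, "average", 1.23))  # 1.23
```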

View File

@ -537,6 +537,10 @@ class BacktestHistoryEntry(BaseModel):
run_id: str
backtest_start_time: int
notes: Optional[str] = ''
backtest_start_ts: Optional[int] = None
backtest_end_ts: Optional[int] = None
timeframe: Optional[str] = None
timeframe_detail: Optional[str] = None
class BacktestMetadataUpdate(BaseModel):

View File

@ -10,7 +10,7 @@ import re
from copy import deepcopy
from dataclasses import dataclass
from datetime import date, datetime, timedelta
from functools import partial
from functools import partial, wraps
from html import escape
from itertools import chain
from math import isnan
@ -44,6 +44,23 @@ logger = logging.getLogger(__name__)
logger.debug('Included module rpc.telegram ...')
def safe_async_db(func: Callable[..., Any]):
"""
Decorator to safely handle sessions when switching async context
:param func: function to decorate
:return: decorated function
"""
@wraps(func)
def wrapper(*args, **kwargs):
""" Decorator logic """
try:
return func(*args, **kwargs)
finally:
Trade.session.remove()
return wrapper
@dataclass
class TimeunitMappings:
header: str
@ -61,6 +78,7 @@ def authorized_only(command_handler: Callable[..., Coroutine[Any, Any, None]]):
:return: decorated function
"""
@wraps(command_handler)
async def wrapper(self, *args, **kwargs):
""" Decorator logic """
update = kwargs.get('update') or args[0]
@ -1150,7 +1168,7 @@ class Telegram(RPCHandler):
try:
loop = asyncio.get_running_loop()
# Workaround to avoid nested loops
await loop.run_in_executor(None, self._rpc._rpc_force_exit, trade_id)
await loop.run_in_executor(None, safe_async_db(self._rpc._rpc_force_exit), trade_id)
except RPCException as e:
await self._send_msg(str(e))
@ -1176,6 +1194,7 @@ class Telegram(RPCHandler):
async def _force_enter_action(self, pair, price: Optional[float], order_side: SignalDirection):
if pair != 'cancel':
try:
@safe_async_db
def _force_enter():
self._rpc._rpc_force_entry(pair, price, order_side=order_side)
loop = asyncio.get_running_loop()

View File

@ -29,7 +29,7 @@ class FreqaiExampleHybridStrategy(IStrategy):
"enabled": true,
"purge_old_models": 2,
"train_period_days": 15,
"identifier": "uniqe-id",
"identifier": "unique-id",
"feature_parameters": {
"include_timeframes": [
"3m",

View File

@ -1,4 +1,4 @@
from typing import Any, Dict, List
from typing import Any, Dict, List, Optional
from typing_extensions import TypedDict
@ -26,3 +26,7 @@ class BacktestHistoryEntryType(BacktestMetadataType):
filename: str
strategy: str
notes: str
backtest_start_ts: Optional[int]
backtest_end_ts: Optional[int]
timeframe: Optional[str]
timeframe_detail: Optional[str]

View File

@ -80,6 +80,7 @@ skip_glob = ["**/.env*", "**/env/*", "**/.venv/*", "**/docs/*", "**/user_data/*"
[tool.pytest.ini_options]
asyncio_mode = "auto"
addopts = "--dist loadscope"
[tool.mypy]
ignore_missing_imports = true

View File

@ -7,24 +7,25 @@
-r docs/requirements-docs.txt
coveralls==3.3.1
ruff==0.1.8
ruff==0.1.9
mypy==1.8.0
pre-commit==3.6.0
pytest==7.4.3
pytest==7.4.4
pytest-asyncio==0.21.1
pytest-cov==4.1.0
pytest-mock==3.12.0
pytest-random-order==1.1.0
pytest-xdist==3.5.0
isort==5.13.2
# For datetime mocking
time-machine==2.13.0
# Convert jupyter notebooks to markdown documents
nbconvert==7.12.0
nbconvert==7.13.1
# mypy types
types-cachetools==5.3.0.7
types-filelock==3.2.7
types-requests==2.31.0.10
types-requests==2.31.0.20231231
types-tabulate==0.9.0.3
types-python-dateutil==2.8.19.14

View File

@ -6,7 +6,7 @@
scikit-learn==1.3.2
joblib==1.3.2
catboost==1.2.2; 'arm' not in platform_machine
lightgbm==4.1.0
xgboost==2.0.2
lightgbm==4.2.0
xgboost==2.0.3
tensorboard==2.15.1
datasieve==0.1.7

View File

@ -2,7 +2,7 @@ numpy==1.26.2
pandas==2.1.4
pandas-ta==0.3.14b
ccxt==4.1.91
ccxt==4.2.2
cryptography==41.0.7
aiohttp==3.9.1
SQLAlchemy==2.0.23
@ -22,7 +22,7 @@ jinja2==3.1.2
tables==3.9.1
joblib==1.3.2
rich==13.7.0
pyarrow==14.0.1; platform_machine != 'armv7l'
pyarrow==14.0.2; platform_machine != 'armv7l'
# find first, C search in arrays
py_find_1st==1.1.6
@ -36,9 +36,9 @@ orjson==3.9.10
sdnotify==0.3.2
# API Server
fastapi==0.105.0
pydantic==2.5.2
uvicorn==0.24.0.post1
fastapi==0.108.0
pydantic==2.5.3
uvicorn==0.25.0
pyjwt==2.8.0
aiofiles==23.2.1
psutil==5.9.7
@ -58,5 +58,5 @@ schedule==1.2.1
websockets==12.0
janus==1.0.0
ast-comments==1.2.0
ast-comments==1.2.1
packaging==23.2

View File

@ -11,6 +11,7 @@ from unittest.mock import MagicMock, Mock, PropertyMock
import numpy as np
import pandas as pd
import pytest
from xdist.scheduler.loadscope import LoadScopeScheduling
from freqtrade import constants
from freqtrade.commands import Arguments
@ -56,6 +57,27 @@ def pytest_configure(config):
setattr(config.option, 'markexpr', 'not longrun')
class FixtureScheduler(LoadScopeScheduling):
# Based on the suggestion in
# https://github.com/pytest-dev/pytest-xdist/issues/18
def _split_scope(self, nodeid):
if 'exchange_online' in nodeid:
try:
# Extract exchange ID from nodeid
exchange_id = nodeid.split('[')[1].split('-')[0].rstrip(']')
return exchange_id
except Exception as e:
print(e)
pass
return nodeid
def pytest_xdist_make_scheduler(config, log):
return FixtureScheduler(config, log)
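To illustrate what the scheduler above does with an exchange-online test: the exchange id is parsed out of the nodeid, so all tests for one exchange land in the same xdist worker. A small sketch with a made-up nodeid:
```python
# Made-up nodeid; mirrors the parsing in FixtureScheduler._split_scope().
nodeid = "tests/exchange_online/test_ccxt_compat.py::TestCCXTExchange::test_load_markets[binance-spot]"
exchange_id = nodeid.split('[')[1].split('-')[0].rstrip(']')
print(exchange_id)  # binance
```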
def log_has(line, logs):
"""Check if line is found on some caplog's message."""
return any(line == message for message in logs.messages)

View File

@ -508,16 +508,13 @@ def test_dp_get_required_startup(default_conf_usdt):
dp = DataProvider(default_conf_usdt, None)
# No FreqAI config
assert dp.get_required_startup('5m', False) == 0
assert dp.get_required_startup('1h', False) == 0
assert dp.get_required_startup('1d', False) == 0
assert dp.get_required_startup('1d', True) == 0
assert dp.get_required_startup('5m') == 0
assert dp.get_required_startup('1h') == 0
assert dp.get_required_startup('1d') == 0
dp._config['startup_candle_count'] = 20
assert dp.get_required_startup('5m', False) == 20
assert dp.get_required_startup('5m', True) == 20
assert dp.get_required_startup('1h', False) == 20
assert dp.get_required_startup('5m') == 20
assert dp.get_required_startup('1h') == 20
assert dp.get_required_startup('1h') == 20
# With freqAI config
@ -532,37 +529,19 @@ def test_dp_get_required_startup(default_conf_usdt):
]
}
}
assert dp.get_required_startup('5m', False) == 20
assert dp.get_required_startup('5m', True) == 5780
assert dp.get_required_startup('1h', False) == 20
assert dp.get_required_startup('1h', True) == 500
assert dp.get_required_startup('1d', False) == 20
assert dp.get_required_startup('1d', True) == 40
assert dp.get_required_startup('5m') == 5780
assert dp.get_required_startup('1h') == 500
assert dp.get_required_startup('1d') == 40
# FreqAI kind of ignores startup_candle_count if it's below indicator_periods_candles
dp._config['startup_candle_count'] = 0
assert dp.get_required_startup('5m', False) == 20
assert dp.get_required_startup('5m', True) == 5780
assert dp.get_required_startup('1h', False) == 20
assert dp.get_required_startup('1h', True) == 500
assert dp.get_required_startup('1d', False) == 20
assert dp.get_required_startup('1d', True) == 40
assert dp.get_required_startup('5m') == 5780
assert dp.get_required_startup('1h') == 500
assert dp.get_required_startup('1d') == 40
dp._config['freqai']['feature_parameters']['indicator_periods_candles'][1] = 50
assert dp.get_required_startup('5m', False) == 50
assert dp.get_required_startup('5m', True) == 5810
assert dp.get_required_startup('1h', False) == 50
assert dp.get_required_startup('1h', True) == 530
assert dp.get_required_startup('1d', False) == 50
assert dp.get_required_startup('1d', True) == 70
assert dp.get_required_startup('5m') == 5810
assert dp.get_required_startup('1h') == 530
assert dp.get_required_startup('1d') == 70
# scenario from issue https://github.com/freqtrade/freqtrade/issues/9432
@ -577,12 +556,6 @@ def test_dp_get_required_startup(default_conf_usdt):
}
}
dp._config['startup_candle_count'] = 40
assert dp.get_required_startup('5m', False) == 40
assert dp.get_required_startup('5m', True) == 51880
assert dp.get_required_startup('1h', False) == 40
assert dp.get_required_startup('1h', True) == 4360
assert dp.get_required_startup('1d', False) == 40
assert dp.get_required_startup('1d', True) == 220
assert dp.get_required_startup('5m') == 51880
assert dp.get_required_startup('1h') == 4360
assert dp.get_required_startup('1d') == 220
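For reference, the asserted values follow directly from `get_required_startup()`: startup candles plus the FreqAI training window converted to candles. A small worked sketch of the last scenario (a `train_period_days` of 180 is inferred from the asserted numbers, not quoted from the config):
```python
# Reproduces the 51880 / 4360 / 220 assertions above (values inferred).
startup_candle_count = 40
train_period_days = 180
for timeframe, tf_seconds in (("5m", 300), ("1h", 3600), ("1d", 86400)):
    train_candles = train_period_days * 86400 / tf_seconds
    print(timeframe, int(startup_candle_count + train_candles))
# 5m 51880
# 1h 4360
# 1d 220
```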

View File

@ -3194,7 +3194,7 @@ def test_cancel_stoploss_order_with_result(default_conf, mocker, exchange_name):
mocker.patch(f'{mock_prefix}.fetch_stoploss_order', side_effect=exc)
co = exchange.cancel_stoploss_order_with_result(order_id='_', pair='TKN/BTC', amount=555)
assert co['amount'] == 555
assert co == {'fee': {}, 'status': 'canceled', 'amount': 555, 'info': {}}
assert co == {'id': '_', 'fee': {}, 'status': 'canceled', 'amount': 555, 'info': {}}
with pytest.raises(InvalidOrderException):
exc = InvalidOrderException("Did not find order")

View File

@ -13,11 +13,14 @@ STOPLOSS_ORDERTYPE = 'stop-loss'
STOPLOSS_LIMIT_ORDERTYPE = 'stop-loss-limit'
def test_buy_kraken_trading_agreement(default_conf, mocker):
@pytest.mark.parametrize("order_type,time_in_force,expected_params", [
('limit', 'ioc', {'timeInForce': 'IOC', 'trading_agreement': 'agree'}),
('limit', 'PO', {'postOnly': True, 'trading_agreement': 'agree'}),
('market', None, {'trading_agreement': 'agree'})
])
def test_kraken_trading_agreement(default_conf, mocker, order_type, time_in_force, expected_params):
api_mock = MagicMock()
order_id = f'test_prod_buy_{randint(0, 10 ** 6)}'
order_type = 'limit'
time_in_force = 'ioc'
order_id = f'test_prod_{order_type}_{randint(0, 10 ** 6)}'
api_mock.options = {}
api_mock.create_order = MagicMock(return_value={
'id': order_id,
@ -49,41 +52,9 @@ def test_buy_kraken_trading_agreement(default_conf, mocker):
assert api_mock.create_order.call_args[0][1] == order_type
assert api_mock.create_order.call_args[0][2] == 'buy'
assert api_mock.create_order.call_args[0][3] == 1
assert api_mock.create_order.call_args[0][4] == 200
assert api_mock.create_order.call_args[0][5] == {'timeInForce': 'IOC',
'trading_agreement': 'agree'}
assert api_mock.create_order.call_args[0][4] == (200 if order_type == 'limit' else None)
def test_sell_kraken_trading_agreement(default_conf, mocker):
api_mock = MagicMock()
order_id = f'test_prod_sell_{randint(0, 10 ** 6)}'
order_type = 'market'
api_mock.options = {}
api_mock.create_order = MagicMock(return_value={
'id': order_id,
'symbol': 'ETH/BTC',
'info': {
'foo': 'bar'
}
})
default_conf['dry_run'] = False
mocker.patch(f'{EXMS}.amount_to_precision', lambda s, x, y: y)
mocker.patch(f'{EXMS}.price_to_precision', lambda s, x, y: y)
exchange = get_patched_exchange(mocker, default_conf, api_mock, id="kraken")
order = exchange.create_order(pair='ETH/BTC', ordertype=order_type,
side="sell", amount=1, rate=200, leverage=1.0)
assert 'id' in order
assert 'info' in order
assert order['id'] == order_id
assert api_mock.create_order.call_args[0][0] == 'ETH/BTC'
assert api_mock.create_order.call_args[0][1] == order_type
assert api_mock.create_order.call_args[0][2] == 'sell'
assert api_mock.create_order.call_args[0][3] == 1
assert api_mock.create_order.call_args[0][4] is None
assert api_mock.create_order.call_args[0][5] == {'trading_agreement': 'agree'}
assert api_mock.create_order.call_args[0][5] == expected_params
def test_get_balances_prod(default_conf, mocker):

View File

@ -54,7 +54,7 @@ def freqai_conf(default_conf, tmp_path):
"backtest_period_days": 10,
"live_retrain_hours": 0,
"expiration_hours": 1,
"identifier": "uniqe-id100",
"identifier": "unique-id100",
"live_trained_timestamp": 0,
"data_kitchen_thread_count": 2,
"activate_tensorboard": False,

View File

@ -6,11 +6,17 @@ from unittest.mock import PropertyMock
import pytest
from freqtrade.commands.optimize_commands import setup_optimize_configuration
from freqtrade.configuration.timerange import TimeRange
from freqtrade.data import history
from freqtrade.data.dataprovider import DataProvider
from freqtrade.enums import RunMode
from freqtrade.enums.candletype import CandleType
from freqtrade.exceptions import OperationalException
from freqtrade.freqai.data_kitchen import FreqaiDataKitchen
from freqtrade.optimize.backtesting import Backtesting
from tests.conftest import (CURRENT_TEST_STRATEGY, get_args, log_has_re, patch_exchange,
patched_configuration_load_config_file)
from tests.conftest import (CURRENT_TEST_STRATEGY, get_args, get_patched_exchange, log_has_re,
patch_exchange, patched_configuration_load_config_file)
from tests.freqai.conftest import get_patched_freqai_strategy
def test_freqai_backtest_start_backtest_list(freqai_conf, mocker, testdatadir, caplog):
@ -40,7 +46,16 @@ def test_freqai_backtest_start_backtest_list(freqai_conf, mocker, testdatadir, c
Backtesting.cleanup()
def test_freqai_backtest_load_data(freqai_conf, mocker, caplog):
@pytest.mark.parametrize(
"timeframe, expected_startup_candle_count",
[
("5m", 876),
("15m", 492),
("1d", 302),
],
)
def test_freqai_backtest_load_data(freqai_conf, mocker, caplog,
timeframe, expected_startup_candle_count):
patch_exchange(mocker)
now = datetime.now(timezone.utc)
@ -48,10 +63,14 @@ def test_freqai_backtest_load_data(freqai_conf, mocker, caplog):
PropertyMock(return_value=['HULUMULU/USDT', 'XRP/USDT']))
mocker.patch('freqtrade.optimize.backtesting.history.load_data')
mocker.patch('freqtrade.optimize.backtesting.history.get_timerange', return_value=(now, now))
freqai_conf['timeframe'] = timeframe
freqai_conf.get('freqai', {}).get('feature_parameters', {}).update({'include_timeframes': []})
backtesting = Backtesting(deepcopy(freqai_conf))
backtesting.load_bt_data()
assert log_has_re('Increasing startup_candle_count for freqai to.*', caplog)
assert log_has_re(f'Increasing startup_candle_count for freqai on {timeframe} '
f'to {expected_startup_candle_count}', caplog)
assert history.load_data.call_args[1]['startup_candles'] == expected_startup_candle_count
Backtesting.cleanup()
@ -85,3 +104,34 @@ def test_freqai_backtest_live_models_model_not_found(freqai_conf, mocker, testda
Backtesting(bt_config)
Backtesting.cleanup()
def test_freqai_backtest_consistent_timerange(mocker, freqai_conf):
mocker.patch('freqtrade.plugins.pairlistmanager.PairListManager.whitelist',
PropertyMock(return_value=['XRP/USDT:USDT']))
gbs = mocker.patch('freqtrade.optimize.backtesting.generate_backtest_stats')
freqai_conf['candle_type_def'] = CandleType.FUTURES
freqai_conf.get('exchange', {}).update({'pair_whitelist': ['XRP/USDT:USDT']})
freqai_conf.get('freqai', {}).get('feature_parameters', {}).update(
{'include_timeframes': ['5m', '1h'], 'include_corr_pairlist': []})
freqai_conf['timerange'] = '20211120-20211121'
strategy = get_patched_freqai_strategy(mocker, freqai_conf)
exchange = get_patched_exchange(mocker, freqai_conf)
strategy.dp = DataProvider(freqai_conf, exchange)
strategy.freqai_info = freqai_conf.get("freqai", {})
freqai = strategy.freqai
freqai.dk = FreqaiDataKitchen(freqai_conf)
timerange = TimeRange.parse_timerange("20211115-20211122")
freqai.dd.load_all_pair_histories(timerange, freqai.dk)
backtesting = Backtesting(deepcopy(freqai_conf))
backtesting.start()
assert gbs.call_args[1]['min_date'] == datetime(2021, 11, 20, 0, 0, tzinfo=timezone.utc)
assert gbs.call_args[1]['max_date'] == datetime(2021, 11, 21, 0, 0, tzinfo=timezone.utc)
Backtesting.cleanup()

View File

@ -3,6 +3,7 @@ from datetime import datetime, timedelta, timezone
from pathlib import Path
from unittest.mock import MagicMock
import pandas as pd
import pytest
from freqtrade.configuration import TimeRange
@ -135,3 +136,63 @@ def test_get_full_model_path(mocker, freqai_conf, model):
model_path = freqai.dk.get_full_models_path(freqai_conf)
assert model_path.is_dir() is True
def test_get_pair_data_for_features_with_prealoaded_data(mocker, freqai_conf):
strategy = get_patched_freqai_strategy(mocker, freqai_conf)
exchange = get_patched_exchange(mocker, freqai_conf)
strategy.dp = DataProvider(freqai_conf, exchange)
strategy.freqai_info = freqai_conf.get("freqai", {})
freqai = strategy.freqai
freqai.dk = FreqaiDataKitchen(freqai_conf)
timerange = TimeRange.parse_timerange("20180110-20180130")
freqai.dd.load_all_pair_histories(timerange, freqai.dk)
_, base_df = freqai.dd.get_base_and_corr_dataframes(timerange, "LTC/BTC", freqai.dk)
df = freqai.dk.get_pair_data_for_features("LTC/BTC", "5m", strategy, base_dataframes=base_df)
assert df is base_df["5m"]
assert not df.empty
def test_get_pair_data_for_features_without_preloaded_data(mocker, freqai_conf):
freqai_conf.update({"timerange": "20180115-20180130"})
strategy = get_patched_freqai_strategy(mocker, freqai_conf)
exchange = get_patched_exchange(mocker, freqai_conf)
strategy.dp = DataProvider(freqai_conf, exchange)
strategy.freqai_info = freqai_conf.get("freqai", {})
freqai = strategy.freqai
freqai.dk = FreqaiDataKitchen(freqai_conf)
timerange = TimeRange.parse_timerange("20180110-20180130")
freqai.dd.load_all_pair_histories(timerange, freqai.dk)
base_df = {'5m': pd.DataFrame()}
df = freqai.dk.get_pair_data_for_features("LTC/BTC", "5m", strategy, base_dataframes=base_df)
assert df is not base_df["5m"]
assert not df.empty
assert df.iloc[0]['date'].strftime("%Y-%m-%d %H:%M:%S") == "2018-01-11 23:00:00"
assert df.iloc[-1]['date'].strftime("%Y-%m-%d %H:%M:%S") == "2018-01-30 00:00:00"
def test_populate_features(mocker, freqai_conf):
strategy = get_patched_freqai_strategy(mocker, freqai_conf)
exchange = get_patched_exchange(mocker, freqai_conf)
strategy.dp = DataProvider(freqai_conf, exchange)
strategy.freqai_info = freqai_conf.get("freqai", {})
freqai = strategy.freqai
freqai.dk = FreqaiDataKitchen(freqai_conf)
timerange = TimeRange.parse_timerange("20180115-20180130")
freqai.dd.load_all_pair_histories(timerange, freqai.dk)
corr_df, base_df = freqai.dd.get_base_and_corr_dataframes(timerange, "LTC/BTC", freqai.dk)
mocker.patch.object(strategy, 'feature_engineering_expand_all', return_value=base_df["5m"])
df = freqai.dk.populate_features(base_df["5m"], "LTC/BTC", strategy,
base_dataframes=base_df, corr_dataframes=corr_df)
strategy.feature_engineering_expand_all.assert_called_once()
pd.testing.assert_frame_equal(base_df["5m"],
strategy.feature_engineering_expand_all.call_args[0][0])
assert df.iloc[0]['date'].strftime("%Y-%m-%d %H:%M:%S") == "2018-01-15 00:00:00"

View File

@ -20,8 +20,8 @@ from tests.freqai.conftest import (get_patched_freqai_strategy, is_mac, make_rl_
mock_pytorch_mlp_model_training_parameters)
def is_py11() -> bool:
return sys.version_info >= (3, 11)
def is_py12() -> bool:
return sys.version_info >= (3, 12)
def is_arm() -> bool:
@ -523,8 +523,8 @@ def test_get_state_info(mocker, freqai_conf, dp_exists, caplog, tickers):
if is_mac():
pytest.skip("Reinforcement learning module not available on intel based Mac OS")
if is_py11():
pytest.skip("Reinforcement learning currently not available on python 3.11.")
if is_py12():
pytest.skip("Reinforcement learning currently not available on python 3.12.")
freqai_conf.update({"freqaimodel": "ReinforcementLearner"})
freqai_conf.update({"timerange": "20180110-20180130"})

View File

@ -1604,12 +1604,15 @@ def test_create_stoploss_order_insufficient_funds(
])
@pytest.mark.usefixtures("init_persistence")
def test_handle_stoploss_on_exchange_trailing(
mocker, default_conf_usdt, fee, is_short, bid, ask, limit_order, stop_price, hang_price
mocker, default_conf_usdt, fee, is_short, bid, ask, limit_order, stop_price, hang_price,
time_machine,
) -> None:
# When trailing stoploss is set
enter_order = limit_order[entry_side(is_short)]
exit_order = limit_order[exit_side(is_short)]
stoploss = MagicMock(return_value={'id': 13434334, 'status': 'open'})
stoploss = MagicMock(return_value={'id': '13434334', 'status': 'open'})
start_dt = dt_now()
time_machine.move_to(start_dt, tick=False)
patch_RPCManager(mocker)
mocker.patch.multiple(
EXMS,
@ -1683,6 +1686,8 @@ def test_handle_stoploss_on_exchange_trailing(
assert freqtrade.handle_trade(trade) is False
assert freqtrade.handle_stoploss_on_exchange(trade) is False
assert trade.stoploss_order_id == '13434334'
# price jumped 2x
mocker.patch(
f'{EXMS}.fetch_ticker',
@ -1704,16 +1709,15 @@ def test_handle_stoploss_on_exchange_trailing(
cancel_order_mock.assert_not_called()
stoploss_order_mock.assert_not_called()
# Move time by 10 minutes ... so the stoploss order should be replaced.
time_machine.move_to(start_dt + timedelta(minutes=10), tick=False)
assert freqtrade.handle_trade(trade) is False
assert trade.stop_loss == stop_price[1]
trade.stoploss_order_id = '100'
# setting stoploss_on_exchange_interval to 0 seconds
freqtrade.strategy.order_types['stoploss_on_exchange_interval'] = 0
assert freqtrade.handle_stoploss_on_exchange(trade) is False
cancel_order_mock.assert_called_once_with('100', 'ETH/USDT')
cancel_order_mock.assert_called_once_with('13434334', 'ETH/USDT')
stoploss_order_mock.assert_called_once_with(
amount=30,
pair='ETH/USDT',

View File

@ -650,28 +650,42 @@ def test_dca_exiting(default_conf_usdt, ticker_usdt, fee, mocker, caplog, levera
caplog.clear()
# Sell more than what we got (we got ~20 coins left)
# First adjusts the amount to 20 - then rejects.
# Doesn't exit, as the amount is too high.
freqtrade.strategy.adjust_trade_position = MagicMock(return_value=-50)
freqtrade.process()
assert log_has_re("Adjusting amount to trade.amount as it is higher.*", caplog)
assert log_has_re("Remaining amount of 0.0 would be smaller than the minimum of 10.", caplog)
trade = Trade.get_trades().first()
assert len(trade.orders) == 2
# Amount too low...
freqtrade.strategy.adjust_trade_position = MagicMock(return_value=-(trade.stake_amount * 0.99))
freqtrade.process()
trade = Trade.get_trades().first()
assert len(trade.orders) == 2
# Amount comes out as exactly 0
freqtrade.strategy.adjust_trade_position = MagicMock(
return_value=-(trade.amount / trade.leverage * 2.02))
freqtrade.process()
trade = Trade.get_trades().first()
assert len(trade.orders) == 3
assert trade.orders[-1].ft_order_side == 'sell'
assert pytest.approx(trade.stake_amount) == 40.198
assert trade.is_open
assert trade.is_open is False
# use amount that would trunc to 0.0 once selling
mocker.patch(f"{EXMS}.amount_to_contract_precision", lambda s, p, v: round(v, 1))
freqtrade.strategy.adjust_trade_position = MagicMock(return_value=-0.01)
freqtrade.process()
trade = Trade.get_trades().first()
assert len(trade.orders) == 2
assert len(trade.orders) == 3
assert trade.orders[-1].ft_order_side == 'sell'
assert pytest.approx(trade.stake_amount) == 40.198
assert trade.is_open
assert trade.is_open is False
assert log_has_re('Amount to exit is 0.0 due to exchange limits - not exiting.', caplog)
expected_profit = starting_amount - 40.1980 + trade.realized_profit
expected_profit = starting_amount - 60 + trade.realized_profit
assert pytest.approx(freqtrade.wallets.get_free('USDT')) == expected_profit
if spot:
assert pytest.approx(freqtrade.wallets.get_total('USDT')) == expected_profit