split example notebooks

This commit is contained in:
Jonathan Raviotta 2019-08-11 12:52:37 -04:00
parent 161db08745
commit 2cffc3228a
4 changed files with 514 additions and 160 deletions

9
.gitignore vendored
View File

@ -6,7 +6,10 @@ config*.json
.hyperopt
logfile.txt
hyperopt_trials.pickle
user_data/
user_data/*
!user_data/notebooks
user_data/notebooks/*
!user_data/notebooks/*example.ipynb
freqtrade-plot.html
freqtrade-profit-plot.html
@ -80,7 +83,7 @@ docs/_build/
target/
# Jupyter Notebook
.ipynb_checkpoints
*.ipynb_checkpoints
# pyenv
.python-version
@ -94,4 +97,4 @@ target/
.mypy_cache/
#exceptions
!user_data/noteboks/*example.ipynb
!*.gitkeep

View File

@ -1,30 +1,96 @@
# Analyzing bot data
# Analyzing bot data with Jupyter notebooks
You can analyze the results of backtests and trading history easily using Jupyter notebooks. A sample notebook is located at `user_data/notebooks/analysis_example.ipynb`. For usage instructions, see [jupyter.org](https://jupyter.org/documentation).
You can analyze the results of backtests and trading history easily using Jupyter notebooks. Sample notebooks are located at `user_data/notebooks/`.
*Pro tip - Don't forget to start a jupyter notbook server from within your conda or venv environment or use [nb_conda_kernels](https://github.com/Anaconda-Platform/nb_conda_kernels)*
## Pro tips
## Example snippets
* See [jupyter.org](https://jupyter.org/documentation) for usage instructions.
* Don't forget to start a Jupyter notebook server from within your conda or venv environment, or use [nb_conda_kernels](https://github.com/Anaconda-Platform/nb_conda_kernels).
* Copy the example notebook so your changes don't get clobbered with the next freqtrade update.
## Fine print
Some tasks don't work especially well in notebooks. For example, anything using asynchronous execution is a problem for Jupyter. Also, freqtrade's primary entry point is the shell CLI, so using pure Python in a notebook bypasses arguments that provide required parameters to functions.
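If you do need to run coroutine-based code from a cell, one commonly used workaround (a general Jupyter technique, not something freqtrade provides) is the third-party `nest_asyncio` package, which patches the event loop Jupyter is already running. A minimal sketch:

```python
# Workaround sketch for running asyncio code inside Jupyter.
# Assumes the third-party package is installed: pip install nest_asyncio
import nest_asyncio

# Allow re-entering the event loop that Jupyter already runs,
# so run_until_complete() calls no longer raise RuntimeError.
nest_asyncio.apply()
```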
## Recommended workflow
| Task | Tool |
| --- | --- |
| Bot operations | CLI |
| Repetitive tasks | shell scripts |
| Data analysis & visualization | Notebook |
1. Use the CLI to
* download historical data
* run a backtest
* run with real-time data
* export results
1. Collect these actions in shell scripts
* save complicated commands with arguments
    * execute multi-step operations
    * automate testing strategies and preparing data for analysis
1. Use a notebook to
* import data
* munge and plot to generate insights
## Example utility snippets for Jupyter notebooks
### Change directory to root
Jupyter notebooks execute from the notebook directory. The following snippet searches for the project root, so relative paths remain consistent.
```python
# Change directory
# Modify this cell to ensure that the output shows the correct path.
# Define all paths relative to the project root shown in the cell output
import os
from pathlib import Path
project_root = "somedir/freqtrade"
i=0
try:
    os.chdir(project_root)
    assert Path('LICENSE').is_file()
except (FileNotFoundError, AssertionError):
while i<4 and (not Path('LICENSE').is_file()):
os.chdir(Path(Path.cwd(), '../'))
i+=1
project_root = Path.cwd()
print(Path.cwd())
```
### Watch project for changes to code
This scans the project for changes to code before Jupyter runs cells.
```python
# Reloads local code changes
%load_ext autoreload
%autoreload 2
```
## Load existing objects into a Jupyter notebook
These examples assume that you have already generated data using the CLI. They allow you to drill deeper into results that would otherwise be very difficult to digest due to information overload.
### Load backtest results into a pandas dataframe
```python
from freqtrade.data.btanalysis import load_backtest_data
# Load backtest results
from freqtrade.data.btanalysis import load_backtest_data
df = load_backtest_data("user_data/backtest_data/backtest-result.json")
# Show value-counts per pair
df.groupby("pair")["sell_reason"].value_counts()
```
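Beyond the value counts, the returned dataframe supports normal pandas analysis. The sketch below assumes the usual freqtrade backtest columns such as `profit_percent`; check `df.columns` and adjust the names if your version differs.

```python
# Inspect which columns are available in this freqtrade version
print(df.columns)

# Aggregate trade count and profit per pair (assumes a 'profit_percent' column)
summary = df.groupby("pair")["profit_percent"].agg(["count", "sum", "mean"])
print(summary.sort_values("sum", ascending=False))
```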
This will allow you to drill deeper into your backtest results, and perform analysis which otherwise would make the regular backtest-output very difficult to digest due to information overload.
### Load live trading results into a pandas dataframe
``` python
from freqtrade.data.btanalysis import load_trades_from_db
# Fetch trades from database
from freqtrade.data.btanalysis import load_trades_from_db
df = load_trades_from_db("sqlite:///tradesv3.sqlite")
# Display results
@ -33,42 +99,61 @@ df.groupby("pair")["sell_reason"].value_counts()
### Load multiple configuration files
This option can be usefull to inspect the results of passing in multiple configs in case of problems
This option can be useful to inspect the results of passing in multiple configs
``` python
# Load config from multiple files
from freqtrade.configuration import Configuration
config = Configuration.from_files(["config1.json", "config2.json"])
print(config)
# Show the config in memory
import json
print(json.dumps(config, indent=1))
```
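The merged `config` behaves like a regular dictionary, so individual settings can be read directly. A short sketch; which keys exist depends entirely on your own configuration files:

```python
# Read individual settings from the merged configuration
# (.get() avoids KeyErrors for settings your configs do not define)
print(config.get("max_open_trades"))
print(config.get("stake_currency"))
print(config.get("exchange", {}).get("name"))
```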
## Strategy debugging example
### Load exchange data to a pandas dataframe
This loads candle data to a dataframe
```python
# Load data using values passed to function
from pathlib import Path
from freqtrade.data.history import load_pair_history
ticker_interval = "5m"
data_location = Path('user_data', 'data', 'bittrex')
pair = "BTC_USDT"
candles = load_pair_history(datadir=data_location,
ticker_interval=ticker_interval,
pair=pair)
# Confirm success
print("Loaded " + str(len(candles)) + f" rows of data for {pair} from {data_location}")
display(candles.head())
```
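The loaded candles are a plain OHLCV dataframe, so standard pandas operations apply. A small sketch, assuming the usual `date`, `close` and related columns returned by `load_pair_history`:

```python
# Quick sanity checks on the candle data
print(candles.dtypes)
print(f"Date range: {candles['date'].min()} - {candles['date'].max()}")

# Example derived column: candle-to-candle close change in percent
candles['close_change_pct'] = candles['close'].pct_change() * 100
display(candles[['date', 'close', 'close_change_pct']].tail())
```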
## Strategy debugging example
Debugging a strategy can be time-consuming. FreqTrade offers helper functions to visualize raw data.
### Import requirements and define variables used in analyses
### Define variables used in analyses
You can override strategy settings as demonstrated below.
```python
# Imports
from pathlib import Path
import os
from freqtrade.data.history import load_pair_history
from freqtrade.resolvers import StrategyResolver
# You can override strategy settings as demonstrated below.
# Customize these according to your needs.
# Define some constants
ticker_interval = "5m"
# Name of the strategy class
strategy_name = 'AwesomeStrategy'
strategy_name = 'TestStrategy'
# Path to user data
user_data_dir = 'user_data'
# Location of the strategy
strategy_location = Path(user_data_dir, 'strategies')
# Location of the data
data_location = Path(user_data_dir, 'data', 'binance')
# Pair to analyze
# Only use one pair here
# Pair to analyze - Only use one pair here
pair = "BTC_USDT"
```
@ -76,12 +161,16 @@ pair = "BTC_USDT"
```python
# Load data using values set above
bt_data = load_pair_history(datadir=Path(data_location),
from pathlib import Path
from freqtrade.data.history import load_pair_history
candles = load_pair_history(datadir=data_location,
ticker_interval=ticker_interval,
pair=pair)
# Confirm success
print(f"Loaded {len(bt_data)} rows of data for {pair} from {data_location}")
print("Loaded " + str(len(candles)) + f" rows of data for {pair} from {data_location}")
display(candles.head())
```
### Load and run strategy
@ -90,31 +179,27 @@ print(f"Loaded {len(bt_data)} rows of data for {pair} from {data_location}")
```python
# Load strategy using values set above
from freqtrade.resolvers import StrategyResolver
strategy = StrategyResolver({'strategy': strategy_name,
'user_data_dir': user_data_dir,
'strategy_path': strategy_location}).strategy
# Generate buy/sell signals using strategy
df = strategy.analyze_ticker(bt_data, {'pair': pair})
df = strategy.analyze_ticker(candles, {'pair': pair})
```
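Before digging into the details below, it can help to eyeball the rows where the strategy actually signalled. A quick sketch using the `buy` column that `analyze_ticker()` adds to the dataframe:

```python
# Show the most recent candles on which a buy signal was generated
buy_signals = df[df['buy'] == 1]
print(f"{len(buy_signals)} buy signals out of {len(df)} candles")
display(buy_signals.tail())
```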
### Display the trade details
* Note that using `data.head()` would also work, however most indicators have some "startup" data at the top of the dataframe.
#### Some possible problems
* Columns with NaN values at the end of the dataframe
* Columns used in `crossed*()` functions with completely different units
#### Comparison with full backtest
having 200 buy signals as output for one pair from `analyze_ticker()` does not necessarily mean that 200 trades will be made during backtesting.
Assuming you use only one condition such as, `df['rsi'] < 30` as buy condition, this will generate multiple "buy" signals for each pair in sequence (until rsi returns > 29).
The bot will only buy on the first of these signals (and also only if a trade-slot ("max_open_trades") is still available), or on one of the middle signals, as soon as a "slot" becomes available.
* Some possible problems
* Columns with NaN values at the end of the dataframe
* Columns used in `crossed*()` functions with completely different units
* Comparison with full backtest
    * Having 200 buy signals as output for one pair from `analyze_ticker()` does not necessarily mean that 200 trades will be made during backtesting.
    * Assuming you use only one condition such as `df['rsi'] < 30` as a buy condition, this will generate multiple "buy" signals for each pair in sequence (until rsi returns > 29). The bot will only buy on the first of these signals (and also only if a trade-slot ("max_open_trades") is still available), or on one of the middle signals, as soon as a "slot" becomes available.
```python
# Report results
print(f"Generated {df['buy'].sum()} buy signals")
data = df.set_index('date', drop=True)

View File

@ -0,0 +1,289 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Analyzing bot data with Jupyter notebooks \n",
"\n",
"You can analyze the results of backtests and trading history easily using Jupyter notebooks. A sample notebook is located at `user_data/notebooks/analysis_example.ipynb`. \n",
"\n",
"## Pro tips \n",
"\n",
"* See [jupyter.org](https://jupyter.org/documentation) for usage instructions.\n",
"* Don't forget to start a jupyter notbook server from within your conda or venv environment or use [nb_conda_kernels](https://github.com/Anaconda-Platform/nb_conda_kernels)*\n",
"* Copy the example notebook so your changes don't get clobbered with the next freqtrade update.\n",
"\n",
"## Fine print \n",
"\n",
"Some tasks don't work especially well in notebooks. For example, anything using asyncronous exectution is a problem for Jupyter. Also, freqtrade's primary entry point is the shell cli, so using pure python in a notebook bypasses arguments that provide required parameters to functions.\n",
"\n",
"## Recommended workflow \n",
"\n",
"| Task | Tool | \n",
" --- | --- \n",
"Bot operations | CLI \n",
"Repetative tasks | shell scripts\n",
"Data analysis & visualization | Notebook \n",
"\n",
"1. Use the CLI to\n",
" * download historical data\n",
" * run a backtest\n",
" * run with real-time data\n",
" * export results \n",
"\n",
"1. Collect these actions in shell scripts\n",
" * save complicated commands with arguments\n",
" * execute mult-step operations \n",
" * automate testing strategies and prepareing data for analysis\n",
"\n",
"1. Use a notebook to\n",
" * import data\n",
" * munge and plot to generate insights"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example utility snippets for Jupyter notebooks"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Change directory to root \n",
"\n",
"Jupyter notebooks execute from the notebook directory. The following snippet searches for the project root, so relative paths remain consistant."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Change directory\n",
"# Modify this cell to insure that the output shows the correct path.\n",
"# Define all paths relative to the project root shown in the cell output\n",
"import os\n",
"from pathlib import Path\n",
"\n",
"project_root = \"somedir/freqtrade\"\n",
"i=0\n",
"try:\n",
" os.chdirdir(project_root)\n",
" assert Path('LICENSE').is_file()\n",
"except:\n",
" while i<4 and (not Path('LICENSE').is_file()):\n",
" os.chdir(Path(Path.cwd(), '../'))\n",
" i+=1\n",
" project_root = Path.cwd()\n",
"print(Path.cwd())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Watch project for changes to code\n",
"This scans the project for changes to code before Jupyter runs cells."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Reloads local code changes\n",
"%load_ext autoreload\n",
"%autoreload 2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load existing objects into a Jupyter notebook\n",
"\n",
"These examples assume that you have already generated data using the cli. These examples will allow you to drill deeper into your results, and perform analysis which otherwise would make the output very difficult to digest due to information overload."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load backtest results into a pandas dataframe"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load backtest results\n",
"from freqtrade.data.btanalysis import load_backtest_data\n",
"df = load_backtest_data(\"user_data/backtest_data/backtest-result.json\")\n",
"\n",
"# Show value-counts per pair\n",
"df.groupby(\"pair\")[\"sell_reason\"].value_counts()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load live trading results into a pandas dataframe"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fetch trades from database\n",
"from freqtrade.data.btanalysis import load_trades_from_db\n",
"df = load_trades_from_db(\"sqlite:///tradesv3.sqlite\")\n",
"\n",
"# Display results\n",
"df.groupby(\"pair\")[\"sell_reason\"].value_counts()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load multiple configuration files\n",
"This option can be useful to inspect the results of passing in multiple configs"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load config from multiple files\n",
"from freqtrade.configuration import Configuration\n",
"config = Configuration.from_files([\"config1.json\", \"config2.json\"])\n",
"\n",
"# Show the config in memory\n",
"import json\n",
"print(json.dumps(config, indent=1))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load exchange data to a pandas dataframe\n",
"\n",
"This loads candle data to a dataframe"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# Load data using values passed to function\n",
"from pathlib import Path\n",
"from freqtrade.data.history import load_pair_history\n",
"\n",
"ticker_interval = \"5m\"\n",
"data_location = Path('user_data', 'data', 'bitrex')\n",
"pair = \"BTC_USDT\"\n",
"candles = load_pair_history(datadir=data_location,\n",
" ticker_interval=ticker_interval,\n",
" pair=pair)\n",
"\n",
"# Confirm success\n",
"print(\"Loaded \" + str(len(candles)) + f\" rows of data for {pair} from {data_location}\")\n",
"display(candles.head())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Feel free to submit an issue or Pull Request enhancing this document if you would like to share ideas on how to best analyze the data."
]
}
],
"metadata": {
"file_extension": ".py",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
},
"mimetype": "text/x-python",
"name": "python",
"npconvert_exporter": "python",
"pygments_lexer": "ipython3",
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"varInspector": {
"cols": {
"lenName": 16,
"lenType": 16,
"lenVar": 40
},
"kernels_config": {
"python": {
"delete_cmd_postfix": "",
"delete_cmd_prefix": "del ",
"library": "var_list.py",
"varRefreshCmd": "print(var_dic_list())"
},
"r": {
"delete_cmd_postfix": ") ",
"delete_cmd_prefix": "rm(",
"library": "var_list.r",
"varRefreshCmd": "cat(var_dic_list()) "
}
},
"types_to_exclude": [
"module",
"function",
"builtin_function_or_method",
"instance",
"_Feature"
],
"window_display": false
},
"version": 3
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -1,96 +1,5 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Analyzing bot data\n",
"\n",
"You can analyze the results of backtests and trading history easily using Jupyter notebooks. \n",
"**Copy this file so your changes don't get clobbered with the next freqtrade update!** \n",
"For usage instructions, see [jupyter.org](https://jupyter.org/documentation). \n",
"*Pro tip - Don't forget to start a jupyter notbook server from within your conda or venv environment or use [nb_conda_kernels](https://github.com/Anaconda-Platform/nb_conda_kernels)*\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Imports\n",
"from pathlib import Path\n",
"import os\n",
"from freqtrade.data.history import load_pair_history\n",
"from freqtrade.resolvers import StrategyResolver\n",
"from freqtrade.data.btanalysis import load_backtest_data\n",
"from freqtrade.data.btanalysis import load_trades_from_db"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Change directory\n",
"# Define all paths relative to the project root shown in the cell output\n",
"try:\n",
" os.chdir(Path(Path.cwd(), '../..'))\n",
" print(Path.cwd())\n",
"except:\n",
" pass"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example snippets"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load backtest results into a pandas dataframe"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load backtest results\n",
"df = load_backtest_data(\"user_data/backtest_data/backtest-result.json\")\n",
"\n",
"# Show value-counts per pair\n",
"df.groupby(\"pair\")[\"sell_reason\"].value_counts()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load live trading results into a pandas dataframe"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fetch trades from database\n",
"df = load_trades_from_db(\"sqlite:///tradesv3.sqlite\")\n",
"\n",
"# Display results\n",
"df.groupby(\"pair\")[\"sell_reason\"].value_counts()"
]
},
{
"cell_type": "markdown",
"metadata": {},
@ -104,7 +13,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Import requirements and define variables used in analyses"
"## Setup"
]
},
{
@ -113,28 +22,51 @@
"metadata": {},
"outputs": [],
"source": [
"# Change directory\n",
"# Modify this cell to insure that the output shows the correct path.\n",
"# Define all paths relative to the project root shown in the cell output\n",
"import os\n",
"from pathlib import Path\n",
"\n",
"project_root = \"somedir/freqtrade\"\n",
"i=0\n",
"try:\n",
" os.chdirdir(project_root)\n",
" assert Path('LICENSE').is_file()\n",
"except:\n",
" while i<4 and (not Path('LICENSE').is_file()):\n",
" os.chdir(Path(Path.cwd(), '../'))\n",
" i+=1\n",
" project_root = Path.cwd()\n",
"print(Path.cwd())\n",
"\n",
"# Reloads local code changes\n",
"%load_ext autoreload\n",
"%autoreload 2"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Customize these according to your needs.\n",
"\n",
"# Define some constants\n",
"ticker_interval = \"5m\"\n",
"# Name of the strategy class\n",
"strategy_name = 'AwesomeStrategy'\n",
"strategy_name = 'TestStrategy'\n",
"# Path to user data\n",
"user_data_dir = 'user_data'\n",
"# Location of the strategy\n",
"strategy_location = Path(user_data_dir, 'strategies')\n",
"# Location of the data\n",
"data_location = Path(user_data_dir, 'data', 'binance')\n",
"# Pair to analyze \n",
"# Only use one pair here\n",
"# Pair to analyze - Only use one pair here\n",
"pair = \"BTC_USDT\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load exchange data"
]
},
{
"cell_type": "code",
"execution_count": null,
@ -142,35 +74,43 @@
"outputs": [],
"source": [
"# Load data using values set above\n",
"bt_data = load_pair_history(datadir=Path(data_location),\n",
"from pathlib import Path\n",
"from freqtrade.data.history import load_pair_history\n",
"\n",
"candles = load_pair_history(datadir=data_location,\n",
" ticker_interval=ticker_interval,\n",
" pair=pair)\n",
"\n",
"# Confirm success\n",
"print(\"Loaded \" + str(len(bt_data)) + f\" rows of data for {pair} from {data_location}\")"
"print(\"Loaded \" + str(len(candles)) + f\" rows of data for {pair} from {data_location}\")\n",
"display(candles.head())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load and run strategy\n",
"## Load and run strategy\n",
"* Rerun each time the strategy file is changed"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# Load strategy using values set above\n",
"from freqtrade.resolvers import StrategyResolver\n",
"strategy = StrategyResolver({'strategy': strategy_name,\n",
" 'user_data_dir': user_data_dir,\n",
" 'strategy_path': strategy_location}).strategy\n",
"\n",
"# Generate buy/sell signals using strategy\n",
"df = strategy.analyze_ticker(bt_data, {'pair': pair})"
"df = strategy.analyze_ticker(candles, {'pair': pair})\n",
"df.tail()"
]
},
{
@ -178,19 +118,14 @@
"metadata": {},
"source": [
"### Display the trade details\n",
"\n",
"* Note that using `data.head()` would also work, however most indicators have some \"startup\" data at the top of the dataframe.\n",
"\n",
"#### Some possible problems\n",
"\n",
"* Columns with NaN values at the end of the dataframe\n",
"* Columns used in `crossed*()` functions with completely different units\n",
"\n",
"#### Comparison with full backtest\n",
"\n",
"having 200 buy signals as output for one pair from `analyze_ticker()` does not necessarily mean that 200 trades will be made during backtesting.\n",
"\n",
"Assuming you use only one condition such as, `df['rsi'] < 30` as buy condition, this will generate multiple \"buy\" signals for each pair in sequence (until rsi returns > 29).\n",
"The bot will only buy on the first of these signals (and also only if a trade-slot (\"max_open_trades\") is still available), or on one of the middle signals, as soon as a \"slot\" becomes available.\n"
"* Some possible problems\n",
" * Columns with NaN values at the end of the dataframe\n",
" * Columns used in `crossed*()` functions with completely different units\n",
"* Comparison with full backtest\n",
" * having 200 buy signals as output for one pair from `analyze_ticker()` does not necessarily mean that 200 trades will be made during backtesting.\n",
" * Assuming you use only one condition such as, `df['rsi'] < 30` as buy condition, this will generate multiple \"buy\" signals for each pair in sequence (until rsi returns > 29). The bot will only buy on the first of these signals (and also only if a trade-slot (\"max_open_trades\") is still available), or on one of the middle signals, as soon as a \"slot\" becomes available. \n"
]
},
{
@ -236,6 +171,48 @@
"name": "python",
"npconvert_exporter": "python",
"pygments_lexer": "ipython3",
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"varInspector": {
"cols": {
"lenName": 16,
"lenType": 16,
"lenVar": 40
},
"kernels_config": {
"python": {
"delete_cmd_postfix": "",
"delete_cmd_prefix": "del ",
"library": "var_list.py",
"varRefreshCmd": "print(var_dic_list())"
},
"r": {
"delete_cmd_postfix": ") ",
"delete_cmd_prefix": "rm(",
"library": "var_list.r",
"varRefreshCmd": "cat(var_dic_list()) "
}
},
"types_to_exclude": [
"module",
"function",
"builtin_function_or_method",
"instance",
"_Feature"
],
"window_display": false
},
"version": 3
},
"nbformat": 4,