# Adaptive Trend Following Trading Strategy based on Renko

In this article I’m going to show how to create an algorithmic trading strategy in Python. The strategy builds on my original research from a previous article. The article consists of the following parts:

- Concept
- Algorithm description
- Trading strategy development
- Backtesting and analyzing the result
- Further problems discussion
- Conclusions

# Concept

Financial time series have a high level of noise. It would be useful to have a way to reduce that noise. In this article I propose using Renko brick size optimization. The key idea of the approach is to quantify the quality of a Renko chart and find an optimal brick size to use in trading. If you are not familiar with Renko charts, it is better to follow the link to that article first.

The optimization of this quality over time is called *“adaptivity”* in this context.

# Algorithm description

This trading strategy is a typical trend-following one, but the following is based on the direction of the last Renko brick, not on price or a moving average. The basic steps are:

1. If the Renko chart is empty, build the chart using brick size optimization. Adaptivity implies that the volatility level may matter for the optimization process. In this example, the optimal brick size is searched within the *IQR* (interquartile range) of absolute price changes (e.g. daily) for the last *N* days. You can also choose any other range for the optimization (based on the ATR indicator, a fixed value, or a percentage of the last price).

2. If the Renko chart is not empty, get the last market price and add it to the chart. If no new brick is built, pass to the next iteration. Otherwise, if the new brick follows the current direction, part of the current position should be covered (*0–100%*). If the new brick is built in the opposite direction, the current position should be closed, because this means the trend has changed, and the Renko chart should be emptied.

3. Repeat these steps as long as price data continues to arrive.
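The steps above can be sketched in plain Python. This is a simplified illustration, not the pyrenko code used later: the helper names (`build_renko`, `optimal_brick_size`) are stand-ins, the reversal rule is simplified (a real Renko chart typically requires a two-brick move to reverse), and the quality score here is a simple proxy for the one from the original research.

```python
# Minimal sketch of the algorithm's core: Renko building plus
# IQR-bounded brick size search. Helper names are illustrative.
import numpy as np

def build_renko(prices, brick_size):
    """Return the list of brick directions (+1 up, -1 down) for a price series."""
    bricks = []
    anchor = prices[0]
    for p in prices[1:]:
        diff = p - anchor
        while abs(diff) >= brick_size:
            direction = 1 if diff > 0 else -1
            bricks.append(direction)
            anchor += direction * brick_size
            diff = p - anchor
    return bricks

def optimal_brick_size(prices, n_candidates=20):
    """Search candidate brick sizes inside the IQR of absolute price changes."""
    abs_changes = np.abs(np.diff(prices))
    low, high = np.percentile(abs_changes, [25, 75])  # the IQR bounds
    low = max(low, 1e-8)                              # guard against a zero brick

    def score(size):
        # Simple quality proxy (an assumption): more bricks, fewer reversals.
        bricks = build_renko(prices, size)
        if len(bricks) < 2:
            return -np.inf
        reversals = sum(a != b for a, b in zip(bricks, bricks[1:]))
        return len(bricks) - 2 * reversals

    return max(np.linspace(low, high, n_candidates), key=score)
```

For example, `build_renko([100, 101, 103, 102, 106, 104, 108], 2.0)` yields bricks `[1, 1, 1, -1, 1, 1]`: three up-bricks, one reversal, then two more up-bricks.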

# Trading strategy development

I will use the Catalyst framework to develop the trading strategy. Instructions on how to install the framework, along with a few examples, can be found on its website.

Catalyst is an algorithmic trading library for crypto-assets written in Python. It allows trading strategies to be easily expressed and backtested against historical data (with daily and minute resolution), providing analytics and insights regarding a particular strategy’s performance.

Basically, a Catalyst script consists of a few parts: *initialize*, *handle_data*, *analyze*, and the *run_algorithm* call. Let’s code the algorithm.

First of all, the required libraries should be specified; the *pyrenko* module can be found on GitHub.

Some information from the tutorial:

> Every `catalyst` algorithm consists of at least two functions you have to define:
>
> `initialize(context)`
>
> `handle_data(context, data)`
>
> Before the start of the algorithm, `catalyst` calls the `initialize()` function and passes in a `context` variable. `context` is a persistent namespace for you to store variables you need to access from one algorithm iteration to the next. After the algorithm has been initialized, `catalyst` calls the `handle_data()` function on each iteration, that’s once per day (daily) or once every minute (minute), depending on the frequency we choose to run our simulation. On every iteration, `handle_data()` passes the same `context` variable and an event-frame called `data`, containing the current trading bar with open, high, low, and close (OHLC) prices as well as volume for each crypto asset in your universe.
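To make this flow concrete, here is a framework-agnostic toy version of the contract the tutorial describes, with a small driver standing in for Catalyst’s engine:

```python
# Illustration of the initialize/handle_data contract, with a toy driver
# standing in for the Catalyst engine. Not Catalyst code.
class Context:
    """A bare namespace, like Catalyst's persistent `context`."""
    pass

def initialize(context):
    context.bars_seen = 0          # state stored here survives across iterations

def handle_data(context, data):
    context.bars_seen += 1         # called once per bar (day or minute)
    context.last_close = data['close']

def run_toy_backtest(bars):
    context = Context()
    initialize(context)            # called once, before the first bar
    for bar in bars:               # called on every iteration
        handle_data(context, bar)
    return context
```

Running `run_toy_backtest([{'close': 1.0}, {'close': 1.2}])` leaves `context.bars_seen == 2`, showing how state written in one iteration is visible in the next.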

Our *initialize* function looks like this:

We work with the *ETH/BTC* crypto pair. The basic timeframe is hourly (*60T*). The Renko chart uses *15* days of data (*15 × 24* hours). We cover *16.6%* of the position amount after each new brick in the current direction. The commission is similar to the commission on the Bitfinex exchange. We also use a slippage value, which makes the simulation closer to how it would behave in live trading.
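The original *initialize* is embedded in the article as a gist; a hedged reconstruction from the description above might look like the following. The field names on `context`, and the `set_commission`/`set_slippage` argument names, are assumptions to check against the Catalyst docs:

```python
# Hypothetical reconstruction of `initialize` from the parameters above.
CANDLE_SIZE = '60T'        # hourly candles
HISTORY_DAYS = 15          # 15 * 24 hourly bars of history
COVER_RATIO = 0.166        # part of the position covered per same-direction brick

def initialize(context):
    from catalyst.api import symbol  # imported lazily so the sketch stands alone
    context.asset = symbol('eth_btc')
    context.timeframe = CANDLE_SIZE
    context.n_history = HISTORY_DAYS * 24
    context.cover_ratio = COVER_RATIO
    context.model = None   # the pyrenko chart; empty until the first optimization
    # Bitfinex-like fees and a slippage value (argument names and numbers
    # are assumptions, check the Catalyst documentation):
    context.set_commission(maker=0.001, taker=0.002)
    context.set_slippage(slippage=0.0005)
```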

The general logic of the algorithm is in the *handle_data* function:

First, we check whether the model is empty; if so, we get the data, calculate the IQR, optimize the brick size, build the Renko chart, and open an order. Otherwise, we get the last price and feed it to the Renko chart, then check how many new bricks were built and the direction of the last brick. Each block of the code contains a comment, which helps you match the code to the algorithm.
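The branching just described can be sketched as below. This is a hedged outline, not the article’s gist: `build_optimized_renko` and the `add_price` return shape are stand-ins for the real pyrenko API, and the sketch is long-only for simplicity (Catalyst does not support margin trading, as discussed later):

```python
# Hedged sketch of the handle_data branching; pyrenko's real API differs.
def build_optimized_renko(history):
    """Placeholder: IQR-based brick-size optimization + chart build (see pyrenko)."""
    raise NotImplementedError

def handle_data(context, data):
    from catalyst.api import order_target_percent, record  # lazy import: sketch only
    price = data.current(context.asset, 'close')

    if context.model is None:
        # Empty chart: optimize the brick size on recent history, rebuild, open.
        history = data.history(context.asset, 'close',
                               bar_count=context.n_history,
                               frequency=context.timeframe)
        context.model = build_optimized_renko(history)
        context.position = 1.0
        order_target_percent(context.asset, context.position)
    else:
        # Feed the last price; react only if new bricks were built.
        n_new, last_direction = context.model.add_price(price)  # stand-in API
        if n_new == 0:
            pass                           # no new brick: wait for more data
        elif last_direction == context.model.trend_direction:
            # Same direction: cover a part of the position per new brick.
            context.position = max(context.position - context.cover_ratio * n_new, 0.0)
            order_target_percent(context.asset, context.position)
        else:
            # Opposite brick: the trend changed, close everything and reset.
            order_target_percent(context.asset, 0.0)
            context.model = None
    record(price=price)                    # stored into `perf` for analyze()
```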

Additional information is passed using the *record* function. This information is used in the *analyze* function, which runs after the algorithm finishes. In this function we can draw graphs, calculate the performance of the algorithm, etc. The *perf* variable contains basic performance information, as well as the information we added using the *record* function.

The last part of the script is the *run_algorithm* call, which specifies the backtesting period, the capital, the cryptocurrency, and the names of the functions described above.

In this example, we work with a daily data feed (the *data_frequency* parameter).
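A hedged sketch of that call is below. The parameter names follow the Catalyst documentation (older versions use `base_currency` instead of `quote_currency`), while the dates, capital, and namespace string are placeholders, not the article’s values:

```python
# Sketch of the run_algorithm call. Imports are deferred so the sketch
# can be read without Catalyst installed; calling main() runs the backtest.
def main():
    import pandas as pd
    from catalyst import run_algorithm

    run_algorithm(
        capital_base=1.0,                  # starting capital, in the quote currency
        data_frequency='daily',            # daily feed, as in this example
        initialize=initialize,             # the functions defined above
        handle_data=handle_data,
        analyze=analyze,
        exchange_name='bitfinex',
        quote_currency='btc',              # `base_currency` in older Catalyst versions
        algo_namespace='renko_trend_following',
        start=pd.to_datetime('2018-01-01', utc=True),   # placeholder dates
        end=pd.to_datetime('2018-12-31', utc=True),
    )
```

Before the first run, the exchange data has to be ingested (per the Catalyst docs, something like `catalyst ingest-exchange -x bitfinex -i eth_btc`); the script itself is then executed with the Python interpreter.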

# Backtesting and analyzing the result

Let’s run our script in the Catalyst environment with the following command:

We get something like this:

Basic metrics can be found in the terminal output; these are the metrics we print in the *analyze* function. The total return of the algorithm is *252.55%* with a *-18.74%* maximum drawdown. This is not bad for almost one year. You can use the Sortino ratio to compare different algorithms; I considered this metric in this article. Beta is very close to 0 and Alpha is positive, which means that our algorithm is market-neutral and we beat the benchmark. If you are not familiar with these metrics, I recommend this article.

The blue line on the first graph is the equity of the algorithm; the red line is the equity of the benchmark (the *ETH/BTC* asset). The second graph contains the price of *ETH/BTC* (grey) and the Renko price (yellow). The brick size is shown on the third graph (blue); the vertical red lines mark the moments when the Renko chart was rebuilt. The number of created Renko bricks is shown on the fourth graph, and the position amount on the fifth. The last graph contains the drawdown.

Let’s extract additional information from our result. The further analysis uses the *perf* variable exported in *CSV* format. I use the *pyfolio* library for this purpose.

It is a Python library for performance and risk analysis of financial portfolios developed by Quantopian Inc. It works well with the Zipline open source backtesting library.

First of all, let’s draw the returns of the algorithm and compare its equity with the benchmark:
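A sketch of the pyfolio calls behind this analysis is below. It assumes pyfolio is installed, that `perf.csv` is the exported *perf* frame, and that the per-period strategy returns live in its `returns` column (the column name Zipline-style perf frames use):

```python
# Sketch of the pyfolio-based analysis; imports are deferred so the
# function can be defined without pyfolio installed.
def analyze_with_pyfolio(perf_csv='perf.csv'):
    import pandas as pd
    import pyfolio as pf

    perf = pd.read_csv(perf_csv, index_col=0, parse_dates=True)
    returns = perf['returns']              # per-period strategy returns

    pf.show_perf_stats(returns)            # the extended summary statistics table
    pf.plot_rolling_returns(returns)       # cumulative returns over time
    pf.plot_drawdown_periods(returns)      # the worst drawdown periods
    pf.plot_monthly_returns_dist(returns)  # distribution of monthly returns
```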

The summary statistics contain basic metrics, some of which we already got in the output of the *analyze* function. This variant is more extensive; metrics such as *Daily value at risk* or *Annual volatility* can be very useful when evaluating a strategy.

The next graphs describe the drawdown of our strategy:

Drawdown is one of the key metrics for estimating the reliability of a strategy; reaching a critical drawdown level can also be a trigger to re-optimize the strategy.

These graphs describe our returns from different angles: the distribution of monthly returns and box plots of returns for different timeframes (daily, weekly, monthly):

Let’s look at the volatility of the algorithm as a monthly moving average:

A decreasing (e.g. negative) Sharpe ratio can also be a trigger for the re-optimization process in the strategy’s lifetime pipeline:
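These rolling diagnostics can also be computed directly with pandas. The sketch below uses a 30-period window as a stand-in for “monthly,” 365 periods per year (crypto trades every day), and a zero risk-free rate:

```python
# Rolling volatility and Sharpe ratio for a daily return series.
# Window and annualization factor are assumptions stated above.
import numpy as np
import pandas as pd

def rolling_volatility(returns, window=30, periods_per_year=365):
    """Annualized moving-average volatility of a return series."""
    return returns.rolling(window).std() * np.sqrt(periods_per_year)

def rolling_sharpe(returns, window=30, periods_per_year=365):
    """Annualized rolling Sharpe ratio, risk-free rate assumed zero."""
    mean = returns.rolling(window).mean() * periods_per_year
    return mean / rolling_volatility(returns, window, periods_per_year)
```

A re-optimization trigger can then be as simple as `rolling_sharpe(returns).iloc[-1] < 0`.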

# Further problems discussion

Creating a reliable algorithmic trading strategy is a difficult process that includes several steps. A good general trading idea is a necessary, but not sufficient, condition. I suggest thinking about these problems to get a stable and reliable strategy:

- Try using minute data resolution to take intraday movements into consideration. Currently, the algorithm uses daily resolution only, which means we lose data and price movements.
- Change market orders to limit orders. This will reduce commissions, because the maker fee is lower than the taker fee; on some exchanges the maker fee is even a *rebate*, which is a kind of reward.
- Carry out many experiments with different assets to create a reliable portfolio of assets and tune the money management between them.
- Develop and follow a re-optimization and forwarding rule to determine the moment when some parameters of the model should be changed (length of history, cover ratio, timeframe, etc.). This rule includes the frequency of optimization, the time periods for the optimization and forwarding (or walk-forward) processes, and the minimal metric requirements for accepting the algorithm as working.
- Develop or choose an execution framework to run the algorithm in production mode. Even if you have a reliable trading strategy that has been approved by tons of backtests, you can fail in live trading, because you will run into many errors and imperfections in the infrastructure (inside or outside of your ecosystem). For example, you can use Catalyst in backtesting mode, but you can’t use it in production for this algorithm, because Catalyst doesn’t currently support trading on a margin account.

# Conclusions

- We created an algorithmic trading strategy based on theoretical research. The algorithm tries to adapt to the volatility level, reduce noise, and follow the trend.
- The algorithm shows a positive result. We demonstrated different performance metrics and graphs.
- We gave advice on how to improve this research.
- The source code is available on GitHub (the Catalyst script and an IPython notebook for advanced analytics).

An example of the optimization of this strategy is available as well.

Best regards,