The incentives program is a core feature of Osmosis. Every day, approximately 547,945.20 Osmos are distributed among Liquidity Providers (LPers), Stakers, Developers, and the Osmosis Community Pool. Forty-five percent of those 547,945.20 Osmos, about 246,575.34 Osmos, go to LPers. Of those 246,575.34 Osmos, 70% are sent to the Community Pool. Working with the Osmosis Grants Program (OGP) and Chaos Labs, Hathor Nodes has analyzed the current incentives program and implemented a new process to calculate LP incentives. This document walks through the analysis done by Hathor Nodes and outlines recommendations on a variety of topics. A separate summary of the final results can be found here.
Ultimately, the community can discuss and decide which recommendations to take and if there’s any further research they’d like to see done. For the sake of accessibility, this document is written with a broad audience in mind. That being said, if you read this report and have some questions, please feel free to ask me!
For the sake of reproducibility, queries, formulas, and example code will be provided along the way. Unless shown otherwise, queries are made using Flipside Crypto’s SDK and data. You can read and learn about their data here. In future phases of this work, Hathor Nodes will work w/ Chaos Labs to make live versions of key data sets publicly available.
As the numbers are constantly changing, values referenced in the document may not match the code outputs.
This section contains code used to generate results and queries used in the research paper. Non-technical readers can disregard this section.
import json
import math
from datetime import datetime
from typing import Any, Collection, Sequence, Mapping, Text
import numpy as np
import pandas as pd
import plotly
import plotly.express as px
import plotly.graph_objects as go
import requests
import scipy
from google.protobuf.json_format import MessageToDict
from plotly.subplots import make_subplots
from shroomdk import ShroomDK
from aws_client import get_secret_value
# Set up SDK, node rest API, and formatting options.
sdk = ShroomDK(get_secret_value('shroomdk_api_key')['shroomdk_api_key'])
OSMO_LCD_ENDPOINT = 'https://osmosis-mainnet-rpc.allthatnode.com:1317'
pd.options.display.float_format = '{:,}'.format
plotly.graph_objects.Layout(font={'family': 'Roboto'})
# Helper functions
def get_incentivized_pools(
lcd_endpoint: Text = OSMO_LCD_ENDPOINT) -> Collection[int]:
"""Get list of pools w/ internal incentives."""
incentivized_gauges = requests.get(
f'{lcd_endpoint}/osmosis/pool-incentives/v1beta1/incentivized_pools',
timeout=60).json()['incentivized_pools']
return {int(gauge['pool_id']) for gauge in incentivized_gauges}
incentivized_pools = get_incentivized_pools()
To optimize LP incentives, let us first review the purpose of Liquidity Pools (LPs) and the purpose of providing incentives. Liquidity Pools offer end-users the ability to exchange tokens at a predefined rate. At the time of writing, Osmosis only incentivizes Balancer Pools. Through 2023, it is expected that Stableswap and Concentrated Liquidity LPs will be incentivized as well. To account for that, the Incentives Program will be designed such that its criteria and metrics are extendable to different LP types.
The primary factor in User Experience (UX) when using an exchange is the slippage paid to make trades. Users wish to maximize the value of their trades, so they prefer exchanges w/ lower slippage & fees. Minimizing slippage requires deep liquidity in Balancer LPs. Liquidity comes from LPers who believe the swap fees and incentives they receive will provide a positive ROI compared to alternative investments.
LPers are compensated in the form of swap fees. Swap fees are predefined at each LP's creation, though they can be modified afterwards. The vast majority of incentivized LPs have swap fees set at 0.2% of the traded token.
If LPers already receive swap fees, why do they need additional incentives? The issue with relying on just swap fees is two-fold. First, when an LP is first created, it will have fairly low liquidity. This in turn causes high slippage for most users, which discourages the LP's usage. Without volume, swap fees are low, which discourages providing liquidity. Second, even if an LP has sufficient initial liquidity to draw volume, there isn't a one-to-one correlation between volume and required liquidity. This is because slippage is based on the size of trades relative to the liquidity in LPs.
As an example, imagine a USDC/OSMO LP in two different scenarios. In scenario A, a single swap occurs where a user trades 1,000,000 USDC for their Osmo equivalent. In scenario B, a million swaps occur over the course of a day. In each swap, users exchange 1 USDC for its Osmo equivalent. For scenario A's LP to provide the same slippage to users as scenario B's LP, it would need significantly higher liquidity. Both LPs have the same volume, and thus the same swap fees generated, but they have different liquidity needs.
This is where incentives come in. Incentives allow the protocol to draw liquidity to LPs whose swap fees are not high enough to sustain required liquidity levels. Incentives come at a cost though.
There are two limitations w/ using Osmo to incentivize the LP program.
One, Osmo incentives come from an inflationary process. Every day, new Osmos are minted and distributed. Excessive incentive emissions can reduce the purchasing power of an Osmo. This would discourage investors from providing liquidity and increase the Osmo emissions necessary to maintain liquidity levels.
Two, the Osmo is a fixed-supply coin. Eventually, there will be no more newly minted Osmos, and further incentives will have to come from alternative sources, whether that's external incentives from other protocols or the protocol's community pool.
Thus the best course of action is to minimize the number of Osmos given to LPers. This allows us to frame the incentives program as an optimization problem, with an analytical solution.
We can now frame the incentives program as an optimization problem. Typically, optimization problems have a single objective function, which is a number we wish to either maximize or minimize. Then, constraints are defined. Constraints can be thought of as requirements that we must adhere to. In our case the number of Osmos given to LPers is our objective function, and its constraints can be defined by liquidity levels required for each individual LP to meet slippage requirements.
For each LP, minimize $ Incentives(LP) $, which is the number of Osmos given to LPers in a given LP.
Constraints can be formulated as the statement:
For a swap of size i, the user will experience no more than S slippage.
This statement allows us to define slippage expectations at varying swap sizes. It also allows us to tie slippage back to liquidity, as a key factor of slippage in a trade is the relative size of the trade to the LP’s liquidity.
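Written schematically (one informal way to state the problem, using only the terms defined above):
$$ \min \sum_{L}{Incentives(L)} \quad s.t. \quad Slippage_L(i) \le S \;\; \forall \; L, i $$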
With a high level framework for the problem, we can investigate the varying pieces of the puzzle until a solution is found. In the following sections of this paper, we will dive into concrete numbers for these constraints, related tokenomic concerns, the question of which LPs to incentivize, and an example solution for this problem – i.e. a set of target liquidity numbers for Osmosis LPs.
Recall our generic constraint statement: For a swap of size i, the user will experience no more than S slippage. To set constraints, we must identify a reasonable slippage level for users. As swaps occur in a variety of sizes, multiple constraints may be needed to accommodate retail and whale traders.
Before we can assess slippage expectations, we need to flesh out the relationship between swap size, liquidity, and slippage. Using Balancer Pools as an example, we will demonstrate this relationship. Slippage in a Balancer pool can be estimated w/ the following equation:
Let S = Slippage in a trade.
Let x = Number of tokens in the LP.
Let d = Number of tokens being traded.
Note that x and d are the same token. I.e. when trading ATOMs for OSMOs in LP #1, we look at the number of ATOMs being swapped and the number of ATOMs in the LP.
$$ S = 1 - (\frac{x}{x+d})^2 $$This formula does not account for swap fees. Swap fees are typically taken out prior to a trade being executed, so we can account for them as follows:
Let f = Swap fees paid for the transaction.
$$ S = \frac{1 - (\frac{x}{x+d})^2}{1 - f} $$This equation defines slippage as a function of liquidity, swap fees, and swap size. For our use case, we'd like an equation that defines (healthy) liquidity based on a given slippage, fees, and swap size. Let's rearrange the equation:
$$ S = \frac{1 - (\frac{x}{x+d})^2}{1 - f} $$$$ (1-f)S = 1 - (\frac{x}{x + d})^2 $$$$ (1-f)S - 1 = -(\frac{x}{x + d})^2 $$$$ 1 - (1-f)S = (\frac{x}{x + d})^2 $$$$ \sqrt{1 - (1 - f)S} = \frac{x}{x + d} $$$$ x = (x+d)\sqrt{1 - (1 - f)S} $$$$ x = x\sqrt{1 - (1 - f)S} + d\sqrt{1 - (1 - f)S} $$$$ x - x\sqrt{1 - (1 - f)S} = d\sqrt{1 - (1 - f)S} $$$$ x(1 -\sqrt{1 - (1 - f)S}) = d\sqrt{1 - (1 - f)S} $$$$ x = d \frac{\sqrt{1 - (1-f)S}}{1 - \sqrt{1 - (1-f)S}} $$We can simplify this equation a bit by defining a swap multiplier function, $ M(f, S) $, which is the constant factor multiplying d.
$$ M(f, S) = \frac{\sqrt{1 - (1-f)S}}{1 - \sqrt{1 - (1-f)S}} $$$$ x = M(f, S) * d $$To note that x and d are token-specific values, we can write them as $ x_t $ and $ d_t $.
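As a quick sanity check on the algebra (an illustrative snippet; the fee, slippage target, and swap size are made up), plugging $ x = M(f, S) * d $ back into the slippage equation should recover S:
import math
# Hypothetical inputs: 0.2% swap fee, 1% slippage target, 1,000-token swap.
f, S, d = 0.002, 0.01, 1_000
m = math.sqrt(1 - (1 - f) * S) / (1 - math.sqrt(1 - (1 - f) * S))
x = m * d
# Recompute slippage with the required liquidity x; prints ~0.01.
print(round((1 - (x / (x + d)) ** 2) / (1 - f), 6))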
Now we have the required liquidity for a single token, x, based on the LP's swap fees, f, target slippage, S, and swap size, d. To convert this into a target TVL for a single LP, we need to incorporate two more elements: token prices and the LP's token weights.
Let $p_t$ be the price of a token, t. Let $w_{t, L}$ be the weight of a token, t, in an LP, L.
The target TVL, in USD, of an LP is the largest of the target TVLs implied by each of its tokens:
$$ TVL_L(f, S) = \max{\frac{x_t * p_t}{w_{t, L}}} \forall t \in L $$Since we are looking at swap sizes and know the relationship between $ x_t $ and $ d_t $, we can change this to:
$$ TVL_L(f, S) = \max{\frac{M(f, S) * d_t * p_t}{w_{t, L}}} \forall t \in L $$We multiply by $ p_t $ to convert the token counts to USD. Instead of comparing counts of two tokens, we can compare their USD value and take the max.
Dividing by $ w_{t, L} $ allows us to calculate the TVL of the LP that maps back to the token's required TVL. Note that for LPs like the Ion/Osmo LP, with 20:80 weights, this causes higher TVL levels than a 50:50 LP. If you need 500k USD of Ion in that LP, then the entire LP must have a TVL of 2,500k USD. In a 50:50 LP, you would only need 1,000k USD. Similarly, an LP w/ four tokens of even 25:25:25:25 weights would tend to require higher TVL levels than a 50:50 two-token LP. This is something to keep in mind when deciding to incentivize LPs w/ non-50:50 weights.
With an equation at hand, let's create a helper function calculating the swap multiplier and then set forth w/ identifying target TVL levels.
def calc_balancer_swap_multiplier(swap_fee: float, slippage: float) -> float:
"""Calculate the swap multiplier required to achieve target slippage.
Formula is specifically for a balancer pool."""
return ((math.sqrt(1 - ((1 - swap_fee) * slippage))
/ (1 - math.sqrt(1 - ((1 - swap_fee) * slippage))))
/ (1 - swap_fee))
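To see how the multiplier and the weight term interact, here is a small illustrative example (the swap value and the pool weights are made up; nothing here is queried from chain data):
# Required TVL for a hypothetical 10,000 USD swap at 1% slippage and 0.2% fees,
# in a 50:50 pool vs a 20:80 pool.
multiplier = calc_balancer_swap_multiplier(swap_fee=0.002, slippage=0.01)
swap_usd = 10_000  # d_t * p_t, a made-up swap value in USD
for weight in (0.5, 0.2):
    print(f'Token weight {weight:.0%}: target TVL ~${multiplier * swap_usd / weight:,.0f}')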
The vast majority of users trade on the Osmosis Frontend. The frontend allows users to input swaps and creates a transaction for them to sign. An example swap is shown below:
The user selects a token to trade and an exact amount of that token to trade. In this case, our user wishes to trade a million Osmos. They also select the token they wish to receive in return, Atoms in our example. The App creates a swap transaction by identifying a route, i.e. which LPs to use, and a minimum # of Atoms the user will receive in return. The exact number of Atoms our user will receive can vary depending on the ratio of Osmos/Atoms at the time of execution, but the transaction will fail if the user cannot receive the specified minimum amount.
The important thing to note here is that the App displays and accounts for 1% slippage. The swap fees in this LP are 0.2%, but the transaction is set up to cap user slippage at 1%. Technically, you can still face larger than 1% slippage, but that is noted to the user as the price impact.
Recommendation: As User Experience is key to the success of any app, Hathor Nodes recommends ensuring the vast majority of users experience no more than 1% slippage to align with the front end. For users swapping significantly large numbers of tokens, i.e. whales, a higher slippage expectation can be set.
Pros:
Cons:
To address the concerns above, let us take a look at swaps made by users.
Expanding further on large swaps, let’s query the USD value of the largest swap made in each LP in the last ninety days.
query = """
WITH last_recorded_price AS (
SELECT
CURRENCY,
SYMBOL,
MAX(recorded_hour) AS recorded_hour
FROM
osmosis.core.ez_prices
GROUP BY
CURRENCY,
SYMBOL)
SELECT
CAST(POOL_IDS[0] AS INT) AS pool_id,
ROUND(MAX(price * FROM_AMOUNT / POW(10, FROM_DECIMAL)), 2) AS max_swap_usd
FROM
osmosis.core.fact_swaps swaps
INNER JOIN
last_recorded_price
ON swaps.from_currency = last_recorded_price.currency
INNER JOIN
osmosis.core.ez_prices prices USING(symbol, recorded_hour)
WHERE
BLOCK_TIMESTAMP >= DATEADD(day, -90, CAST(GETDATE() AS DATE))
GROUP BY
POOL_IDS[0]
ORDER BY
max_swap_usd DESC,
pool_id
LIMIT
10;
"""
max_swaps = pd.DataFrame(sdk.query(query).records).set_index('pool_id')
max_swaps
| pool_id | max_swap_usd |
| --- | --- |
| 1 | 1,099,199.2 |
| 808 | 580,410.06 |
| 704 | 408,419.38 |
| 678 | 377,930.58 |
| 712 | 199,785.15 |
| 9 | 180,992.11 |
| 674 | 129,164.2 |
| 803 | 81,881.13 |
| 497 | 73,449.9 |
| 498 | 50,982.4 |
Given that most LPs have a swap fee of 0.2%, and we are aiming for a slippage of 1%, let's see what healthy TVL levels would look like if we wanted to accommodate the max swap sizes:
print(
f'The swap multiplier is approximately '
f'{round(calc_balancer_swap_multiplier(0.002, 0.01))}.')
The swap multiplier is approximately 199.
Our swap multiplier is roughly equal to 199.00. For the example set of LPs queried above, the weights can be assumed to be 0.5 in most cases. Also, our query has already selected the max value for us. Plugging in these values and simplifying a bit, we get:
$$ TVL_L(f, S) = \max{\frac{M(f, S) * d_t * p_t}{w_{t, L}}} \forall t \in L $$$$ TVL_L(0.002, 0.01) = \frac{199.0 * d_t * p_t}{0.5} = 398 * d_t * p_t $$Where $ d_t * p_t $ is the value found in max_swap_usd.
Looking at a set of LPs w/ 50:50 weights, let's compare target and current TVLs if we attempted to provide 1% slippage to all LPs.
def get_current_tvl(pool_id: int) -> float:
liquidity = requests.get(
f'https://api-osmosis.imperator.co/pools/v2/{pool_id}',
timeout=60).json()[0]['liquidity']
return round(liquidity, 2)
max_swaps['target_tvl'] = round(398 * max_swaps.max_swap_usd, 2)
max_swaps['current_tvl'] = max_swaps.index.map(
lambda pool_id: get_current_tvl(pool_id))
max_swaps
| pool_id | max_swap_usd | target_tvl | current_tvl |
| --- | --- | --- | --- |
| 1 | 1,099,199.2 | 437,481,281.6 | 67,207,421.69 |
| 808 | 580,410.06 | 231,003,203.88 | 521.68 |
| 704 | 408,419.38 | 162,550,913.24 | 11,491,213.02 |
| 678 | 377,930.58 | 150,416,370.84 | 25,345,740.67 |
| 712 | 199,785.15 | 79,514,489.7 | 12,452,091.19 |
| 9 | 180,992.11 | 72,034,859.78 | 3,363,235.22 |
| 674 | 129,164.2 | 51,407,351.6 | 4,903,556.51 |
| 803 | 81,881.13 | 32,588,689.74 | 12,337,473.69 |
| 497 | 73,449.9 | 29,233,060.2 | 3,440,666.66 |
| 498 | 50,982.4 | 20,290,995.2 | 1,596,823.4 |
Note that these target TVL levels are far above current TVL levels, in some cases by over tenfold. Realistically, we can't afford to provide 1% slippage to swaps of this size. So, a different slippage constraint is needed for whales.
To create a separate liquidity constraint for whales, we must first be able to define what a whale is in the context of a given LP/token combination. We can think of whales as outliers among the dataset of swaps in an LP. The question is, can we clearly define outliers? Let us take a look at an example of swap distributions within an LP.
This example will take the 100,000 largest non-arb swaps in LP #1, in the last 90 days. Arb swaps are excluded because they tend to be small in size and numerous, further skewing our swap distribution.
def get_pool_swaps(pool_id: int) -> pd.DataFrame:
"""Distribution of tokens swapped in a pool."""
pool_info = requests.get(
f'https://api-osmosis.imperator.co/pools/v2/{pool_id}').json()
pool_denoms = tuple([token['denom'] for token in pool_info])
sdk = ShroomDK(get_secret_value('shroomdk_api_key')['shroomdk_api_key'])
query = f"""
SELECT
DATE(BLOCK_TIMESTAMP) AS swap_date,
labels.PROJECT_NAME AS token,
FROM_AMOUNT / POW(10, FROM_DECIMAL) AS amount
FROM
osmosis.core.FACT_SWAPS swaps
INNER JOIN osmosis.core.DIM_LABELS labels ON
swaps.FROM_CURRENCY = labels.ADDRESS
WHERE
FROM_CURRENCY <> TO_CURRENCY
AND FROM_CURRENCY != 'uosmo'
AND POOL_IDS[0] = '{pool_id}'
AND FROM_CURRENCY IN {pool_denoms}
AND DATE(BLOCK_TIMESTAMP) >= DATEADD(day, -90, GETDATE());
"""
return pd.DataFrame(sdk.query(query).records)
example_swaps = get_pool_swaps(1)
example_swaps
| | swap_date | token | amount |
| --- | --- | --- | --- |
| 0 | 2023-01-21 | ATOM | 62.149999 |
| 1 | 2023-01-21 | ATOM | 8.0 |
| 2 | 2023-01-21 | ATOM | 41.579999 |
| 3 | 2023-01-21 | ATOM | 80.0 |
| 4 | 2023-01-21 | ATOM | 12.859999 |
| ... | ... | ... | ... |
| 99995 | 2022-11-28 | ATOM | 100.0 |
| 99996 | 2022-11-28 | ATOM | 6.045972 |
| 99997 | 2022-11-28 | ATOM | 0.91 |
| 99998 | 2022-11-28 | ATOM | 0.038958 |
| 99999 | 2022-11-28 | ATOM | 0.144191 |
100000 rows × 3 columns
fig = px.histogram(
example_swaps, x='amount', color='token', width=900, height=600,
title=f'Distribution of Swaps in LP #1',
template='simple_white', labels={'token': 'Token'})
fig.update_layout(font_family="Robot", font_size=16, xaxis_title='Swap Amount',
yaxis_title='Number of Swaps')
fig.update_yaxes(separatethousands=True)
fig.update_xaxes(range=[0, 1000])
fig
For context, at the time of writing, the largest ATOM swap in LP #1 was for 89,500 ATOMs, far past the view on our graph!
The shape of its distribution, the plot we just viewed, indicates that swap sizes may follow an exponential distribution. This matches our business intuition: we would expect the vast majority of swaps to be relatively small, w/ a few swaps that are far larger than the average. There are a few ways to work with exponentially distributed data. One technique is to attempt to normalize the data. There are a few ways of doing that, but for the sake of simplicity, we will take the natural logarithm of each swap. This is not a foolproof method, as some skew may remain in the distribution, but it should suffice.
example_swaps['log_swaps'] = example_swaps.amount.apply(lambda a: math.log(a))
example_swaps
| | swap_date | token | amount | log_swaps |
| --- | --- | --- | --- | --- |
| 0 | 2023-01-21 | ATOM | 62.149999 | 4.129550801866616 |
| 1 | 2023-01-21 | ATOM | 8.0 | 2.0794415416798357 |
| 2 | 2023-01-21 | ATOM | 41.579999 | 3.7276192583798426 |
| 3 | 2023-01-21 | ATOM | 80.0 | 4.382026634673881 |
| 4 | 2023-01-21 | ATOM | 12.859999 | 2.5541216410489724 |
| ... | ... | ... | ... | ... |
| 99995 | 2022-11-28 | ATOM | 100.0 | 4.605170185988092 |
| 99996 | 2022-11-28 | ATOM | 6.045972 | 1.7993922651854442 |
| 99997 | 2022-11-28 | ATOM | 0.91 | -0.09431067947124129 |
| 99998 | 2022-11-28 | ATOM | 0.038958 | -3.2452711362277324 |
| 99999 | 2022-11-28 | ATOM | 0.144191 | -1.9366164693939207 |
100000 rows × 4 columns
fig = px.histogram(example_swaps, x='log_swaps', color='token', width=900,
height=600,
title=f'Distribution of Swaps in LP #1',
template='simple_white', labels={'token': 'Token'})
fig.update_layout(font_family="Robot", font_size=16, xaxis_title='Swap Amount',
yaxis_title='Number of Swaps')
Still a notable skew, but far closer to a normal distribution than before. Now we can identify outliers based on statistical analysis of the dataset. There are two ways of doing this. The first is to calculate the upper fence of the distribution. See the box-and-whisker plot below:
fig = px.box(example_swaps, x='log_swaps', color='token', width=900,
height=600,
title=f'Distribution of Swaps in LP #1',
template='simple_white', labels={'token': 'Token'})
fig.update_layout(font_family='Roboto', font_size=16,
xaxis_title='Log Swap Amount', yaxis_title='')
The upper fence is the right whisker, the vertical blue line, in the graph. The formula for the upper fence is $ Q3 + 1.5 * (Q3 - Q1) $, where Q1 and Q3 represent the first and third quartiles respectively. In percentiles, those would be the 25th and 75th percentiles of swaps.
log_swap_stats = example_swaps.log_swaps.describe()
q1 = log_swap_stats['25%']
q3 = log_swap_stats['75%']
upper_fence = q3 + 1.5 * (q3 - q1)
print(f'The upper fence is a swap of {round(math.exp(upper_fence), 2)} Atoms')
avg_whale_swap = example_swaps.loc[
example_swaps.log_swaps >= upper_fence].amount.mean()
print(f'The average whale swap is about {round(avg_whale_swap, 2)} Atoms.')
example_swaps.loc[example_swaps.log_swaps >= upper_fence]
The upper fence is a swap of 13554.27 Atoms
The average whale swap is about 23603.22 Atoms.
| | swap_date | token | amount | log_swaps |
| --- | --- | --- | --- | --- |
| 8202 | 2023-01-06 | ATOM | 17,000.0 | 9.740968623038354 |
| 8962 | 2023-01-06 | ATOM | 17,000.0 | 9.740968623038354 |
| 9126 | 2023-01-06 | ATOM | 24,922.6255 | 10.123531324494573 |
| 32207 | 2022-10-28 | ATOM | 20,220.0 | 9.914427492574463 |
| 32448 | 2022-10-28 | ATOM | 15,400.0 | 9.642122788401721 |
| 32733 | 2022-10-28 | ATOM | 15,500.0 | 9.648595302907339 |
| 33245 | 2022-10-28 | ATOM | 16,494.0 | 9.7107519573933 |
| 35034 | 2022-10-28 | ATOM | 16,300.0 | 9.698920386794853 |
| 37656 | 2022-10-28 | ATOM | 20,926.5 | 9.948771577376272 |
| 37899 | 2022-10-28 | ATOM | 31,570.0 | 10.359962581552038 |
| 39216 | 2022-10-28 | ATOM | 34,700.0 | 10.45449496593495 |
| 40611 | 2022-10-28 | ATOM | 20,008.0 | 9.903887472557455 |
| 40712 | 2022-10-28 | ATOM | 19,586.0 | 9.88257030428074 |
| 42182 | 2022-10-28 | ATOM | 15,182.093389 | 9.627871945855384 |
| 42891 | 2022-10-28 | ATOM | 19,246.629489 | 9.865091232905108 |
| 43569 | 2022-10-28 | ATOM | 41,930.0 | 10.643756840164809 |
| 43789 | 2022-10-28 | ATOM | 15,167.0 | 9.626877294051397 |
| 44041 | 2022-10-28 | ATOM | 20,190.0 | 9.912942711306883 |
| 54510 | 2022-10-28 | ATOM | 62,000.0 | 11.03488966402723 |
| 54629 | 2022-10-28 | ATOM | 14,993.09543 | 9.615345069444967 |
| 55091 | 2022-10-28 | ATOM | 20,177.0 | 9.912298620814678 |
| 93758 | 2022-11-26 | ATOM | 40,758.0 | 10.615407418419357 |
An average whale swap of 17645.72 Atoms seems reasonable. A couple of swaps are significantly larger than this. We can't reasonably provide 1% slippage for those swaps, not in a balancer pool at least.
Another way of identifying outliers is to look at the mean and standard deviation of the dataset. Any swaps more than 3 standard deviations above the mean can be marked as an outlier.
mean_std_cutoff = log_swap_stats['mean'] + 3 * log_swap_stats['std']
print(
f'Three stds above the mean is a swap of '
f'{round(math.exp(mean_std_cutoff), 2)} Atoms')
avg_whale_swap = example_swaps.loc[
example_swaps.log_swaps >= mean_std_cutoff].amount.mean()
print(f'The average whale swap is about {round(avg_whale_swap, 2)} Atoms.')
example_swaps.loc[example_swaps.log_swaps >= mean_std_cutoff]
Three stds above the mean is a swap of 12742.32 Atoms
The average whale swap is about 22340.05 Atoms.
| | swap_date | token | amount | log_swaps |
| --- | --- | --- | --- | --- |
| 8202 | 2023-01-06 | ATOM | 17,000.0 | 9.740968623038354 |
| 8962 | 2023-01-06 | ATOM | 17,000.0 | 9.740968623038354 |
| 9126 | 2023-01-06 | ATOM | 24,922.6255 | 10.123531324494573 |
| 32207 | 2022-10-28 | ATOM | 20,220.0 | 9.914427492574463 |
| 32448 | 2022-10-28 | ATOM | 15,400.0 | 9.642122788401721 |
| 32733 | 2022-10-28 | ATOM | 15,500.0 | 9.648595302907339 |
| 33245 | 2022-10-28 | ATOM | 16,494.0 | 9.7107519573933 |
| 35034 | 2022-10-28 | ATOM | 16,300.0 | 9.698920386794853 |
| 35698 | 2022-10-28 | ATOM | 12,950.8 | 9.468912841281398 |
| 37656 | 2022-10-28 | ATOM | 20,926.5 | 9.948771577376272 |
| 37899 | 2022-10-28 | ATOM | 31,570.0 | 10.359962581552038 |
| 39216 | 2022-10-28 | ATOM | 34,700.0 | 10.45449496593495 |
| 40611 | 2022-10-28 | ATOM | 20,008.0 | 9.903887472557455 |
| 40712 | 2022-10-28 | ATOM | 19,586.0 | 9.88257030428074 |
| 42182 | 2022-10-28 | ATOM | 15,182.093389 | 9.627871945855384 |
| 42891 | 2022-10-28 | ATOM | 19,246.629489 | 9.865091232905108 |
| 43559 | 2022-10-28 | ATOM | 12,841.40653 | 9.460430114079465 |
| 43569 | 2022-10-28 | ATOM | 41,930.0 | 10.643756840164809 |
| 43789 | 2022-10-28 | ATOM | 15,167.0 | 9.626877294051397 |
| 44041 | 2022-10-28 | ATOM | 20,190.0 | 9.912942711306883 |
| 54510 | 2022-10-28 | ATOM | 62,000.0 | 11.03488966402723 |
| 54629 | 2022-10-28 | ATOM | 14,993.09543 | 9.615345069444967 |
| 54967 | 2022-10-28 | ATOM | 13,438.0 | 9.505841793480096 |
| 55091 | 2022-10-28 | ATOM | 20,177.0 | 9.912298620814678 |
| 93758 | 2022-11-26 | ATOM | 40,758.0 | 10.615407418419357 |
The cutoffs are different, but the second method didn't filter out any of the swaps flagged by the first. As our data isn't necessarily normal and can still contain a slight skew, we can simply take the min of the two cutoffs.
In some LPs, there may not be any outliers. In that case, we'll use the max.
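Expressed as a tiny helper (a sketch of the cutoff selection that the query below mirrors; the argument names are ours):
def select_whale_cutoff(iqr_cutoff: float, std_cutoff: float,
                        max_swap: float) -> float:
    """Whale cutoff: the smaller of the two outlier cutoffs, capped at the
    largest observed swap when no swap exceeds either cutoff."""
    return min(iqr_cutoff, std_cutoff, max_swap)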
Let's look at the target TVL levels using these metrics for 1% and 10% max slippage:
query = f"""
WITH last_recorded_price AS (
SELECT
CURRENCY,
SYMBOL,
MAX(recorded_hour) AS recorded_hour
FROM
osmosis.core.ez_prices
GROUP BY
CURRENCY,
SYMBOL),
percentile_swaps AS (
SELECT
POOL_IDS[0] AS pool_id,
FROM_CURRENCY AS denom,
LN(FROM_AMOUNT / POW(10, FROM_DECIMAL)) AS swap_amount,
NTILE(100) OVER (PARTITION BY POOL_IDS[0], FROM_CURRENCY
ORDER BY
FROM_AMOUNT ASC) AS percentile
FROM
osmosis.core.FACT_SWAPS swaps
WHERE
TX_SUCCEEDED
AND CAST(POOL_IDS[0] AS INT) IN {tuple(incentivized_pools)}
AND FROM_CURRENCY <> TO_CURRENCY
AND BLOCK_TIMESTAMP >= DATEADD(day, -90, GETDATE())),
pool_stats AS (
SELECT
pool_id,
denom,
MAX(IFF(percentile = 25, swap_amount, NULL)) AS swap_amount_p25,
MAX(IFF(percentile = 75, swap_amount, NULL)) AS swap_amount_p75,
AVG(swap_amount) AS mean_ln_amount,
STDDEV(swap_amount) AS std_ln_amount,
MAX(swap_amount) AS max_ln_amount
FROM
percentile_swaps
GROUP BY
pool_id,
denom),
targets AS (
SELECT
pool_id,
denom,
EXP(swap_amount_p75 + 1.5 * (swap_amount_p75 - swap_amount_p25))
AS iqr_target,
EXP(mean_ln_amount + (3 * std_ln_amount)) AS std_target,
EXP(max_ln_amount) AS max_target
FROM
pool_stats),
cutoffs AS (
SELECT
pool_id,
denom,
CASE
WHEN iqr_target <= std_target THEN IFF(max_target < iqr_target,
max_target,
iqr_target)
ELSE IFF(max_target < std_target,
max_target,
std_target)
END AS cutoff
FROM
targets),
outlier_swaps AS (
SELECT
pool_id,
denom,
price * FROM_AMOUNT / POW(10, FROM_DECIMAL) AS swap_amount_usd
FROM
osmosis.core.FACT_SWAPS swaps
INNER JOIN
cutoffs ON
swaps.FROM_CURRENCY = cutoffs.denom
INNER JOIN last_recorded_price ON cutoffs.denom = last_recorded_price.CURRENCY
INNER JOIN osmosis.core.ez_prices prices USING (symbol, recorded_hour)
WHERE
TX_SUCCEEDED
AND FROM_CURRENCY <> TO_CURRENCY
AND CAST(POOL_IDS[0] AS INT) IN {tuple(incentivized_pools)}
AND FROM_AMOUNT / POW(10, FROM_DECIMAL) >= cutoff - 1
AND BLOCK_TIMESTAMP >= DATEADD(day, -90, GETDATE())),
avg_whale_swap AS (
SELECT
pool_id,
denom,
AVG(swap_amount_usd) AS swap_amount_usd
FROM
outlier_swaps
GROUP BY
pool_id,
denom)
SELECT
pool_id,
ROUND(398 * MAX(swap_amount_usd), 2) AS target_tvl_one_percent_slippage,
ROUND(39.8 * MAX(swap_amount_usd), 2) AS target_tvl_ten_percent_slippage
FROM
avg_whale_swap
GROUP BY
pool_id
ORDER BY
target_tvl_ten_percent_slippage DESC;
"""
whale_swaps = pd.DataFrame(sdk.query(query).records).set_index('pool_id')
whale_swaps.head(10)
| pool_id | target_tvl_one_percent_slippage | target_tvl_ten_percent_slippage |
| --- | --- | --- |
| 678 | 161,556,735.76 | 16,155,673.58 |
| 1 | 112,995,081.69 | 11,299,508.17 |
| 9 | 72,034,858.39 | 7,203,485.84 |
| 704 | 53,433,574.31 | 5,343,357.43 |
| 712 | 53,433,574.31 | 5,343,357.43 |
| 674 | 51,407,351.32 | 5,140,735.13 |
| 584 | 24,411,191.08 | 2,441,119.11 |
| 833 | 19,366,315.38 | 1,936,631.54 |
| 3 | 19,274,637.49 | 1,927,463.75 |
| 812 | 13,147,414.74 | 1,314,741.47 |
The 1% slippage numbers aren't easily achievable at the moment, but the numbers at 10% slippage are. While 10% slippage seems high, whale swaps often cause spikes in prices, i.e. they already face large slippage. Also, this slippage level would only apply to a half dozen trades per LP. In the future, scheduled trading and concentrated liquidity will allow us to provide lower slippage levels to whales.
Recommendation: We suggest defining whale swaps using the above algorithm and setting the maximum slippage to 10% for the 'average' whale swap.
Pros:
Cons:
With outliers addressed, we can go back to setting a 1% slippage target for the vast majority of users. We could use the cutoffs identified earlier, but those rely on a log-normal distribution of swaps. Instead, a simpler option exists: we can identify the 95th percentile of swaps and set the target slippage for those trades to 1%.
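For intuition, here is the same idea applied to the LP #1 sample from earlier (illustrative only; the ATOM price below is a placeholder, whereas the query that follows uses ez_prices):
# 95th percentile swap in the LP #1 sample and the corresponding 1%-slippage
# TVL target using the ~398x factor derived above.
p95_swap = example_swaps.amount.quantile(0.95)
atom_price_usd = 10.0  # hypothetical price for illustration
print(f'95th percentile swap: {p95_swap:,.2f} ATOM')
print(f'Target TVL at 1% slippage: ${398 * p95_swap * atom_price_usd:,.2f}')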
query = f"""
WITH
last_recorded_price AS (
SELECT
CURRENCY,
SYMBOL,
MAX(recorded_hour) AS recorded_hour
FROM
osmosis.core.ez_prices
GROUP BY
CURRENCY,
SYMBOL
),
percentile_swaps AS (
SELECT
POOL_IDS[0] AS pool_id,
FROM_CURRENCY AS denom,
FROM_AMOUNT / POW(10, FROM_DECIMAL) AS swap_amount,
NTILE(100) OVER (
PARTITION BY
POOL_IDS[0],
FROM_CURRENCY
ORDER BY
FROM_AMOUNT ASC
) AS percentile
FROM
osmosis.core.FACT_SWAPS swaps
WHERE
TX_SUCCEEDED
AND FROM_CURRENCY <> TO_CURRENCY
AND BLOCK_TIMESTAMP >= DATEADD(day, -90, GETDATE ())
),
pool_stats AS (
SELECT
pool_id,
denom,
MAX(swap_amount) AS swap_amount_p95
FROM
percentile_swaps
WHERE
percentile = 95
GROUP BY
pool_id,
denom
),
targets AS (
SELECT
pool_id,
denom,
price * swap_amount_p95 AS swap_usd
FROM
pool_stats
INNER JOIN last_recorded_price
ON pool_stats.denom = last_recorded_price.CURRENCY
INNER JOIN osmosis.core.ez_prices prices USING (symbol, recorded_hour)
)
SELECT
pool_id,
ROUND(398 * MAX(swap_usd), 2) AS target_tvl_one_percent_slippage
FROM
targets
WHERE
CAST(pool_id AS INT) IN {tuple(incentivized_pools)}
GROUP BY
pool_id
ORDER BY
target_tvl_one_percent_slippage DESC;
"""
retail_swaps = pd.DataFrame(sdk.query(query).records).set_index('pool_id')
retail_swaps.head(10)
| pool_id | target_tvl_one_percent_slippage |
| --- | --- |
| 704 | 1,700,099.18 |
| 712 | 1,570,737.56 |
| 678 | 927,710.07 |
| 648 | 720,173.98 |
| 2 | 708,599.6 |
| 9 | 678,899.88 |
| 586 | 661,546.22 |
| 1 | 620,774.75 |
| 3 | 513,075.51 |
| 812 | 486,041.74 |
These numbers are typically smaller than our whale constraint, but not in all cases! If an LP's 95th percentile swap is more than one-tenth of its average whale swap, this retail constraint produces the larger TVL requirement. Let's join our two data sets and see what our target TVL levels look like.
target_tvls = retail_swaps.merge(
whale_swaps['target_tvl_ten_percent_slippage'], on='pool_id')
target_tvls['target_tvl'] = target_tvls.max(axis=1)
target_tvls['current_tvl'] = target_tvls.index.map(
lambda pool_id: get_current_tvl(pool_id))
target_tvls = target_tvls[
['current_tvl', 'target_tvl']].sort_values(by='target_tvl', ascending=False)
target_tvls.head(10)
| pool_id | current_tvl | target_tvl |
| --- | --- | --- |
| 678 | 25,345,740.67 | 16,155,673.58 |
| 1 | 67,207,421.69 | 11,299,508.17 |
| 9 | 3,363,235.22 | 7,203,485.84 |
| 704 | 11,491,213.02 | 5,343,357.43 |
| 712 | 12,452,091.19 | 5,343,357.43 |
| 674 | 4,903,556.51 | 5,140,735.13 |
| 584 | 1,768,597.03 | 2,441,119.11 |
| 833 | 3,083,682.78 | 1,936,631.54 |
| 3 | 1,494,996.95 | 1,927,463.75 |
| 812 | 2,232,775.19 | 1,314,741.47 |
target_tvls.describe()
| | current_tvl | target_tvl |
| --- | --- | --- |
| count | 45.0 | 45.0 |
| mean | 3,455,155.708666667 | 1,646,595.8642222222 |
| std | 10,669,130.165339844 | 3,120,264.3935958054 |
| min | 27,329.51 | 32,145.87 |
| 25% | 213,114.45 | 273,498.74 |
| 50% | 459,021.03 | 503,199.39 |
| 75% | 1,768,597.03 | 1,289,883.95 |
| max | 67,207,421.69 | 16,155,673.58 |
target_tvls.sum()
current_tvl    155,482,006.89000002
target_tvl     74,096,813.89
dtype: float64
target_greater_than_current = target_tvls.loc[
target_tvls.current_tvl < target_tvls.target_tvl].shape[0]
target_less_than_current = target_tvls.loc[
target_tvls.current_tvl > target_tvls.target_tvl].shape[0]
print(
f'Target TVL greater than Current TVL in {target_greater_than_current} LPs.'
)
print(f'Target TVL less than Current TVL in {target_less_than_current} LPs.')
Target TVL greater than Current TVL in 26 LPs.
Target TVL less than Current TVL in 19 LPs.
Recommendation: We suggest adopting the constraint of a 1% target slippage for 95% of swaps.
Pros:
Cons:
With the two constraints we've discussed, we have a generalized idea of how to identify liquidity needs for LPs. In the next few sections, we will discuss any nuances that should be taken into account.
Not all liquidity in an LP requires incentives. Only liquidity that is bonded for 1, 7, or 14 days is eligible to receive incentives. Liquidity in an LP can be broken into three categories: bonded, unbonding, and unbonded.
Unbonding liquidity belongs to users who are in the process of unbonding. Once their unbonding is complete, they move to the unbonded liquidity until they exit the LP or bond again. As incentive APR is typically far greater than swap fee APR, it is reasonable to assume users who are unbonding will exit the LP once the unbonding is complete.
As swap fee APR approaches or exceeds incentive APR, some users may unbond to avoid the time risk of being bonded while still accruing swap fees as unbonded liquidity.
For an LP, L, we can calculate the current incentivized liquidity as so:
$$ PercIncentivized_L =(1 - PercUnbonding_L - PercUnbonded_L) $$$$ IncentivizedLiquidity_L = CurrentLiquidity_L * PercIncentivized_L $$For target liquidity numbers, we can disregard unbonding liquidity. We simply have to discount the unbonded liquidity. If unbonded liquidity exceeds the current target tvl, then the LP no longer needs incentives.
$$ TargetIncentivizedLiquidity_L = \max(0.0, TargetLiquidity_L - (PercUnBonded_L * CurrentLiquidity_L)) $$The kind folks at Yieldmos have allowed us to use their unbonding data, which can be viewed here.
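For clarity, the two formulas above written as plain Python helpers (a sketch; the pandas version applied to real data follows below):
def calc_incentivized_liquidity(current_liquidity: float, perc_unbonding: float,
                                perc_unbonded: float) -> float:
    """Liquidity currently eligible for incentives (bonded for 1, 7, or 14 days)."""
    return current_liquidity * (1 - perc_unbonding - perc_unbonded)

def calc_target_incentivized_liquidity(target_liquidity: float,
                                       current_liquidity: float,
                                       perc_unbonded: float) -> float:
    """Target liquidity net of unbonded liquidity, floored at zero."""
    return max(0.0, target_liquidity - perc_unbonded * current_liquidity)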
unbonding_rates = requests.get(
'https://9o9fodjbvd.execute-api.us-east-2.amazonaws.com/bonding?rel',
timeout=60).json()
pool_unbonding_rates = [None] * len(unbonding_rates)
for idx, data in enumerate(unbonding_rates):
pool_unbonding_rates[idx] = {
'pool_id': int(data.get('pool_id')),
'unbonding': round(sum([data.get('1_day_unbonding', [0])[0],
data.get('7_day_unbonding', [0])[0],
data.get('14_day_unbonding', [0])[0]]), 3),
'unbonded': data.get('0_unbonded', [0])[0]}
unbonding_df = pd.DataFrame(pool_unbonding_rates).set_index('pool_id')
unbonding_df
| pool_id | unbonding | unbonded |
| --- | --- | --- |
| 1 | 0.038 | 0.0454 |
| 3 | 0.011 | 0.0807 |
| 5 | 0.004 | 0.0981 |
| 2 | 0.036 | 0.0561 |
| 13 | 0.006 | 0.1369 |
| ... | ... | ... |
| 882 | 0.0 | 0.9943 |
| 886 | 0.138 | 0.0011 |
| 877 | 0.0 | 0.9988 |
| 899 | 0.0 | 0.9887 |
| 695 | 0.001 | 0.9993 |
187 rows × 2 columns
target_tvls.index = target_tvls.index.astype(int)
adjusted_tvls = target_tvls.merge(unbonding_df, on='pool_id')
adjusted_tvls['current_incentivized_liquidity'] = adjusted_tvls.current_tvl * (1 - adjusted_tvls.unbonding - adjusted_tvls.unbonded)
adjusted_tvls['target_incentivized_liquidity'] = np.where(
adjusted_tvls.target_tvl > (adjusted_tvls.current_tvl * adjusted_tvls.unbonded),
adjusted_tvls.target_tvl - (adjusted_tvls.current_tvl * adjusted_tvls.unbonded),
0.0)
adjusted_tvls = adjusted_tvls.round(2)
adjusted_tvls.head(10)
| pool_id | current_tvl | target_tvl | unbonding | unbonded | current_incentivized_liquidity | target_incentivized_liquidity |
| --- | --- | --- | --- | --- | --- | --- |
| 678 | 25,345,740.67 | 16,155,673.58 | 0.1 | 0.02 | 22,309,320.94 | 15,704,519.4 |
| 1 | 67,207,421.69 | 11,299,508.17 | 0.04 | 0.05 | 61,602,322.72 | 8,248,291.23 |
| 9 | 3,363,235.22 | 7,203,485.84 | 0.03 | 0.02 | 3,177,584.64 | 7,125,458.78 |
| 704 | 11,491,213.02 | 5,343,357.43 | 0.03 | 0.0 | 11,060,292.53 | 5,291,646.97 |
| 712 | 12,452,091.19 | 5,343,357.43 | 0.14 | 0.02 | 10,478,434.74 | 5,150,350.02 |
| 674 | 4,903,556.51 | 5,140,735.13 | 0.01 | 0.01 | 4,786,851.87 | 5,092,680.28 |
| 584 | 1,768,597.03 | 2,441,119.11 | 0.02 | 0.05 | 1,653,461.36 | 2,359,586.79 |
| 833 | 3,083,682.78 | 1,936,631.54 | 0.16 | 0.03 | 2,508,884.31 | 1,842,887.58 |
| 3 | 1,494,996.95 | 1,927,463.75 | 0.01 | 0.08 | 1,357,905.73 | 1,806,817.5 |
| 812 | 2,232,775.19 | 1,314,741.47 | 0.0 | 0.73 | 605,751.91 | 0.0 |
LPs w/ high unbonding and unbonded numbers have significant differences in their incentivized liquidity numbers. It's advisable to focus on incentivized liquidity numbers when calculating incentives.
Recommendation: Use incentivized liquidity numbers as the determining factor for incentive adjustments.
Pros:
Cons:
While these methods work for already-existing LPs, there needs to be a default setting for new LPs until enough volume occurs. Since we are targeting the 95th percentile of swaps, we can require a minimum of 20 swaps per day before the target TVL is calculated. This forces any wash traders to perform at least one large swap a day on average to set and move target TVL levels.
The default liquidity level can be set to roughly the first quartile of target TVLs, about $200k at the time of writing. Newer LPs can be assumed to be less popular until traders demonstrate otherwise. LPs that are incentivized but haven't received incentives yet can start off at 0.5% of the LP incentives. At the time of writing, that's about 385 Osmos/day, which would give the LP a 65% APR at default liquidity levels. That's higher than most LPs ATM, but bootstrapping typically requires above-average APRs. It's easier to overshoot optimal TVL levels and then draw down, as you avoid the risk of low TVL deflating volume numbers.
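As a sketch of how these bootstrap parameters could be wired together (the names and the fallback logic below are illustrative, not the final implementation):
# Hypothetical bootstrap parameters for new LPs.
BOOTSTRAP_PARAMS = {
    'default_liquidity_usd': 200_000,   # roughly Q1 of current target TVLs
    'min_daily_swaps': 20,              # volume required before computing a target
    'initial_incentives_share': 0.005,  # 0.5% of daily LP incentives
}

def bootstrap_target_tvl(daily_swaps: float, computed_target_tvl: float) -> float:
    """Fall back to the default liquidity level until the LP has enough volume."""
    if daily_swaps < BOOTSTRAP_PARAMS['min_daily_swaps']:
        return BOOTSTRAP_PARAMS['default_liquidity_usd']
    return computed_target_tvl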
Recommendation: Set three parameters for bootstrapping liquidity, default_liquidity, min_daily_swaps, and initial_incentives. These parameters can be set to $200k, 20 swaps/day, and 0.5% of LP incentives.
Pros:
Cons:
Historically, Osmosis has matched external incentives by following a predefined set of rules. Those rules can be seen here:
https://www.mintscan.io/osmosis/proposals/47
https://www.mintscan.io/osmosis/proposals/128
https://www.mintscan.io/osmosis/proposals/133
https://www.mintscan.io/osmosis/proposals/264
External Incentives increase LP APRs without increasing the number of circulating Osmos. The complication here is that additional incentives can cause LPs to have more TVL than necessary. Also, we can't control the continuation of external incentives. If an LP loses its external incentives, it could see a sudden reduction in its TVL, causing the need for a sharp increase in internal incentives.
Despite that, they're generally a positive for Osmosis and help foster interchain relations.
At the moment, our external incentives matching program is nuanced, w/ bias factors and swap fee caps. We propose returning to a simple match, but instead of matching 1:1 or 0.5:1, we can match 0.1:1. This can be a parameter that the community discusses. 0.1:1.0 matching doesn't significantly change current external incentive matching numbers; it's a rough estimate of the external incentives match after the caps and bias factors.
To prevent an excess of Osmo emissions, we can keep the cap on total external incentive matches at 20% of LP incentives. This can also be a tunable parameter.
$$ AdjustedIncentives_L = \max(MatchFactor * ExternalIncentivesUSD_L, InternalIncentivesUSD_L) $$Such that:
$$ \sum_{L}{MatchFactor * ExternalIncentivesUSD_L} \le ExternalIncentivesCap $$In cases where external incentive matching exceeds the cap, all external incentive matches will be scaled down proportionally.
$$ ScaledExternalIncentivesUSD_L = \frac{ExternalIncentivesCap}{MatchFactor\sum_{L}{ExternalIncentivesUSD_L}} * ExternalIncentivesUSD_L $$Recommendation: Match External Incentives 0.1:1.0 by ensuring an LP w/ external incentives never has less than the 0.1 Osmo match.
Pros:
Cons:
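To make the matching and cap logic concrete, here is a minimal sketch (the pool IDs and dollar amounts in the usage example are hypothetical):
def match_external_incentives(external_usd: dict, match_factor: float = 0.1,
                              cap_usd: float = float('inf')) -> dict:
    """Compute Osmo matches (in USD) for external incentives per pool, scaling
    all matches down proportionally if the total would exceed the cap."""
    matches = {lp: match_factor * usd for lp, usd in external_usd.items()}
    total_match = sum(matches.values())
    if total_match > cap_usd:
        scale = cap_usd / total_match
        matches = {lp: match * scale for lp, match in matches.items()}
    return matches
# Hypothetical usage: two externally incentivized LPs and a 5,000 USD/day cap.
print(match_external_incentives({680: 50_000, 722: 10_000}, cap_usd=5_000))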
With our current and target TVL numbers ready, we need to shift our incentives. The simplest way to do this is to check whether our current TVL is near our target TVL, and if not, shift incentives accordingly. We can use the percent difference (PD) formula.
$$ PD_L = \frac{IncentivizedLiquidity_L - TargetIncentivizedLiquidity_L}{TargetIncentivizedLiquidity_L} $$First our edge case: When $ TargetIncentivizedLiquidity_L = 0.00 $, then $ adjustment = DECREASE $.
When $ -0.05 \le PD_L \le 0.05 $, then $ adjustment = NONE $.
When $ PD_L \ge 0.05 $, then $ adjustment = DECREASE $.
When $ PD_L \le -0.05 $, then $ adjustment = INCREASE $.
We have the direction of our incentive adjustments; next, the magnitude. There are a few caveats we must explore before setting a magnitude.
The Osmosis DEX is the primary utility of Osmosis at the time of writing. In some previous work, we discussed how the value of an Osmo can be connected to the assets paired against it in LPs. This work isn't as accurate now that centralized exchanges list Osmos, but the general thesis still holds.
Thus, when optimizing incentives, we must be mindful of the impact this will have on the Osmo token itself. Our target TVL for the DEX as a whole is lower than its current TVL. Incentive optimizations would mostly be reductions in incentives, which would lead to unbonding of GAMMs. This will change the composition of the Osmo's backing and potentially its value if care is not taken.
I think it's best for the community to be informed of potential risks up front, and then to hear their thoughts/concerns on the topic and act accordingly.
Let's look at the current and target composition of the incentivized LPs:
def get_osmo_lp_token(pool_id: int) -> Text:
"""Returns the non-Osmo token in an Osmo/X LP."""
pool_info = requests.get(
f'https://api-osmosis.imperator.co/pools/v2/{pool_id}',
timeout=60).json()
if len(pool_info) != 2:
        raise ValueError('LP does not have exactly two tokens in it.')
for token in pool_info:
if token['symbol'] == 'OSMO':
continue
return token['symbol']
osmo_backing = target_tvls.copy()
osmo_backing['token'] = osmo_backing.index.map(lambda lp: get_osmo_lp_token(lp))
# For readability, group tokens providing less than 1% of total TVL.
other_cutoff = osmo_backing.current_tvl.sum() / 100
osmo_backing['token'] = np.where(
osmo_backing['current_tvl'] <= other_cutoff, 'OTHER', osmo_backing['token'])
osmo_backing.head(10)
| pool_id | current_tvl | target_tvl | token |
| --- | --- | --- | --- |
| 678 | 25,345,740.67 | 16,155,673.58 | USDC |
| 1 | 67,207,421.69 | 11,299,508.17 | ATOM |
| 9 | 3,363,235.22 | 7,203,485.84 | CRO |
| 704 | 11,491,213.02 | 5,343,357.43 | WETH |
| 712 | 12,452,091.19 | 5,343,357.43 | WBTC |
| 674 | 4,903,556.51 | 5,140,735.13 | DAI |
| 584 | 1,768,597.03 | 2,441,119.11 | SCRT |
| 833 | 3,083,682.78 | 1,936,631.54 | stOSMO |
| 3 | 1,494,996.95 | 1,927,463.75 | OTHER |
| 812 | 2,232,775.19 | 1,314,741.47 | AXL |
target_osmo_backing = target_tvls.copy()
target_osmo_backing['token'] = osmo_backing.index.map(lambda lp: get_osmo_lp_token(lp))
# For readability, group tokens providing less than 1% of total TVL.
target_other_cutoff = target_osmo_backing.target_tvl.sum() / 100
target_osmo_backing['token'] = np.where(
    target_osmo_backing['target_tvl'] <= target_other_cutoff, 'OTHER',
target_osmo_backing['token'])
target_osmo_backing.head(10)
| pool_id | current_tvl | target_tvl | token |
| --- | --- | --- | --- |
| 678 | 25,345,740.67 | 16,155,673.58 | USDC |
| 1 | 67,207,421.69 | 11,299,508.17 | ATOM |
| 9 | 3,363,235.22 | 7,203,485.84 | CRO |
| 704 | 11,491,213.02 | 5,343,357.43 | WETH |
| 712 | 12,452,091.19 | 5,343,357.43 | WBTC |
| 674 | 4,903,556.51 | 5,140,735.13 | DAI |
| 584 | 1,768,597.03 | 2,441,119.11 | SCRT |
| 833 | 3,083,682.78 | 1,936,631.54 | stOSMO |
| 3 | 1,494,996.95 | 1,927,463.75 | AKT |
| 812 | 2,232,775.19 | 1,314,741.47 | OTHER |
fig = make_subplots(rows=1, cols=2,
specs=[[{'type':'domain'}, {'type':'domain'}]])
fig.add_trace(go.Pie(values=osmo_backing.current_tvl,
labels=osmo_backing.token),
1, 1)
fig.add_trace(go.Pie(values=target_osmo_backing.target_tvl,
labels=target_osmo_backing.token),
1, 2)
fig.update_layout(title='Current vs Target Osmo Backing')
fig.show()
That's a significant shift in backing. Atom would lose roughly a third of its share, whereas USDC, minor tokens (OTHER), and CRO would see significant jumps forward. It's not my place to call this 'good' or 'bad', but I believe it's important to note this to the community. It's possible the Osmosis community may wish to prioritize this side effect over incentive optimization, i.e. providing excessive incentives to LPs whose tokens they approve of.
Providing incentives to an Osmo/X LP increases the correlation between that token, X, and the Osmo's price performance. This optimization process is price blind. The existing incentives process has a category system, which allots more Osmos to LPs considered to be 'Major'. In the future, if available LP incentives are not sufficient to incentivize all LPs, a prioritization system can be implemented here as well if that is preferred.
The current composition and shifts in composition of the Osmo can be viewed in this dashboard.
Recommendation: Don't implement a token priority system unless absolutely necessary. As market conditions improve, volume should increase, and 'high-value' tokens will have higher volumes, which leads to higher swap fees and target TVLs, so the composition will ultimately correct itself.
Pros:
Cons:
Aside from the shift in tokenomics, there is another concern w/ reducing the liquidity of an LP. When users exit the LP, they will realistically move their tokens elsewhere. If users opt to sell the Osmo half of their GAMMs and keep the token X, this will negatively impact the price of the Osmo. A falling Osmo price can increase Impermanent Loss for LPers and increase the perceived risk of LPing, which in the long term may cause a net increase in incentives provided.
There are two ways to counteract such concerns. One is to provide an alternative utility to LPing. Staking, and soon lending, are examples. Increasing staking APR to draw LPers to staking could even have a positive price impact (i.e. an LPer unbonds from the Osmo/X LP, sells their X for Osmos, and then stakes all their Osmos). The con to increasing staking APR is that the net savings from LP incentive reductions would be negated as well. This is a short-term solution. Ideally, additional features and value accrual mechanisms are added to the protocol over time, reducing the reliance on LPs for providing value to the Osmo and reducing price risk.
The second method is to increase the non-Osmo liquidity in LPs. This counteracts the net TVL reduction the optimizations program currently calls for. It would only be a realistic solution if volume significantly increased. In the long term, a combination of new features + increased volume is ideal.
Ultimately, it's up to the community to decide what price risks they are willing to take. There's no surefire way of determining if unbonding LPers will sell their Osmos or not.
Recommendation: Implement a tapering factor parameter, set to 10% initially, that limits the monthly change in incentives an LP can have.
Pros:
Cons:
Recommendation: We've also implemented a maintain optimization mode. This mode sets target TVL equal to current TVL and adjusts incentives as needed to keep GAMM numbers steady. I don't suggest using this off the bat, but we can switch to it if/when the community wishes to.
Pros:
Cons:
Incentives can be an inefficient way of attracting liquidity to LPs. This is because a 10% increase in incentives does not necessarily correlate to a 10% increase in GAMMs/TVL. If an LP's ROI is significantly lower than the rates potential LPers expect, small shifts in incentives will not draw new liquidity. The inverse is also true. If an LP's ROI is significantly higher than the rates current LPers desire, small shifts in incentives will not cause significant unbonding.
The latter can be seen by looking at historical GAMM and incentive values:
counts = pd.DataFrame(requests.get(
'https://api.hathornodes.com/resources/osmosis/gamm_incentive_counts/1/',
timeout=60).json())
fig = make_subplots(specs=[[{"secondary_y": True}]])
fig.add_trace(
go.Scatter(x=counts.start_time, y=counts.n_gamms, name="GAMMS"),
secondary_y=False)
fig.add_trace(
go.Scatter(x=counts.start_time, y=counts.n_incentives, name="Incentives"),
secondary_y=True)
fig.update_layout(title_text="Historical GAMMs and Incentives in LP #1")
fig.update_xaxes(title_text="Epoch Date")
fig.update_yaxes(title_text="<b>GAMMs</b>", secondary_y=False)
fig.update_yaxes(title_text="<b>Daily Osmo Incentives</b>", secondary_y=True)
fig.show()
Incentive numbers in LP #1 have decreased by roughly 83% since May 20th (156k -> 27k Osmo/Day). Yet the GAMM count, the number of LP shares currently held, hasn't even dropped 50% (460 Million -> 260 Million). This inefficiency is similar to inelasticity in the price of goods. Simply put, the number of GAMMs is not highly responsive to the number of incentives. We can even see an example where a decrease in incentives is met w/ an increase in GAMMs:
counts = pd.DataFrame(requests.get(
'https://api.hathornodes.com/resources/osmosis/gamm_incentive_counts/463/',
timeout=60).json())
fig = make_subplots(specs=[[{"secondary_y": True}]])
# Add traces
fig.add_trace(
go.Scatter(x=counts.start_time, y=counts.n_gamms, name="GAMMS"),
secondary_y=False)
fig.add_trace(
go.Scatter(x=counts.start_time, y=counts.n_incentives, name="Incentives"),
secondary_y=True)
fig.update_layout(title_text="Historical GAMMs and Incentives in LP #463")
fig.update_xaxes(title_text="Epoch Date")
fig.update_yaxes(title_text="<b>GAMMs</b>", secondary_y=False)
fig.update_yaxes(title_text="<b>Daily Osmo Incentives</b>", secondary_y=True)
fig.show()
You would think that decreasing incentives would always cause unbonding, but remember that the APRs required to retain LPers depend on their risk perception of the underlying tokens themselves. It's possible that market perception of the underlying assets becomes increasingly bullish over time, such that decreasing incentives doesn't deter additional liquidity from joining the LP.
Also, the addition of external incentives or an increase in swap fees could make an LP more attractive to LPers even if internal incentives are reduced.
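For reference, one rough way such an elasticity figure could be computed from the GAMM/incentive history above (an illustrative sketch; not necessarily how the api.hathornodes.com endpoint below calculates it):
def estimate_elasticity(counts: pd.DataFrame) -> float:
    """Rough elasticity estimate over the window: percent change in GAMMs
    divided by percent change in daily incentives."""
    delta_gamms = counts.n_gamms.iloc[-1] / counts.n_gamms.iloc[0] - 1
    delta_incentives = counts.n_incentives.iloc[-1] / counts.n_incentives.iloc[0] - 1
    if delta_incentives == 0:
        return float('nan')
    return abs(delta_gamms / delta_incentives)
# Example: the LP #463 counts fetched above.
print(round(estimate_elasticity(counts), 2))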
elasticity = pd.DataFrame(requests.get(
'https://api.hathornodes.com/resources/osmosis/lp_elasticity', timeout=60
).json())
elasticity.set_index('pool_id').round(2).head(10)
| pool_id | delta_gamms | delta_incentives | elasticity |
| --- | --- | --- | --- |
| 42 | 0.78 | 0.07 | 11.48 |
| 641 | -0.29 | -0.03 | 8.84 |
| 481 | -0.11 | -0.02 | 5.96 |
| 463 | 1.78 | -0.34 | 5.16 |
| 730 | 2.42 | 1.67 | 1.45 |
| 627 | 0.05 | 0.03 | 1.39 |
| 577 | -0.16 | -0.12 | 1.33 |
| 648 | -0.0 | 0.0 | 1.2 |
| 497 | 0.13 | -0.16 | 0.79 |
| 7 | -0.11 | -0.15 | 0.75 |
As the table shows, very few LPs have an elasticity above 1.0. Of those who do, only LPs 577 and 42 have delta_gamms and delta_incentives in the same direction (i.e. both positive or both negative).
Recommendation: We would table the use of elasticity for now. While there are many indicators that inelasticity exists in our incentives, that only allows for steeper cuts/additions of incentives, which goes against the tapering factor suggested before for tokenomics reasons.
In the future, we can use elasticity to assist w/ bootstrapping LPs and searching for a healthy starting liquidity/incentives.
Pros:
Cons:
While elasticity isn't of immediate use to us, the GAMM velocity data is. When adjusting incentives, noting the recent change in GAMMs can help prevent over-adjusting. If we know an LP has been quickly losing/gaining GAMMs in the last 90 days, we can modify our adjustments accordingly.
A simple set of business logic that looks at our liquidity PD and GAMM velocity, v, can be utilized to determine the adjustment type. We can simply use the PD, capped by the tapering factor, to determine the magnitude of the adjustment as well.
Let's look at our algorithm from before:
First our edge case: When $ TargetIncentivizedLiquidity_L = 0.00 $, then $ adjustment = DECREASE $.
When $ -0.05 \le PD_L \le 0.05 $, then $ adjustment = NONE $.
When $ PD_L \ge 0.05 $, then $ adjustment = DECREASE $.
When $ PD_L \le -0.05 $, then $ adjustment = INCREASE $.
Now if our GAMM velocity is in the same direction as our adjustment, we can scale down our adjustment:
Let $ \delta_g$ be the GAMM Velocity.
When $ \delta_g \le -0.05 $ & $ PD_L \ge 0.05 $, then adjust incentives slightly.
When $ \delta_g \ge 0.05 $ & $ PD_L \le -0.05 $, then adjust incentives slightly.
For decreasing incentives, our adjustment can look like this:
$$ TargetIncentives_L = 1 - \min(TaperingFactor, |PD_L|) $$For increasing incentives, our adjustment can look like this:
$$ TargetIncentives_L = 1 + \min(TaperingFactor, |PD_L|) $$If making a slight adjustment, the $ PD_L $ term can be scaled down by a constant factor. Alternatively, we can adjust $ PD_L $ by $ \delta_g $. I.e. if the GAMM count is down 2% in the last 90 days, and we need another 5% reduction in TVL, we can adjust incentives by 3% and assume the trend in GAMM velocity will continue.
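Putting the direction, tapering, and GAMM velocity pieces together, the adjustment logic might look like the sketch below (the 5% threshold and the 0.5 velocity scaling factor are illustrative choices):
def calc_incentive_adjustment(incentivized_liquidity: float,
                              target_incentivized_liquidity: float,
                              gamm_velocity: float,
                              tapering_factor: float = 0.10,
                              threshold: float = 0.05,
                              velocity_scale: float = 0.5) -> float:
    """Return a multiplier to apply to an LP's current incentives."""
    # Edge case: no incentivized liquidity is needed, so decrease.
    if target_incentivized_liquidity == 0.0:
        return 1 - tapering_factor
    pd_l = ((incentivized_liquidity - target_incentivized_liquidity)
            / target_incentivized_liquidity)
    # Close enough to target: no adjustment.
    if abs(pd_l) < threshold:
        return 1.0
    magnitude = min(tapering_factor, abs(pd_l))
    # If GAMMs are already trending in the desired direction, soften the change.
    if ((pd_l > 0 and gamm_velocity <= -threshold)
            or (pd_l < 0 and gamm_velocity >= threshold)):
        magnitude *= velocity_scale
    return 1 - magnitude if pd_l > 0 else 1 + magnitude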
Recommendation: Account for GAMM velocity when adjusting incentives. A simple scaling factor should suffice.
Pros:
Cons:
With the gist of our algorithm defined, we must ensure that wash trading is not a profitable/realistic endeavor. Let's start w/ our 95th percentile target swap.
To increase incentives in an LP, the wash trader must calculate the max target TVL (1.1 * current TVL) of their LP. Then they must identify the necessary swap size (~1/398th of the max target TVL) and make as many swaps as necessary to set the 95th percentile of swaps to their desired swap size.
Let's use LP #1 as a starting point.
query = """
SELECT
POOL_IDS[0] AS pool_id,
FROM_CURRENCY AS denom,
COUNT(*) AS n_swaps
FROM
osmosis.core.FACT_SWAPS swaps
WHERE
TX_SUCCEEDED
AND POOL_IDS[0] = '1'
AND FROM_CURRENCY <> TO_CURRENCY
AND BLOCK_TIMESTAMP >= DATEADD(day, -90, GETDATE ())
GROUP BY
POOL_IDS[0],
FROM_CURRENCY;
"""
swap_counts = pd.DataFrame(sdk.query(query).records).set_index('pool_id')
swap_counts
| pool_id | denom | n_swaps |
| --- | --- | --- |
| 1 | ibc/27394FB092D2ECCD56123C74F36E4C1F926001CEAD... | 307400 |
| 1 | uosmo | 116019 |
At the time of writing, the Osmo side of LP #1 has the fewest swaps at 116,230. The 95th percentile of swaps is roughly the top 5,812 swaps.
print(1.1 * target_tvls.loc[target_tvls.index == 1].current_tvl)
print(1.1 * target_tvls.loc[target_tvls.index == 1].current_tvl / 398)
pool_id
1    73,928,163.859
Name: current_tvl, dtype: float64
pool_id
1    185,749.1554246231
Name: current_tvl, dtype: float64
Our target TVL is 65,446,698.67 USD w/ a target swap size of 164,438.95 USD, which is about 198,119.22 Osmos. The inequality for the number of wash swaps needed is $ \frac{WashSwaps_t}{Swaps_t + WashSwaps_t} \ge 0.05 $. This is incomplete though, as we need to account for swaps already above our wash swap value.
How many swaps are already above this number?
query = """
SELECT
POOL_IDS[0] AS pool_id,
FROM_CURRENCY AS denom,
COUNT(*) AS n_swaps
FROM
osmosis.core.FACT_SWAPS swaps
WHERE
TX_SUCCEEDED
AND POOL_IDS[0] = '1'
AND FROM_CURRENCY = 'uosmo'
AND FROM_AMOUNT / POW(10, FROM_DECIMAL) >= 198119.22
AND FROM_CURRENCY <> TO_CURRENCY
AND BLOCK_TIMESTAMP >= DATEADD(day, -90, GETDATE ())
GROUP BY
POOL_IDS[0],
FROM_CURRENCY;
"""
pd.DataFrame(sdk.query(query).records).set_index('pool_id')
| pool_id | denom | n_swaps |
| --- | --- | --- |
| 1 | uosmo | 8 |
Eleven swaps already meet our criteria. The adjusted inequality is:
$$ \frac{WashSwaps_t - EligibleSwaps_t}{Swaps_t - EligibleSwaps_t + WashSwaps_t} \ge 0.05 $$Simply put, we pretend as if the 11 eligible swaps are part of our wash trading process. So we subtract them from both the swap count and the wash swap count.
Note that we used the 95th percentile, so p is 0.95 and 1 - p is 0.05. If one wishes to use a different percentile later, this can be generalized to:
$$ WashSwaps_t \ge \frac{1 - p}{p}Swaps_t + EligibleSwaps_t $$
def calc_num_wash_swaps(
swaps: int, eligible_swaps: int, percentile: float) -> int:
"""Calculate the smallest integer that solves the wash trading inequality.
Args:
swaps: Number of swaps performed in the LP.
eligible_swaps: Swaps performed in the LP greater than the target.
percentile: The target percentile of swaps.
Returns:
Number of wash swaps needed to make the pth percentile of swaps equal
to our target given s swaps already in the LP and e swaps at or
above the target.
"""
return math.ceil((1 - percentile) * swaps / percentile + eligible_swaps)
calc_num_wash_swaps(116230, 11, 0.95)
6129
6,129 swaps is a pretty large number of swaps! Remember that swap fees are 0.2%, which adds up quickly.
print(f'The cost of wash trading is ${round(6129 * 0.002 * 164438.95, 2)}.')
The cost of wash trading is $2015692.65.
~2 Million dollars is fairly high. This also assumes the person owns enough Osmos to never need to swap back, which in this case is impossible (198,119.22 Osmos * 6,129 is larger than the number of Osmos in supply at the time of writing). There's always a catch though. If one is wash trading, it's because they have equity in the LP. We must now solve their profitability inequality.
Let $ E_L $ = the % equity the wash trader(s) have in the LP. Let $ C_L $ = the swap fee cost of wash trading.
$$ C_L * (1 - E_L) \le 0.1 * CurrentIncentives_L * E_L $$

Basically, you wash trade to increase the incentives by 10% (our tapering factor), but you only receive E percent of the increased incentives. That's being generous and assuming the increase in incentives will not increase total GAMMs at all. Let's rearrange this equation to find $ E_L $.
$$ C_L * (1 - E_L) \le E_L * 0.1 * CurrentIncentives_L $$
$$ C_L - C_L * E_L \le E_L * 0.1 * CurrentIncentives_L $$
$$ C_L \le 0.1 * CurrentIncentives_L * E_L + C_L * E_L $$
$$ C_L \le (0.1 * CurrentIncentives_L + C_L) * E_L $$
$$ \frac{C_L}{0.1 * CurrentIncentives_L + C_L} \le E_L $$
$$ E_L \ge \frac{C_L}{0.1 * CurrentIncentives_L + C_L} $$

We know the Atom/Osmo LP receives about 26,968 Osmos/day at the time of writing, worth 22,383.44 USD. The 10% increase would then be worth 2,238.34 USD/day. Over 90 days (the timeframe of the analysis is how long the wash trading remains effective), that's 201,450.60 USD in increased incentives. Our $ C_L $ is 2,015,692.65 USD.
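As a quick sanity check on those figures (the variable names here are just for illustration; the ~0.83 USD/Osmo price is implied by the 26,968 Osmo / 22,383.44 USD quote above):
daily_incentives_usd = 22383.44  # ~26,968 Osmos/day at ~0.83 USD/Osmo.
tapering_factor = 0.1            # Max incentive increase from wash trading.
timeframe_days = 90              # Timeframe of the analysis.

daily_increase_usd = round(tapering_factor * daily_incentives_usd, 2)  # 2,238.34
total_increase_usd = daily_increase_usd * timeframe_days               # 201,450.60
wash_cost_usd = 2015692.65                                             # C_L from above.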
def calc_minimum_wash_equity(cost: float, incentives: float) -> float:
  """Calculate the minimum % equity a wash trader needs in an LP to profit.

  Args:
    cost: The swap fee USD cost of wash trading.
    incentives: The increase in incentives in USD from wash trading.

  Returns:
    min_equity: Minimum equity required to profitably wash trade.
  """
  return cost / (incentives + cost)
100 * round(calc_minimum_wash_equity(2015692.65, 201450.60), 3)
90.9
You need about 90.9% of the LP to profitably wash trade. Not exactly a realistic endeavor. Note that 0.1 is our tapering factor, a parameter, and CurrentIncentives_L multiplies daily incentives by our timeframe parameter. So we can rewrite this as $ T * CurrentDailyIncentives_L * t $, where T is our tapering factor and t is the number of days used in the analysis. This is useful in case one wishes to test for wash trading against a wider variety of parameters.
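To make that parameterization concrete, here is a hedged sketch (the wrapper name is mine, not part of the production algorithm) that exposes T and t as explicit parameters around calc_minimum_wash_equity:
def calc_minimum_wash_equity_general(
    cost: float, daily_incentives_usd: float,
    tapering_factor: float = 0.1, timeframe_days: int = 90) -> float:
  """Minimum wash trading equity w/ the tapering factor and timeframe exposed.

  A sketch only: the incentive gain is T * CurrentDailyIncentives_L * t,
  expressed in USD, plugged into calc_minimum_wash_equity.
  """
  incentive_gain = tapering_factor * daily_incentives_usd * timeframe_days
  return calc_minimum_wash_equity(cost, incentive_gain)

# Roughly reproduces the Atom/Osmo example above (~0.909).
calc_minimum_wash_equity_general(2015692.65, 22383.44)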
It's also a good idea to check an LP w/ a smaller swap count, just to be safe.
query = f"""
SELECT
POOL_IDS[0] AS pool_id,
FROM_CURRENCY AS denom,
COUNT(*) AS n_swaps
FROM
osmosis.core.FACT_SWAPS swaps
WHERE
TX_SUCCEEDED
AND CAST(POOL_IDS[0] AS INT) IN {tuple(incentivized_pools)}
AND FROM_CURRENCY <> TO_CURRENCY
AND BLOCK_TIMESTAMP >= DATEADD(day, -90, GETDATE ())
GROUP BY
POOL_IDS[0],
FROM_CURRENCY
HAVING
COUNT(*) >= 20 * 90 -- Note the minimum swap requirement.
ORDER BY
n_swaps;
"""
pd.DataFrame(sdk.query(query).records).set_index('pool_id').head(10)
pool_id | denom | n_swaps
--- | --- | ---
625 | uosmo | 1844
7 | ibc/7C4D60AA95E5A7558B0A364860979CA34B7FF8AAF2... | 1939
833 | uosmo | 2010
42 | ibc/1DCC8A6CB5689018431323953344A9F6CC4D0BFB26... | 2065
577 | uosmo | 2136
553 | ibc/9989AD6CCA39D1131523DB0617B50F644208116229... | 2179
634 | ibc/65381C5F3FD21442283D56925E62EA524DED8B6927... | 2182
586 | uosmo | 2288
2 | uosmo | 2483
463 | ibc/1DC495FCEFDA068A3820F903EDBD78B942FBD204D7... | 2494
Our lucky winner is LP #833, the stOSMO/OSMO LP, w/ 1,836 swaps.
print(1.1 * target_tvls.loc[target_tvls.index == 833].current_tvl)
print(1.1 * target_tvls.loc[target_tvls.index == 833].current_tvl / 398)
pool_id
833    3,392,051.058
Name: current_tvl, dtype: float64
pool_id
833    8,522.741351758794
Name: current_tvl, dtype: float64
We're looking at 7,409.40 USD for a target swap, about 8,926.99 Osmos. Let's calculate our number of eligible swaps.
query = """
SELECT
POOL_IDS[0] AS pool_id,
FROM_CURRENCY AS denom,
COUNT(*) AS n_swaps
FROM
osmosis.core.FACT_SWAPS swaps
WHERE
TX_SUCCEEDED
AND POOL_IDS[0] = '833'
AND FROM_CURRENCY = 'uosmo'
AND FROM_AMOUNT / POW(10, FROM_DECIMAL) >= 8926.99
AND FROM_CURRENCY <> TO_CURRENCY
AND BLOCK_TIMESTAMP >= DATEADD(day, -90, GETDATE ())
GROUP BY
POOL_IDS[0],
FROM_CURRENCY;
"""
pd.DataFrame(sdk.query(query).records).set_index('pool_id')
denom | n_swaps | |
---|---|---|
pool_id | ||
833 | uosmo | 9 |
print(f'Number of wash swaps required: {calc_num_wash_swaps(1836, 9, 0.95)}.')
print(
    f'The cost of wash trading is $'
    # Note that this LP has 0.3% swap fees.
    f'{round(calc_num_wash_swaps(1836, 9, 0.95) * 0.003 * 7409.40, 2)}.')
Number of wash swaps required: 106.
The cost of wash trading is $2356.19.
Only 106 wash swaps needed, a bit concerning. This LP receives 188 Osmos/Day in incentives.
Recall our formula $ T * CurrentDailyIncentives_L * t $, and let's plug it in.
# 0.1 is the tapering factor, 188 is Osmos/day in incentives, ~0.83 is the
# USD price of an Osmo, and 90 is the timeframe in days.
calc_minimum_wash_equity(2356.19, 0.1 * 188 * 0.83 * 90)
0.6265546263179588
About 62.7% of the LP is needed to profit from wash trading. A lower number than before, but still a strong majority of the LP.
Recommendation: With minimum swap requirements in place, along w/ swap fees, wash trading is not a practical endeavor. Shifting the whale swaps is a bit trickier to calculate, but keep in mind the calculations shown assume the wash trader never has to perform extra swaps (i.e. swap Osmo -> stOsmo -> Osmo so they can continue wash trading). They also don't account for slippage during the swapping process; slippage would grow with each swap performed, making wash trading considerably more costly.
Pros:
Cons:
With the algorithm's pieces fleshed out, let's discuss the long term. Incentives are meant to be a short-term way of maintaining liquidity. When can we go w/out any incentives? When Osmosis has sufficient volume to support healthy liquidity, in both bear and bull markets, incentives can become a thing of the past. With about 5.5 years of emissions left, plus a community pool able to fund incentives, this is a practical endeavor.
Note that most LPs have a swap fee of 0.2%, so $1,000 of volume provides $2 in swap fees. Liquidity currently holds at about a 25% APR. With a target tvl of about 11.5 million USD, an LP would require 2,875,000 USD in annual swap fees to support itself. That's about 1.44 Billion USD in annual volume; with multi-hop discounts, that number could be up to 2.88 Billion. This places our daily volume requirement between 3.95 Million and 7.89 Million USD. At the time of writing, the Atom / Osmo LP has 5 Million USD in volume over the last day!
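A minimal sketch of that arithmetic, assuming swap fees must pay roughly the quoted 25% APR on the target tvl and that multi-hop routing halves the effective fee (the function and parameter names are mine):
def required_daily_volume_usd(
    target_tvl_usd: float, supporting_apr: float = 0.25,
    swap_fee: float = 0.002, multi_hop_discount: float = 0.5) -> tuple:
  """Daily volume needed for swap fees alone to support a target tvl.

  Returns the (full fee, multi-hop discounted fee) daily volume range.
  """
  annual_fees_needed = supporting_apr * target_tvl_usd
  low = annual_fees_needed / swap_fee / 365
  high = annual_fees_needed / (swap_fee * multi_hop_discount) / 365
  return low, high

# Roughly (3.94M, 7.88M) USD/day for an 11.5M USD target tvl.
required_daily_volume_usd(11_500_000)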
For the Atom / Osmo and USDC / Osmo LPs, we're not too far off. Some smaller LPs are closer to their goals than you'd think as well; see the target incentivized tvl numbers. As more and more traders are drawn to Osmosis, LPers can trust swap fees to provide a sufficient ROI. The con to this approach is that relying on swap fees can cause liquidity to come and go w/ market conditions, and continuously having to draw in trading volume can be difficult.
An alternative is for Osmosis to start investing in protocol owned liquidity. With a community pool, there's a relatively simple way to go about this. We can use Osmos in the community pool to join LPs and send the GAMMs back to the community pool. The protocol then owns a portion of the LP and will accrue part of the swap fees as well. The protocol doesn't need to profit off LPing; the liquidity draws users to our other products (lending, margin, etc...), and those products become our profit centers. It would be costly to single-asset join LPs w/ just Osmos, so Osmosis could enlist other protocols to join us. For example, instead of providing external incentives, the Stargaze community could opt to match our Osmos and join the Stars/Osmo LP w/ us. Each protocol could then retain half the resulting GAMMs in their community pools. This provides an alternative mechanism for protocols to have sources of liquidity.
There are a few concerns w/ this. One, you'd need a high level of confidence in the other token to undergo such a process. Adding liquidity to an LP increases the influence that asset has on the Osmo price. If we err and invest in liquidity for a token that goes to 0, we'll pay a high cost in community Osmos, and that'll hurt the value of an Osmo later, as we've introduced more Osmos into circulating supply from the community pool.
Recommendation: Sticking to the swap fee long term plan is probably ideal for now. Protocol Owned Liquidity has its merits, but it would require some serious initial effort to set up and to select assets to start with. We've seen incentivized LPs suffer significant impermanent loss due to underperforming assets. The community would need a process far more selective than LP incentivizing to whitelist something for actual protocol ownership, and that doesn't exist at the moment.
With everything said, what is the immediate impact of using this incentive optimization process? The first set of changes would save roughly 7,000 Osmos/day in incentives. Focusing on the Osmos saved isn't ideal though, as some LPs will see increases in incentives received, and that isn't a bad thing.
The primary benefit to switching incentive processes is that we've now gained the ability to define healthy liquidity levels and a point in time where incentives are no longer needed in an LP. Also, this process' approval will allow for the formation of an OGP-backed Incentives Working Group, which will continue to fine-tune and improve this process.
One of the hardest things in managing and improving a product is having the proper metrics for assessing it. This work's primary focus was on defining and measuring the health of the LPs from the perspective of traders, their primary users.
This research is far from done. There's a series of things we'd like to continue researching and building out over the remainder of this grant period - and onward w/ the Incentives Working Group.
As time passes, Osmosis churns LPers. This is natural w/ any product, but in our case it has an interesting side effect. Measuring the number of GAMMs that belong to inactive users could be useful for minimizing incentives. It's risky, as you never know if they'll come back, but we could potentially treat inactive GAMMs similarly to unbonded GAMMs.
This would be a rough estimation:
$$ IncentivizedLiquidity_L = CurrentLiquidity_L * (1 - PercUnbonded_L - PercUnbonding_L - PercInactive_L) $$

As bonded inactive GAMMs will still receive incentives, a direct subtraction is not optimal. Further research on the amount of inactive GAMMs, as well as a proper way of accounting for them, needs to be done.
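A minimal sketch of that rough estimate (the function name is mine, and, per the note above, this over-corrects since bonded inactive GAMMs still receive incentives):
def estimate_incentivized_liquidity(
    current_liquidity: float, perc_unbonded: float,
    perc_unbonding: float, perc_inactive: float) -> float:
  """Rough estimate of the liquidity worth incentivizing in an LP.

  All percentages are expressed as fractions of the LP's GAMMs.
  """
  return current_liquidity * (
      1 - perc_unbonded - perc_unbonding - perc_inactive)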
Tracking the Volume/Liquidity and Liquidity/Incentives ratios of LPs over time is an important dimension to this. Ideally, we'd like to see volume grow while liquidity decreases towards target tvl levels. At some point, if volume continues to grow, liquidity should start to follow, as swap fees draw LPers in regardless of incentives.
On the Liquidity/Incentives side, we'd like to be paying less and less for liquidity. Measuring what we pay for liquidity in individual LPs, as well as across the DEX, is important for determining the efficiency of LPs. These metrics aren't included in this version of the algorithm, as it hones in on achieving target tvl regardless of incentive efficiency - similar to the point made about elasticity in an earlier section.
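For reference, a small sketch of the two ratios described here (the names are illustrative only; these are not yet part of the algorithm):
def lp_efficiency_ratios(
    daily_volume_usd: float, liquidity_usd: float,
    daily_incentives_usd: float) -> dict:
  """Volume/Liquidity and Liquidity/Incentives ratios for a single LP.

  Higher volume per unit of liquidity, and more liquidity per incentive
  dollar, both indicate a more efficient LP.
  """
  return {
      'volume_per_liquidity': daily_volume_usd / liquidity_usd,
      'liquidity_per_incentive_usd': liquidity_usd / daily_incentives_usd,
  }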
A future version of this algorithm can incorporate those values when looking to 'discover' new healthy tvl levels.
Liquidity draws volume, to an extent. Deeper LPs draw larger trades, i.e. more volume. The cost of incentivizing fairly deep LPs is steep, as seen in our 1% Slippage for Max Swaps examples. With our set of existing LPs, there isn't much need to worry about how to properly bootstrap LPs. But the next version of this algorithm will ideally improve on the bootstrapping section and allow for sharper increases of incentives to newer LPs, in an attempt to locate local maxima in the volume/liquidity curve.
Stableswap LPs are live! The concepts in this algorithm extend to Stableswap, and to Concentrated Liquidity as we'll discuss next, but there are some caveats. Slippage in a Stableswap is based on the size of the swap as well as the ratio of tokens in the LP. Calculating the slippage is a different process, but doable. The only point of concern is that it would be ideal to keep the ratio of tokens in a Stableswap LP even, as this keeps slippage low for all the assets in the LP. Unfortunately, we cannot incentivize providing liquidity for a particular asset in an LP, so it could be costly to incentivize Stableswap LPs that are highly imbalanced.
Concentrated Liquidity will help increase the capital efficiency of LPs. This would allow for lower target tvls and a quicker route to 0 incentives. Some research into the price ranges users select, and how to map that back to liquidity, will be necessary before an exact process can be created. As these LPs are more efficient than Balancer LPs, this isn't a point of immediate concern. We can use target tvl numbers from the Balancer process while that research is done. This gives time to observe how market makers behave w/ our version of Concentrated Liquidity and then set target tvl levels accordingly.
The big question/hope here is that it's so efficient we can accommodate whale swaps w/ even lower slippage, and thus have simpler liquidity constraints. In an ideal world, we could use the 1% slippage for max swap methodology to set target tvl levels. In practice, wash trading concerns prevent this, but something similar can be created that is more resistant to wash trading. The more traders we can provide low slippage to, the more volume we can capture.
It's been an honor and privilege to work on this for the Osmosis Community. While this document doesn't contain all the research efforts made, I hope it covers the primary topics of interest for users evaluating this algorithm. If you have any questions about the research or the resulting algorithm, feel free to reach out to its author on Telegram, @Hathor93.