This document contains the updates to the incentive process since that paper was published. The primary changes that have occurred are:
import math
from dataclasses import asdict, dataclass
from enum import Enum
from functools import reduce
from typing import Any, Callable, Collection, Mapping, Text

import requests
import plotly.express as px

# No trailing slash; the query paths below each start with '/'.
OSMO_LCD_ENDPOINT = 'https://lcd.osmosis.zone'
Originally, constraints were defined as a maximum slippage for a given swap size. For example, our retail swap constraint originally specified 0.1% slippage for the 95th percentile of swaps. This constrained our ability to define constraints outside the realm of swaps. With TWAP pricing oracles being used by lending protocols and other DeFi apps, this definition had to be expanded:
@dataclass
class LiquidityConstraint:
    """Wrapper for constraints that holds arguments and callable."""

    # Function that returns minimum liquidity for LPs.
    constraint_func: Callable[..., Mapping[int, float]]
    # Keyword arguments for constraint_func.
    args: Mapping[Text, Any]

    def calc_constraint(self) -> Mapping[int, float]:
        """Calculate constraint values using predefined method and args."""
        return self.constraint_func(**self.args)
Now, any constraint can be defined as long as its function outputs a mapping of pool_id to minimum liquidity. We can run a series of constraints and then pick the largest constraint for each pool_id. This way we have the smallest viable liquidity for an LP that meets all our constraints.
def _select_max_constraint(
    liquidity_constraints: Collection[Mapping[int, float]]
) -> Mapping[int, float]:
    """Converts collection of constraints to the max constraint for each LP."""

    def reducer(result, record):
        for pool_id, liquidity in record.items():
            result[pool_id] = max(result.get(pool_id, 0.0), liquidity)
        return result

    # Start from an empty dict so the first constraint isn't mutated in place.
    return reduce(reducer, liquidity_constraints, {})
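As a toy illustration of how the pieces fit together (the pool ids and dollar values below are invented, and the helper is redefined inline so the snippet stands alone):

```python
from functools import reduce


def select_max_constraint(liquidity_constraints):
    # Mirrors _select_max_constraint above: keep the largest
    # minimum-liquidity requirement per pool_id.
    def reducer(result, record):
        for pool_id, liquidity in record.items():
            result[pool_id] = max(result.get(pool_id, 0.0), liquidity)
        return result
    return reduce(reducer, liquidity_constraints, {})


# Hypothetical constraint outputs (pool_id -> minimum liquidity in USD).
retail_swaps = {1: 5_000_000.0, 678: 2_000_000.0}
twap_resistance = {1: 9_000_000.0, 704: 4_000_000.0}

# Pool 1 keeps the stricter requirement; the others keep their only one.
print(select_max_constraint([retail_swaps, twap_resistance]))
```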
Initial efforts to optimize Stableswap LPs have been implemented. Originally, Stableswap LPs shared 4% of the incentives emitted to LPs, i.e., if 24 Osmos were proposed for the Balancer LPs, an additional Osmo (1 / (24 + 1) = 1 / 25 = 4%) would be split among the Stableswap LPs.
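The old fixed split can be sketched as a one-line arithmetic check, using the numbers from the example above:

```python
# Pre-update rule: Stableswap LPs share a fixed cut on top of the
# Balancer incentives.
balancer_osmos = 24  # Osmos proposed for Balancer LPs (example above).
stableswap_share = 1 / (balancer_osmos + 1)
print(f'{stableswap_share:.0%}')  # → 4%
```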
Osmosis updated its Stableswap incentive calculations to use a process similar to the one used for Balancer pools. The process demonstrates that a Stableswap LP can achieve lower slippage (0.2% vs 1%) than its Balancer counterpart with roughly 47% of the liquidity. We can roughly replicate that process by having our swap multiplier process check the pool type and apply a custom calculation for each pool type:
class PoolType(Enum):
    """Identifies the underlying mechanism for an LP."""

    BALANCER = '/osmosis.gamm.v1beta1.Pool'
    STABLESWAP = '/osmosis.gamm.poolmodels.stableswap.v1beta1.Pool'
    # TBD once Concentrated Liquidity is live.
    CONCENTRATED = ''
Here's a simple example where we adjust the balance swap multiplier to get a stableswap multiplier:
def calculate_swap_multiplier(
    pool_type: PoolType, slippage: float, swap_fee: float) -> float:
    """Calculate the swap multiplier required to achieve target slippage."""
    multiplier = calc_balancer_swap_multiplier(slippage, swap_fee)
    if pool_type == PoolType.STABLESWAP:
        # Stableswap LPs need ~47% of the Balancer liquidity for the
        # same slippage target.
        multiplier *= 0.47
    return multiplier
Community feedback on the swap-data-based constraints was generally positive. There was a general sentiment that the constraints should target lower slippage levels to be competitive w/ other DEXes. Based on community feedback and suggestions, the target slippage levels were reduced to 0.25% and 2.5% for retail and whale swaps respectively. This increases the minimum liquidity level for LPs, but allows for a more competitive experience for users.
With the introduction of Mars' lending outpost, Osmosis has entered a new era. DeFi apps are allowing users to do more w/ their assets than LP, buy, or sell. This comes with an increased reliance on our on-chain pricing oracles. Security is paramount to gaining and maintaining users, but it can be costly, as liquidity requirements are steep. We advise DeFi apps to stick to LPs that already have high volume, so incentivizing them isn't too costly for the protocol.
For the initial TWAP resistance constraint, we selected the internally incentivized LPs currently used by Mars:
Typically, TWAP manipulation attacks involve manipulating the price reported by a TWAP oracle to extract value from DeFi apps relying on that oracle. Using a lending app as an example, an attacker could:
Fortunately, TWAP oracles are not easy to manipulate. The spot price of an asset has to be moved significantly without arbitrage bots interfering. When assessing the viability of attacks, we typically assume some period of time without arbitrage and then check whether the capital (i.e. value of assets) required to conduct such an attack is feasible.
For this constraint, the following attack scenario will be used to determine minimum liquidity requirements:
Osmosis offers both arithmetic and geometric mean TWAPs. The arithmetic mean of a set of numbers is always greater than or equal to its geometric mean and is more susceptible to manipulation by large outliers. For this reason, we will focus on the geometric mean for this analysis. While Osmosis doesn't directly use this formula (it uses an alternative method documented here), the geometric TWAP of an asset can be calculated as:
$$ TWAP = \prod_{i=1}^n \sqrt[n]{p_i} $$Where $ n $ is the number of datapoints in the timeframe (think spot price per block), and $ p_i $ is the spot price at the $ i $th block.
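As a quick numeric sketch of this formula (the per-block prices are invented for illustration; summing logs instead of multiplying roots avoids float overflow when $ n $ is large):

```python
import math


def geometric_twap(spot_prices):
    """Geometric mean of per-block spot prices."""
    n = len(spot_prices)
    # exp(mean(log(p))) is algebraically the product of the nth roots.
    return math.exp(sum(math.log(p) for p in spot_prices) / n)


# Five blocks at $10 and one manipulated block at $20: the single
# outlier moves the geometric TWAP less than an arithmetic mean would.
print(round(geometric_twap([10.0] * 5 + [20.0]), 2))  # → 11.22
```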
Now, imagine an attacker wishes to manipulate the TWAP of an asset for a specific LP. They coordinate their attack such that m of the n blocks in the TWAP timeframe will not have any arbitrage transactions (think validators coordinating to block swap txs or the attacker spamming txs to fill up the queue of transactions in the mempool).
The attacker has to calculate a desired TWAP for the asset. This TWAP enables a profitable attack as described before. To achieve that TWAP, they must hold a manipulated spot price, q, for the duration of their attack. For the rest of the TWAP timeframe, we will assume the spot price is p, its pre-attack price.
Let's make some substitutions and rearrange our earlier equation:
Let $ p $ be the current spot price of the asset.
Let $ TWAP $ be the desired TWAP of the asset.
Let $ q $ be the manipulated spot price of the asset (i.e. the spot price that, if held for the duration of the attack, will move the TWAP from $ p $ to the desired $ TWAP $).
Let $ n $ be the number of blocks in the TWAP time period.
Let $ m $ be the number of manipulated blocks (i.e. no arbs) in the TWAP time period.
$$ TWAP = \prod_{i=1}^n \sqrt[n]{p_i} $$The price will be $ p $ for $ n-m $ blocks and $ q $ for $ m $ blocks:
$$ TWAP = \sqrt[n]{p^{(n-m)}q^m} $$$$ \frac{TWAP}{\sqrt[n]{p^{(n-m)}}} = \sqrt[n]{q^m} $$We raise both sides to the nth power to remove the root:
$$ \frac{TWAP^n}{p^{(n-m)}} = q^m $$Now we take the mth root of both sides to isolate q:
$$ \sqrt[m]{\frac{TWAP^n}{p^{(n-m)}}} = q $$Switch the sides of our equation and we get a formula for q:
$$ q = \sqrt[m]{\frac{TWAP^n}{p^{(n-m)}}} $$Now, let's simplify this a bit further by redefining TWAP in terms of p. As they are both positive real numbers, we know there exists some number, k, such that $ TWAP = kp $.
$$ q = \sqrt[m]{\frac{(kp)^n}{p^{(n-m)}}} $$$$ q = \sqrt[m]{\frac{k^np^n}{p^{(n-m)}}} $$Using our law of exponents to simplify the two p terms in our fraction:
$$ q = \sqrt[m]{k^np^m} $$The mth root of p to the mth power is simply p:
$$ q = \sqrt[m]{k^n}p $$The mth root of a number is the same as that number raised to the 1/mth power so:
$$ q = k^{\frac{n}{m}}p $$We could use the root term for k instead of the fractional exponent, but the ratio of n:m will come in handy later. Our simplifications have allowed us to define q as a constant, $ k^{\frac{n}{m}} $, times p. This lets us think of our desired spot price as a proportional change in the current price of the asset.
We're assuming a 30 minute TWAP window w/ 3 minutes of no arbitrage. Assuming uniform block times, we can simplify this to say 10% of the TWAP window has no arbitrage (i.e. $ m = \frac{n}{10} $). Let's simplify our formula one more time:
$$ q = k^{\frac{n}{\frac{n}{10}}}p $$$$ q = k^{10}p $$For our attack simulation, q is the 10th power of k times p. Recall that we defined k via $ TWAP = kp $. For the sake of demonstration, let's say our attacker wishes to increase the TWAP price of our asset by 40% (i.e. $ k = 1.40 $).
$$ q = (1 + 0.40)^{10}p = 1.40^{10}p \approx 28.93p $$A 40% price manipulation when 10% of the TWAP timeframe has no arbitrage requires an approximately 28.93 times increase in the spot price of an asset! That's easier said than done, but attackers tend to have access to large amounts of capital.
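We can sanity-check that figure directly from the simplified formula:

```python
k = 1.40       # Desired TWAP as a multiple of the current spot price.
n_over_m = 10  # 30 minute window / 3 minutes without arbitrage.
required_spot_multiple = k ** n_over_m
print(round(required_spot_multiple, 2))  # → 28.93
```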
For this constraint, we are assuming the attacker has access to funds equal to half the circulating supply of Osmos. We've broken this down accordingly:
$$ capital = 0.5 \cdot p_{Osmo} \cdot Osmo_{circulating} $$$$ Osmo_{circulating} = Osmo_{supply} - Osmo_{cpool} - Osmo_{staked} + Osmo_{SFS} - Osmo_{LP} $$Note that SuperFluid-staked Osmos are subtracted twice (once as staked and once as LP'd), so we add them back once in our formula and implementation.
def calc_attack_capital(lcd_endpoint: Text) -> float:
    """Calculate the USD value of half the circulating Osmos.

    Circulating Osmos are defined as Osmos that are not staked,
    in the community pool, or in an LP.
    """
    # Query the number of Osmos in supply.
    total_supply = float(requests.get(
        f'{lcd_endpoint}/cosmos/bank/v1beta1/supply/uosmo', timeout=60
    ).json()['amount']['amount']) / 10 ** 6
    # Query the number of Osmos in the community pool.
    cpool_balances = requests.get(
        f'{lcd_endpoint}/cosmos/distribution/v1beta1/community_pool',
        timeout=60).json()['pool']
    cpool_osmos = float(next(
        balance for balance in cpool_balances
        if balance['denom'] == 'uosmo')['amount']) / 10 ** 6
    # Query the number of Osmos staked with bonded validators.
    validators = requests.get(
        f'{lcd_endpoint}/cosmos/staking/v1beta1/validators'
        '?status=BOND_STATUS_BONDED&pagination.limit=150',
        timeout=60).json()['validators']
    staked_osmos = sum(
        float(validator['tokens']) / 10 ** 6 for validator in validators)
    # SuperFluid-staked Osmos are subtracted twice (once as staked, once
    # as LP'd), so add them back once.
    sfs_osmos = float(requests.get(
        f'{lcd_endpoint}/osmosis/superfluid/v1beta1/all_superfluid_delegations',
        timeout=60).json()['total_delegations']) / 10 ** 6
    # Query the number of Osmos in LPs and the price of an Osmo.
    osmo_liquidity = requests.get(
        'https://api-osmosis.imperator.co/tokens/v2/OSMO',
        timeout=60).json()[0]
    osmo_price = osmo_liquidity['price']
    osmos_in_lp = osmo_liquidity['liquidity'] / osmo_price
    # Attack capital =
    #     0.5 * p_osmo * (O_supply - O_cpool - O_staked + O_sfs - O_lped)
    return (
        0.5 * osmo_price * (total_supply - cpool_osmos - staked_osmos
                            + sfs_osmos - osmos_in_lp))
attack_capital = calc_attack_capital(OSMO_LCD_ENDPOINT)
print(f'${round(attack_capital, 2):,}')
$69,858,427.72
At the time of writing, the attack capital available is roughly 70 million USD. What is the minimum liquidity required so that a ~70 million dollar swap cannot increase the spot price of an asset to 28.93 times its current value?
Looking at balancer pools and some similar work done on Uniswap/Balancer, we can use the following equation:
Let $ \delta_2 $ be the value of assets traded into the LP. Let $ R_2 $ be the value of assets within the LP. Let $ 1 + \epsilon $ be the desired change in price of asset 1.
$$ \delta_2 = R_2(\sqrt{1+\epsilon} - 1) $$In the cited works, this is a midpoint step between equations 7 and 8. Their work focuses on the cost of attack, whereas we're focused on solving for $ R_2 $ given a known attack capital ($ \delta_2 $ in this equation).
So lets do some rearranging of terms:
$$ \frac{\delta_2}{(\sqrt{1+\epsilon} - 1)} = R_2 $$$$ R_2 = \frac{\delta_2}{(\sqrt{1+\epsilon} - 1)} $$

def calc_minimum_liquidity(delta: float, change_factor: float) -> float:
    """Minimum liquidity so a swap of size delta can't move price by change_factor."""
    return delta / (math.sqrt(change_factor) - 1)
print(f'${round(calc_minimum_liquidity(attack_capital, 28.93), 2):,}')
$15,954,288.09
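As a sanity check on the rearrangement, plugging the solved $ R_2 $ back into the cited cost-of-attack form recovers the attack capital (the dollar figure is the value printed above; the helper is repeated so this snippet stands alone):

```python
import math


def calc_minimum_liquidity(delta: float, change_factor: float) -> float:
    # Same rearrangement as above, repeated so this snippet stands alone.
    return delta / (math.sqrt(change_factor) - 1)


attack_capital = 69_858_427.72  # From the run above.
r_2 = calc_minimum_liquidity(attack_capital, 28.93)
# Cost-of-attack form: delta_2 = R_2 * (sqrt(1 + epsilon) - 1).
delta_2 = r_2 * (math.sqrt(28.93) - 1)
print(abs(delta_2 - attack_capital) < 1e-6)  # → True
```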
We can generalize the process by implementing a function to calculate our change factor:
def calc_change_factor(twap_factor: float, perc_time_no_arb: float) -> float:
    """Calculate the change in spot price (as a multiple of the current spot price).

    Args:
        twap_factor: Multiple of the current spot price that gives the
            desired TWAP. Ex: spot price is $2, desired TWAP is $3, so
            twap_factor is $3 / $2 = 1.5.
        perc_time_no_arb: Fraction of the TWAP timeframe without any arbitrage.

    Returns:
        Manipulated spot price as a factor of the current spot price.
    """
    return pow(twap_factor, 1 / perc_time_no_arb)
print(f'${round(calc_minimum_liquidity(attack_capital, calc_change_factor(1.4, 0.10)), 2):,}')
$15,955,824.19
Note that this example used a 50/50 weighted LP, but it can be generalized for LPs w/ imbalanced weights.
twap_factors = [i / 10 for i in range(11, 21)]
minimum_liquidity = [
    calc_minimum_liquidity(
        attack_capital, calc_change_factor(twap_factor, 0.10))
    for twap_factor in twap_factors]
fig = px.scatter(
    x=minimum_liquidity, y=twap_factors,
    title='Impact on TWAP from Swapping ~70 Million USD into an LP',
    labels={'x': 'Pool Liquidity (USD)',
            'y': 'Price Impact (Multiple of Current TWAP)'})
fig.show()
Note the inverse relationship here. The shallower the LP, the larger the impact of a ~70 million USD swap. Deep liquidity mitigates the impact of large swaps, which lines up w/ how we think of slippage for retail and whale traders.
Using liquidation levels on Mars, we can see the following liquidation threshold for assets:
Reducing the TWAP of an asset in an LP corresponds to increasing the other asset's price. I.e. decreasing the Atom:Osmo ratio from 10:1 to 7:1 (a factor of 0.70) is the same as increasing the Osmo:Atom ratio from 0.10:1 to ~0.143:1 (a factor of 1 / 0.70 ≈ 1.43).
Thus we can invert these values and get the corresponding values to use in our calc_minimum_liquidity function:
With these values, we can get liquidity values for our constraints:
for twap_factor in (1 / 0.61, 1 / 0.75, 1 / 0.65):
    print(f'${round(calc_minimum_liquidity(attack_capital, calc_change_factor(twap_factor, 0.10)), 2):,}')

$6,444,518.62
$21,735,720.79
$9,169,541.14
Most of the changes since our last post have been behind-the-scenes improvements to the algorithm. Broadening the definition of constraints allows us to account for a variety of DeFi apps and their needs when assessing liquidity requirements for an LP. The TWAP manipulation resistance constraint is one such addition.
Moving forward, the plan remains to form a working group and prepare to add Concentrated Liquidity to the algorithm once its implementation is finalized and launched, along w/ monitoring liquidity levels and incentives to ensure the health and growth of the protocol.