RESEARCH DATA COMBUSTION FRAMEWORK [RDCF]
Letting Research Activity Shape Our Token Supply
We're taking an innovative approach to our token economics by implementing an algorithmic burning mechanism that directly reflects the actual usage and growth of our research ecosystem. Here's why this matters:
Traditional token supplies are often arbitrary - set once at launch without truly knowing what the ecosystem will need. Instead of guessing, we're letting real research activity determine the optimal supply over time.
Our burning mechanism is uniquely tied to three key metrics that represent the heart of our research platform:
The number of datasets our community contributes
The computational analysis performed on each dataset
The rewards distributed for data contributions
This creates a natural balance where token burning is tied to platform research activity. As more users contribute data and our analysis capabilities grow, the burn rate adjusts algorithmically, but with built-in safeguards to prevent excessive burns.
Think of it as letting the token supply naturally evolve with our ecosystem's actual needs. The more our platform is used for its core purpose - advancing decentralized science - the more the token supply adjusts through this algorithmic process.
This isn't just about token economics - it's about aligning our governance token with genuine research activity. Every burn represents real contributions to science, real computational work, and real community engagement. It's DeSci in action, shaping our token's future.
Variables:
d = number of datasets per day
t = tokens paid per dataset (specific to each research project)
c = computations per dataset (specific to each research project)
T = total supply
B = daily burn amount
Z = maximum burn cap, expressed as a fraction of total supply (so the cap in tokens is T × Z)
Proposed equation for daily burn calculation:
B = (d × t × sqrt(c)) × (log₁₀(d)/20)
Let us explain how it works (a worked sketch follows these points):
Base Impact: (d × t × sqrt(c))
This reflects the day's operations: datasets contributed, tokens paid for them, and computations run on each
Using sqrt(c) instead of c directly dampens the impact of computation-heavy datasets
Flattening Factor: (log₁₀(d)/20)
Logarithmic scaling means this factor grows only slowly with dataset volume (it is 0 at one dataset per day and reaches just 0.1 at 100 datasets per day)
Division by 20 scales the factor down and keeps the maximum burn rate under control
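To make the scaling concrete, here is a minimal Python sketch of the base calculation; the sample figures (100 datasets, 50 tokens per dataset, 25 computations per dataset) are illustrative assumptions, not platform numbers.

import math

def base_daily_burn(d, t, c):
    # Activity-based burn: (d * t * sqrt(c)) * (log10(d) / 20)
    if d < 1:
        return 0.0                         # log10(d) would go negative below one dataset per day
    base_impact = d * t * math.sqrt(c)     # the day's operations
    flattening = math.log10(d) / 20        # slow-growing scaling factor
    return base_impact * flattening

# Illustrative only: 100 datasets/day, 50 tokens per dataset, 25 computations per dataset
print(base_daily_burn(100, 50, 25))        # 100 * 50 * 5 * (2 / 20) = 2500.0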
To enforce the maximum burn cap (a short sketch follows these steps):
Track cumulative burned tokens (CB)
Before each burn, verify: CB + B ≤ T × Z
If the condition fails, adjust B to: B = (T × Z) - CB
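A minimal sketch of this cap check, reusing base_daily_burn from the example above; the supply, cap fraction, and cumulative-burn figures are illustrative assumptions.

def capped_burn(b, cb, total_supply, z):
    # Clamp a proposed burn b so cumulative burns never exceed T * Z
    cap_in_tokens = total_supply * z
    if cb + b <= cap_in_tokens:
        return b                               # condition holds, burn as calculated
    return max(cap_in_tokens - cb, 0.0)        # otherwise burn only what remains under the cap

# Illustrative only: 1B total supply, 30% burn cap, 299,999,000 tokens already burned
print(capped_burn(2500.0, 299_999_000, 1_000_000_000, 0.30))   # -> 1000.0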
This approach creates a self-adjusting system where:
Burn rate grows with activity, but the square root and logarithmic terms keep that growth dampened
Higher dataset volumes don't cause excessive burns
The system naturally slows down as cumulative burns approach the maximum burn cap
Daily burns are predictable and manageable
Integrating the cap directly into the daily burn calculation gives a single equation that automatically adjusts based on how close we are to the total burn cap:
B = min[(d × t × sqrt(c)) × (log₁₀(d)/20), (T × Z - CB) × (1 - CB/(T × Z))]
How this works:
The first part remains the same: (d × t × sqrt(c)) × (log₁₀(d)/20). This calculates the activity-based burn amount for the day.
The second part: (T × Z - CB) × (1 - CB/(T × Z))
(T × Z - CB) represents the remaining tokens that can be burned
(1 - CB/(T × Z)) is a scaling factor that approaches 0 as we get closer to the cap
As CB approaches T × Z, this part of the equation naturally decreases the burn rate
The min[] function ensures we always take the smaller of the two values:
The activity-based burn calculation
The cap-adjusted maximum possible burn
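To tie it together, here is a minimal Python sketch of the combined equation, reusing the illustrative numbers from the earlier examples; the 30% cap fraction, supply, and cumulative-burn figures are assumptions for demonstration only.

import math

def daily_burn(d, t, c, cb, total_supply, z):
    # B = min[(d*t*sqrt(c)) * (log10(d)/20), (T*Z - CB) * (1 - CB/(T*Z))]
    cap = total_supply * z                        # T * Z, the cap in tokens
    activity_term = (d * t * math.sqrt(c)) * (math.log10(d) / 20) if d >= 1 else 0.0
    cap_term = (cap - cb) * (1 - cb / cap)        # shrinks toward 0 as CB nears T * Z
    return max(min(activity_term, cap_term), 0.0)

# Illustrative only: same activity as above, with 90% of the cap already burned
print(daily_burn(100, 50, 25, 270_000_000, 1_000_000_000, 0.30))
# activity term = 2,500; cap-adjusted term = 30,000,000 * 0.1 = 3,000,000 -> burn 2,500

Note that near the cap the second term shrinks faster than the simple clamp shown earlier: it equals (T × Z) × (1 - CB/(T × Z))², which is always at most T × Z - CB, so the combined equation eases burning down gradually rather than stopping abruptly at the cap.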