Kyber Network Advises Liquidity Providers to Withdraw Funds Amid Vulnerability, Token Drops 2%

Total value locked in Kyber's Elastic product has plunged to $61 million from $108 million a day earlier.

Updated May 9, 2023, 4:12 a.m. Published Apr 17, 2023, 12:23 p.m.
Kyber Network warns liquidity providers to withdraw funds. (Towfiqu Barbhuiya/Unsplash)

Decentralized-finance protocol Kyber Network has advised liquidity providers on its Elastic product to withdraw funds after it found a potential vulnerability.

Kyber confirmed the potential flaw in a tweet, noting that no funds had been lost and that the KyberSwap Classic product is unaffected. The protocol's native token, KNC, dropped 2% following the tweet.

The Elastic product had $108 million in total value locked on Sunday, but that figure has since dropped to $61 million, according to DefiLlama.

Vulnerabilities and exploits have been rife across the DeFi ecosystem this year, with Euler Finance losing almost $200 million in an attack last month.

Kyber Network was hit with a $265,000 exploit in 2022.

The protocol confirmed that investigations are ongoing.


More For You

Specialized AI detects 92% of real-world DeFi exploits

New research claims specialized AI dramatically outperforms general-purpose models at detecting exploited DeFi vulnerabilities.

What to know:

  • A purpose-built AI security agent detected vulnerabilities in 92% of 90 exploited DeFi contracts, covering $96.8 million in exploit value, compared with 34% of contracts and $7.5 million for a baseline coding agent built on the same underlying GPT-5.1 model.
  • The gap came from domain-specific security methodology layered on top of the model, not differences in core AI capability, according to the report.
  • The findings come as prior research from Anthropic and OpenAI shows AI agents can execute end-to-end smart contract exploits at low cost, accelerating concerns that offensive AI capabilities are scaling faster than defensive adoption.