
How Tracking Works

The bot does not scan the whole blockchain hoping to find your wallets. It uses two complementary mechanisms — block-by-block polling for EVM chains and push webhooks for Solana — both tuned for low latency and the ability to track tens of thousands of wallets without breaking a sweat.

For Ethereum, BSC, Base, and Avalanche, a dedicated scanner per chain pulls each new block as it’s mined and inspects it.

The flow per block:

  1. Fetch the block with full transaction objects (eth_getBlockByNumber with the verbose flag).
  2. Match transactions against an in-memory index of every tracked wallet on that chain. The index is a Map keyed by lowercased addresses, so each lookup is O(1).
  3. Catch wallets that appear only in event logs — solver / intent protocols (Relay, CoW, UniswapX, 1inch Fusion) settle trades through routers, so a tracked wallet may not appear as tx.from or tx.to directly. A second pass queries Transfer-event logs by topic for those cases.
  4. Deduplicate against a rolling set of recently processed transactions (two generations of 2,500 entries each — old entries rotate out).
  5. Classify the transaction — bridge, swap, transfer, contract interaction, NFT, approval, contract deployment, failed.
  6. Hand off to the notification dispatcher, which checks every subscriber’s filters and queues the Telegram message.
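A minimal sketch of steps 1–4, assuming a raw JSON-RPC endpoint. The identifiers (walletIndex, the dedupe generation sets) mirror the description above but are illustrative, not the bot's actual code:

```ts
// Sketch of the per-block scan (steps 1–4). Names are illustrative stand-ins.
type Tx = { hash: string; from: string; to: string | null };

const walletIndex = new Map<string, { chatIds: number[] }>(); // keyed by lowercased address

// Two-generation dedupe: when the current set fills up, it becomes the old
// generation and a fresh set takes its place, so old entries rotate out.
let dedupeOld = new Set<string>();
let dedupeNew = new Set<string>();
const GENERATION_CAP = 2_500;

function alreadySeen(hash: string): boolean {
  if (dedupeOld.has(hash) || dedupeNew.has(hash)) return true;
  if (dedupeNew.size >= GENERATION_CAP) {
    dedupeOld = dedupeNew; // rotate generations
    dedupeNew = new Set();
  }
  dedupeNew.add(hash);
  return false;
}

async function scanBlock(rpcUrl: string, blockNumber: bigint): Promise<Tx[]> {
  // Step 1: fetch the block with full transaction objects (verbose flag = true).
  const res = await fetch(rpcUrl, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_getBlockByNumber",
      params: ["0x" + blockNumber.toString(16), true],
    }),
  });
  const { result } = await res.json();

  // Steps 2 + 4: O(1) index lookups, then dedupe before classification.
  const hits: Tx[] = [];
  for (const tx of result.transactions as Tx[]) {
    const fromTracked = walletIndex.has(tx.from.toLowerCase());
    const toTracked = tx.to !== null && walletIndex.has(tx.to.toLowerCase());
    if ((fromTracked || toTracked) && !alreadySeen(tx.hash)) hits.push(tx);
  }
  return hits; // handed to classification and notification dispatch (steps 5–6)
}
```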

Polling cadence is tied to each chain’s block time:

Chain        Block time   Scan interval (typical)
Ethereum     ~12 s        ~13 s
BSC          ~3 s         ~3 s
Base         ~2 s         ~2 s
Avalanche    ~2 s         ~2 s

Backoff and circuit-breaker logic kicks in if an RPC starts misbehaving — the scanner will skip the log-detection pass after consecutive failures rather than spamming a struggling node.
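One way to express that guard, as a sketch (the threshold value and names are assumptions, not the bot's actual settings):

```ts
// Illustrative circuit breaker: after N consecutive log-query failures the
// scanner skips the Transfer-log pass until a query succeeds again.
const FAILURE_THRESHOLD = 3; // assumed value
let consecutiveLogFailures = 0;

async function maybeRunLogPass(runLogPass: () => Promise<void>): Promise<void> {
  if (consecutiveLogFailures >= FAILURE_THRESHOLD) return; // breaker open: skip the pass
  try {
    await runLogPass();
    consecutiveLogFailures = 0; // healthy response closes the breaker
  } catch {
    consecutiveLogFailures++;   // struggling RPC: back off instead of hammering it
  }
}
```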

Solana detection runs differently. Instead of polling slots, the bot registers tracked wallets with Helius and receives enhanced transaction events as soon as Helius indexes them. That means:

  • Push, not pull — alerts arrive faster on average than they do on the EVM chains
  • Helius parses the transaction for the bot, surfacing structured data (swap source, token transfers, NFT events) instead of raw logs
  • Source names — PUMP_FUN, PUMP_AMM, RAYDIUM, JUPITER, ORCA, METEORA, LIFINITY, PHOENIX, OPENBOOK, TENSOR, MAGIC_EDEN — are surfaced directly in alerts
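As an illustration of the push flow, a webhook endpoint receiving Helius enhanced transactions might look roughly like the sketch below. The field names approximate Helius's enhanced-transaction payload, and the endpoint path and tracked-wallet set are hypothetical; check the Helius docs for the exact schema:

```ts
// Sketch of a webhook endpoint receiving Helius enhanced transactions.
import express from "express";

interface EnhancedTx {
  signature: string;
  type: string;        // e.g. "SWAP", "TRANSFER", "NFT_SALE"
  source: string;      // e.g. "JUPITER", "RAYDIUM", "PUMP_FUN"
  description: string; // human-readable summary produced by Helius
  feePayer: string;
  tokenTransfers?: {
    fromUserAccount: string;
    toUserAccount: string;
    mint: string;
    tokenAmount: number;
  }[];
}

const trackedSolWallets = new Set<string>(); // hypothetical index of tracked Solana wallets

const app = express();
app.use(express.json());

app.post("/helius-webhook", (req, res) => {
  const txs = req.body as EnhancedTx[]; // Helius posts a batch of enhanced transactions
  for (const tx of txs) {
    // Match any participant against the tracked set, then hand off to the
    // same notification dispatcher the EVM scanners use.
    const participants = [
      tx.feePayer,
      ...(tx.tokenTransfers ?? []).flatMap(t => [t.fromUserAccount, t.toUserAccount]),
    ];
    if (participants.some(a => trackedSolWallets.has(a))) {
      console.log(`${tx.source} ${tx.type}: ${tx.description}`);
    }
  }
  res.sendStatus(200); // acknowledge promptly so Helius doesn't retry
});

app.listen(3000);
```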

The realistic delivery window from a transaction confirming on-chain to the alert landing in your Telegram:

Chain        Typical      Notes
Ethereum     15–20 s      One full poll cycle + processing
BSC          5–10 s       Faster blocks, faster scan
Base         4–8 s        Fast blocks, low contention
Avalanche    4–8 s        Same as Base
Solana       0.5–1.5 s    Push from Helius, no poll wait

Add a few seconds in either direction during high RPC contention or Telegram rate-limit handling. If you’re seeing consistently longer delays, see I’m Not Getting Alerts.

What the bot looks at for every transaction


When a tracked wallet shows up in a transaction, the bot pulls:

  • Method ID (the first 4 bytes of calldata) — resolved through a tiered signature service: hardcoded methods first, then a database cache, then 4byte.directory, then the contract’s verified ABI on Etherscan if available (see the sketch after this list)
  • Receipt status — success or failure, with revert reason extraction when possible
  • Event logs — Transfer, Approval, Swap (V2/V3), Withdrawal (WETH unwraps), bridge-specific events, NFT mint/transfer
  • Internal traces — for native value movements that don’t appear in the top-level tx (debug_traceTransaction)
  • Token metadata + price — symbol, decimals, USD value via a price calculator (Moralis-backed for EVM)
  • Contract identification — checks against bridge registry, DEX router list, ERC-4337 entry points, solver protocol addresses
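To make the first bullet concrete: the method ID is just the first 4 bytes of calldata, and the tiered lookup can be sketched as below. The helper functions dbCache, fetch4Byte, and fetchVerifiedAbi are hypothetical stand-ins for the tiers described above:

```ts
// Sketch of method-ID resolution through successive tiers.
const HARDCODED: Record<string, string> = {
  "0xa9059cbb": "transfer(address,uint256)",
  "0x095ea7b3": "approve(address,uint256)",
};

function methodId(calldata: string): string | null {
  // "0x" + 4 bytes = 10 hex characters; plain value transfers carry no calldata.
  return calldata.length >= 10 ? calldata.slice(0, 10).toLowerCase() : null;
}

async function resolveMethod(
  calldata: string,
  dbCache: (id: string) => Promise<string | null>,
  fetch4Byte: (id: string) => Promise<string | null>,
  fetchVerifiedAbi: (id: string) => Promise<string | null>,
): Promise<string | null> {
  const id = methodId(calldata);
  if (!id) return null;
  return (
    HARDCODED[id] ??               // tier 1: hardcoded well-known selectors
    (await dbCache(id)) ??         // tier 2: database cache
    (await fetch4Byte(id)) ??      // tier 3: 4byte.directory lookup
    (await fetchVerifiedAbi(id))   // fallback: verified ABI on Etherscan
  );
}
```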

That information becomes the structured tx data that the alert formatter renders into a Telegram message. See Alert Anatomy for the field-by-field tour.

The single-instance design comfortably handles tens of thousands of tracked wallets per chain. The walletIndex Map lookup is constant-time, the dedupe set has a fixed two-generation rotation cap, and chunked log queries adaptively split when an RPC complains about response size.
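The adaptive splitting works roughly like this sketch: if a log query over a block range fails (for example because the response is too large), halve the range and retry each half. Here getLogs is a stand-in for a raw eth_getLogs call, and real code would inspect the specific RPC error before deciding to split:

```ts
// Sketch of chunked eth_getLogs with adaptive range splitting.
type Log = Record<string, unknown>;

async function getLogsChunked(
  getLogs: (fromBlock: bigint, toBlock: bigint) => Promise<Log[]>,
  fromBlock: bigint,
  toBlock: bigint,
): Promise<Log[]> {
  try {
    return await getLogs(fromBlock, toBlock);
  } catch (err) {
    if (fromBlock >= toBlock) throw err; // a single block still failing: give up
    // The RPC complained (e.g. response too large): split the range and retry halves.
    const mid = (fromBlock + toBlock) / 2n;
    const left = await getLogsChunked(getLogs, fromBlock, mid);
    const right = await getLogsChunked(getLogs, mid + 1n, toBlock);
    return [...left, ...right];
  }
}
```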