Solidity EIP Gotchas

Common gotchas when implementing EIPs in Solidity

TLDR:

Common pitfalls when implementing EIPs (Ethereum Improvement Proposals) in Solidity and how to verify real EIP compliance.

Key points:

  • EIPs such as ERC‑20, ERC‑721, ERC‑1155, ERC‑4626, EIP‑712, and EIP‑2612 are not part of pragma solidity or the Solidity compiler.
  • Developers signal EIP usage informally via comments, natspec, inheritance from base contracts, and ERC‑165.
  • Interface and ABI checks are necessary but not sufficient.
  • Robust EIP compliance requires predefined test suites, unit tests, integration tests, and fuzzing.
  • Bytecode and ABI comparison scripts are less error‑prone than manual spec‑by‑spec reviews.
  • Using popular libraries like OpenZeppelin reduces risk but does not guarantee full EIP correctness or security.
  • Undocumented partial implementations can break integrators and create vulnerabilities.
  • A useful public resource for exploring EIP relationships is eip.tools.
  • Many EIPs depend on other EIPs. Proper audits must trace and verify these dependency chains.
  • Upgradeable contracts introduce new EIP risks.
  • Chain differences can impact EIP behavior.

EIP compliance in Solidity requires combined use of ABI/interface analysis, ERC‑165 inspection, reusable EIP test suites, fuzzing with tools like Foundry or Echidna, and cautious use of libraries like OpenZeppelin.


Some Solidity devs believe they “follow the standard.” But in practice, a surprising number are not fully compliant. And it is not easy to be, because:

  • EIP compliance is not obvious.
  • It is not encoded in pragma.
  • It is not guaranteed by using a popular library.
  • And it is rarely verified end to end.

We walk through some common gotchas (or personal insights :) that show up in audits and incident postmortems, and how to avoid them.


Gotcha 1: EIPs are not part of pragma solidity

Many developers implicitly assume:

  • “I use a recent Solidity version.”
  • “Therefore newer EIPs are kind of built in.”

They are not.

For example, pragma solidity ^0.8.20; tells you:

  • compiler version range
  • by extension, which EVM features and language features are available

It does not tell you:

  • which token or protocol standards are implemented
  • whether ERC‑20 / ERC‑721 / ERC‑4626 / permit / EIP‑712 are present or correct

EIPs like ERC‑20, ERC‑721, ERC‑1155, ERC‑4626 are application‑level conventions. They live in your code and tests, not in the compiler. The compiler may internally account for some protocol‑level EIPs that change the EVM itself, but that is separate from application‑level EIPs like ERC‑20 or ERC‑4626. Those are never ‘auto‑enabled’ by pragma.
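
A minimal sketch of the distinction (assuming OpenZeppelin is installed as a dependency): the pragma only pins the compiler, while every bit of ERC‑20 behavior comes from the explicit import and inheritance.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20; // pins a compiler range; enables no token standard

// The ERC-20 behavior comes entirely from this explicit import and
// inheritance (assuming OpenZeppelin is installed), never from the pragma.
import {ERC20} from "@openzeppelin/contracts/token/ERC20/ERC20.sol";

contract MyToken is ERC20 {
    constructor() ERC20("MyToken", "MTK") {
        _mint(msg.sender, 1_000_000e18);
    }
}
```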

Practical takeaway:

  • two types of EIPs exist: application‑level EIPs and protocol‑level EIPs (EVM changes such as new opcodes and features)
  • never infer “EIP support” from pragma
  • always determine standards from code, ABI, and behavior

Gotcha 2: EIP signaling is informal and inconsistent

How do developers tell each other “this implements EIP‑XYZ”?

Common patterns:

  • comments and natspec
    • /// @dev Implements EIP‑2612 permit
    • /// @notice ERC‑4626 compatible vault
  • inheritance
    • contract Token is ERC20, ERC20Permit
    • contract NFT is ERC721, ERC721Enumerable
  • ERC‑165 supportsInterface
    • returning true for type(IERC721).interfaceId, type(IERC2981).interfaceId, etc.

The problem:

  • none of these are enforced by tooling or the protocol
  • they can be wrong, stale, or misleading
  • many contracts are only “EIP-like” or “ERC‑X‑ish”

Sometimes contracts claim an interface ID via ERC‑165 without actually implementing the full behavior, either by mistake or because they consider themselves “good enough.” This reinforces that supportsInterface is metadata, not proof.
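
A minimal sketch of that failure mode (a hypothetical contract, but it compiles and deploys without complaint): it claims ERC‑2981 support via ERC‑165 without implementing royaltyInfo at all.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {IERC165} from "@openzeppelin/contracts/utils/introspection/IERC165.sol";
import {IERC2981} from "@openzeppelin/contracts/interfaces/IERC2981.sol";

// Hypothetical: answers "yes" for ERC-2981 but never implements
// royaltyInfo. No compiler or chain-level check stops this;
// supportsInterface is a self-reported claim.
contract FalseAdvertiser is IERC165 {
    function supportsInterface(bytes4 id) external pure returns (bool) {
        return id == type(IERC165).interfaceId
            || id == type(IERC2981).interfaceId;
    }
}
```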

Practical takeaway:

  • treat all EIP claims as hints, not guarantees
  • confirm with interface checks and tests
  • in your own codebase, be explicit and consistent about which EIPs you claim to support

Gotcha 3: Interface matching is necessary, but not sufficient

Checking ABI or Solidity interfaces is table stakes.

  • does the contract expose all required functions with exact signatures?
  • are required events present?
  • for ERC‑165‑based standards, does supportsInterface return the right values?

This filters out obvious non‑compliance. It does not prove correctness.

Examples:

  • ERC‑20 token with correct ABI but broken allowance semantics (sketched after this list)
  • ERC‑4626 vault with all functions present but incorrect rounding or invariant violations
  • ERC‑721 that never reverts for invalid token IDs, breaking marketplace assumptions
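
Here is a hypothetical sketch of the first example: every ERC‑20 signature is present, so ABI checks pass, yet transferFrom never consults the allowance.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical: the ABI is fully ERC-20 shaped, but transferFrom ignores
// allowances, so anyone can move anyone's tokens. Only behavioral
// tests catch this.
contract BrokenAllowanceToken {
    string public name = "Broken";
    string public symbol = "BRK";
    uint8 public decimals = 18;
    uint256 public totalSupply;
    mapping(address => uint256) public balanceOf;
    mapping(address => mapping(address => uint256)) public allowance;

    event Transfer(address indexed from, address indexed to, uint256 value);
    event Approval(address indexed owner, address indexed spender, uint256 value);

    function approve(address spender, uint256 value) external returns (bool) {
        allowance[msg.sender][spender] = value;
        emit Approval(msg.sender, spender, value);
        return true;
    }

    function transfer(address to, uint256 value) external returns (bool) {
        balanceOf[msg.sender] -= value;
        balanceOf[to] += value;
        emit Transfer(msg.sender, to, value);
        return true;
    }

    // BUG: the allowance is never checked or decreased.
    function transferFrom(address from, address to, uint256 value) external returns (bool) {
        balanceOf[from] -= value;
        balanceOf[to] += value;
        emit Transfer(from, to, value);
        return true;
    }
}
```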

Practical takeaway:

  • always run automated interface checks, from ABI or source.
  • then assume you still know nothing about behavioral correctness :)

Gotcha 4: Behavior must be tested, not guessed

The real EIP spec is not just a function list. It is the behavior around those functions. The functions are just the vehicle.

You need:

  • unit tests for each rule in the EIP
  • (potentially) integration tests for how the standard interacts with other contracts
  • fuzzing or property tests around invariants

Good practice (Alpha for auditors probably):

  • maintain a reusable test suite per standard in your repo (a sketch follows this list), like for
    • ERC‑20 conformance tests
    • ERC‑721 conformance tests
    • ERC‑4626 conformance tests
  • run them against every implementation, customized where needed
  • fuzz key invariants
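
One way to structure such a reusable suite with Foundry (a sketch assuming forge-std and the OpenZeppelin IERC20 interface; the name ERC20ConformanceTest and its two checks are illustrative, not exhaustive):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {Test} from "forge-std/Test.sol";
import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";

// Hypothetical reusable base: each project only implements deployToken()
// and inherits every conformance check below, customized where needed.
abstract contract ERC20ConformanceTest is Test {
    IERC20 internal token;

    function deployToken() internal virtual returns (IERC20);

    function setUp() public virtual {
        token = deployToken();
    }

    function test_ApproveSetsAllowance() public {
        address spender = address(0xBEEF);
        token.approve(spender, 123);
        assertEq(token.allowance(address(this), spender), 123);
    }

    function test_TransferToSelfPreservesBalance() public {
        uint256 before = token.balanceOf(address(this));
        token.transfer(address(this), before);
        assertEq(token.balanceOf(address(this)), before);
    }
}
```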

Examples of invariants (a fuzz sketch follows the list):

  • for ERC‑20
    • sum(balances) == totalSupply()
    • transfer and transferFrom conserve total supply
    • approve and transferFrom must not introduce balance changes or side effects that contradict the ERC‑20 model (for example, silently moving tokens or mutating unrelated balances).
  • for ERC‑4626
    • share and asset conversions stay consistent across deposits/withdraws
    • rounding follows the EIP rules
    • maxDeposit, maxWithdraw, maxRedeem, maxMint make sense under stress
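
A hedged sketch of fuzzing the first ERC‑20 invariant, written as an extra test inside the conformance suite sketched above (bound and vm.assume come from forge-std):

```solidity
// Added to the ERC20ConformanceTest sketch above: Foundry fuzzes the
// inputs, and the assertion checks that transfer never mints or burns.
function testFuzz_TransferConservesTotalSupply(address to, uint256 amount) public {
    vm.assume(to != address(0));
    // keep the fuzzed amount within what this test contract actually holds
    amount = bound(amount, 0, token.balanceOf(address(this)));

    uint256 supplyBefore = token.totalSupply();
    token.transfer(to, amount);
    assertEq(token.totalSupply(), supplyBefore, "transfer must not mint or burn");
}
```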

Be careful not to treat legacy mainnet tokens as the ground truth. Many shipped before specs solidified and contain quirks that should not be copied into new implementations.

Practical takeaway:

  • treat EIP test suites as first‑class artifacts
  • re‑use them across projects and during audits

Gotcha 5: Bytecode and ABI checks beat manual spec diffing

Spec‑by‑spec manual review scales poorly and is error‑prone, especially when you audit many contracts and variants.

A better pattern:

  • derive expected ABI for a given EIP from a canonical interface
  • extract the actual ABI from:
    • source (via compiler)
    • deployed bytecode (via metadata, where available)
  • compare signatures automatically

If metadata is missing and the contract is not verified, ABI reconstruction from raw bytecode is possible but more heuristic and less reliable.

For ERC‑165:

  • automatically query supportsInterface for known interface IDs (probe sketch below)
  • flag inconsistencies between behavior and declaration
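
A minimal sketch of such a probe as a Foundry script (assuming forge-std; the interface ID list is illustrative, and the low‑level staticcall survives targets that do not implement ERC‑165 at all):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {Script} from "forge-std/Script.sol";
import {console} from "forge-std/console.sol";

// Hypothetical probe: asks a live contract what it claims via ERC-165
// and logs the answers for later comparison against actual behavior.
contract Erc165Probe is Script {
    function probe(address target) external view {
        bytes4[3] memory ids = [
            bytes4(0x01ffc9a7), // ERC-165 itself
            bytes4(0x80ac58cd), // ERC-721
            bytes4(0x2a55205a)  // ERC-2981 royalties
        ];
        for (uint256 i = 0; i < ids.length; i++) {
            (bool ok, bytes memory ret) = target.staticcall(
                abi.encodeWithSignature("supportsInterface(bytes4)", ids[i])
            );
            bool claimed = ok && ret.length == 32 && abi.decode(ret, (bool));
            console.logBytes4(ids[i]);
            console.log("claimed:", claimed);
        }
    }
}
```

Run it against an RPC endpoint with forge script and a custom --sig, or fold the same loop into a regular test.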

This is where tools and scripts shine:

  • they do not get tired
  • they catch subtle signature mismatches
  • they act as a first filter before deeper review

Practical takeaway:

  • invest in small scripts for ABI / bytecode interface comparison
  • run them in CI and during audits
  • manually review only what automation has already filtered

Gotcha 6: Libraries reduce risk, but do not guarantee compliance

OpenZeppelin and friends are excellent baselines. They are not proof that your final system is EIP‑correct or secure.

Common failure modes:

  • extending an OZ base contract with extra logic that breaks assumptions
  • overriding hooks without replicating the expected behavior
  • mixing multiple extensions in ways the library authors did not anticipate
  • assuming “OZ = fully audited => our use case is safe by default”

Important nuance:

  • OZ gives you validated building blocks for a standard implementation
  • it does not validate your wiring, extensions, or configuration
  • you can absolutely ship a vulnerable contract on top of a correct base

In practice, most bugs appear when teams fork, heavily modify, or mix library contracts, not in the vanilla library implementations themselves. These libraries give you a well‑audited, complete implementation of an EIP, often with extra guards, but they cannot guarantee the quality of a system built on top of them.
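
A hedged sketch of the hook‑override failure mode, assuming OpenZeppelin v5 where _update is the central ERC‑20 transfer hook (FeeToken is hypothetical):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {ERC20} from "@openzeppelin/contracts/token/ERC20/ERC20.sol";

contract FeeToken is ERC20 {
    constructor() ERC20("Fee", "FEE") {
        _mint(msg.sender, 1_000_000e18);
    }

    // BUG: the override forgets super._update, so no balance ever moves;
    // even the constructor's _mint is silently a no-op. The ABI is still
    // perfectly ERC-20 shaped, so only behavioral tests notice.
    function _update(address from, address to, uint256 value) internal override {
        // hypothetical fee accounting would go here ...
        // missing: super._update(from, to, value);
    }
}
```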

Practical takeaway:

  • use libraries to reduce surface area, not to outsource responsibility for your custom logic
  • treat overrides as custom logic
  • still run EIP test suites and fuzzing on your final implementation

Gotcha 7: Partial EIP implementations can be dangerous

Sometimes teams intentionally omit parts of an EIP:

  • for gas saving
  • for simplified UX
  • for niche protocol assumptions

This can be fine if:

  • the contract does not claim to be fully EIP‑compliant
  • integrators know they are dealing with a non‑standard variant
  • all external assumptions are carefully validated

It becomes dangerous when:

  • the contract markets itself as “ERC‑20” or “ERC‑4626”
  • missing functions or events break downstream integrations
  • invariants assumed by other contracts no longer hold

If the contract is only used inside a tightly controlled protocol and never exposed as a generic ERC‑20/721/4626 to external integrators, partial specs can be acceptable. Once you expect third‑party tooling and protocols to interact with it, deviations become landmines.

Practical takeaway:

  • if you deviate from the spec, document it loudly and precisely (natspec sketch below)
  • do not advertise full compliance if behavior is intentionally reduced
  • re‑evaluate every integration assumption against your actual behavior
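
For instance, a deviation documented loudly in natspec (a hypothetical contract):

```solidity
/// @title PointsToken (hypothetical)
/// @notice NOT a general-purpose ERC-20. Do not integrate it as one.
/// @dev Deliberate deviations from ERC-20:
///      - approve/transferFrom always revert (balances are non-transferable
///        by third parties)
///      - Transfer events are emitted only on mint
contract PointsToken {
    // ...
}
```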

Gotcha 8: Limited ecosystem tooling for full EIP detection

Fully automatic detection of “this is a correct EIP‑XYZ implementation” is hard:

  • standards have behavioral and contextual requirements
  • some EIPs are compositional or pattern‑based rather than strict ABIs
  • there is subjectivity in what counts as “good enough” for a given use case

As a result:

  • most teams and auditors rely on internal scripts and test harnesses
  • public tooling is relatively limited and narrow in scope

One useful public resource:

  • eip.tools
    • helps analyze EIPs, relationships, references
    • surfaces dependencies and connections between standards

eip.tools is useful for exploring the EIP landscape and dependencies. It does not automatically prove that any specific contract complies with a standard.

Practical takeaway:

  • do not expect one fully trusted scanner to certify your contracts
  • combine public tools with your own checks and tests
  • keep an eye on how tools like eip.tools evolve

Gotcha 9: EIP dependency chains are easy to miss

Many EIPs depend on others, explicitly or implicitly.

Examples:

  • EIP‑2612 (permit) relies on EIP‑712 (typed data signatures)
  • many NFT royalty schemes use ERC‑721 or ERC‑1155 plus EIP‑2981
  • proxy patterns (EIP‑1967, UUPS) rely on subtle storage and upgrade assumptions

If you only check the top‑level EIP:

  • you might miss incorrect or partial implementation of dependencies
  • you might miss assumptions those dependencies make about how they are used

Some of these dependencies are explicit in the EIP text; others are de facto dependencies emerging from common patterns (for example, permit implementations based on EIP‑712 style domain separators). Either way, they deserve explicit review.
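
To make the first dependency concrete, here is a sketch of the EIP‑712 plumbing every EIP‑2612 permit relies on (values are illustrative; production implementations such as OpenZeppelin's ERC20Permit also track nonces and cache the domain separator):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Sketch of the EIP-712 machinery under EIP-2612. If the domain separator
// is wrong (bad chainId, wrong name), every permit signature fails:
// an EIP-712 bug surfacing as an EIP-2612 failure.
contract PermitDigestSketch {
    bytes32 private constant PERMIT_TYPEHASH = keccak256(
        "Permit(address owner,address spender,uint256 value,uint256 nonce,uint256 deadline)"
    );

    function domainSeparator() public view returns (bytes32) {
        return keccak256(abi.encode(
            keccak256("EIP712Domain(string name,string version,uint256 chainId,address verifyingContract)"),
            keccak256(bytes("MyToken")), // illustrative token name
            keccak256(bytes("1")),
            block.chainid,
            address(this)
        ));
    }

    function permitDigest(
        address owner, address spender, uint256 value, uint256 nonce, uint256 deadline
    ) external view returns (bytes32) {
        bytes32 structHash = keccak256(
            abi.encode(PERMIT_TYPEHASH, owner, spender, value, nonce, deadline)
        );
        return keccak256(abi.encodePacked("\x19\x01", domainSeparator(), structHash));
    }
}
```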

Practical takeaway:

  • build a small dependency map for standards used in your system
  • review and test from the bottom of that tree upwards

Gotcha 10: Upgradeable contracts introduce new EIP risks

Many EIPs assume a simple, non‑proxy deployment model. When you add proxies (EIP‑1967, UUPS, beacon), you introduce:

  • storage layout constraints
  • initialization ordering
  • upgrade access control

Mismanaging upgrades can violate EIP invariants post‑deployment. A vault that was a correct ERC‑4626 yesterday can become non‑compliant tomorrow after a bad upgrade. Treat upgrade logic and governance as part of your EIP audit surface.
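
For example, EIP‑1967 pins the implementation address to a dedicated slot precisely so it cannot collide with the implementation's own storage; a sketch of reading it (the slot constant is the one defined in the EIP):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// The EIP-1967 implementation slot:
// bytes32(uint256(keccak256("eip1967.proxy.implementation")) - 1)
contract Eip1967Slot {
    bytes32 internal constant IMPLEMENTATION_SLOT =
        0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc;

    // Reads the implementation address of the proxy this code runs behind.
    // (Off-chain, you would read the same slot via eth_getStorageAt.)
    function currentImplementation() public view returns (address impl) {
        assembly {
            impl := sload(IMPLEMENTATION_SLOT)
        }
    }
}
```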

Practical takeaway:

  • if you wrap an EIP implementation in an upgradeable proxy, you inherit another class of risks
  • audit the upgrade mechanism and governance as thoroughly as the EIP implementation itself

Gotcha 11: Chain differences can impact EIP behavior

Not all EVM chains behave identically. Gas schedules, precompiles, and fork timings differ. An EIP implementation tested only on Ethereum mainnet assumptions may have different risk profiles on L2s or alternative EVM chains.

Practical takeaway:

  • for high‑impact systems, per‑target‑chain testing and review might make sense
  • be aware that an EIP‑compliant contract on one chain might behave unexpectedly on another

Gotcha 12: “Works on mainnet” is not an audit stamp

The most dangerous argument:

  • “this pattern is everywhere”
  • “this token already has billions in TVL”
  • “this exact code sits in production on chain X”

None of that guarantees EIP correctness or safety. It only proves that:

  • the code has not yet failed catastrophically in a visible way
  • or that nobody has published an exploit
  • or that nobody has exploited a published exploit.

Some ERC‑20 or ERC‑721 tokens in production are:

  • only partially compliant
  • silently broken for some edge cases
  • relying on integrators having patched around their quirks

Many DeFi protocols have accumulated layers of defensive glue code to handle broken or non‑standard tokens. Copying those workarounds without understanding the underlying issues just spreads legacy bugs to new systems.
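
The canonical example of such glue is OpenZeppelin's SafeERC20, which exists largely because widely used tokens deviate from the spec; a sketch:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

// Typical defensive glue: safeTransfer tolerates tokens that return no
// value at all (a known deviation in some legacy mainnet tokens) and
// reverts cleanly when a token returns false.
contract Sweeper {
    using SafeERC20 for IERC20;

    function sweep(IERC20 token, address to, uint256 amount) external {
        token.safeTransfer(to, amount);
    }
}
```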

Practical takeaway:

  • still run your own checks and tests, especially if you fork code