In my previous posts, I have been writing some technical deep dives into core aspects of the Endgame protocol, such as hook implementations and rental transactions. Now, I want to take a step back and give an overview of the novel smart contract architecture that the whole protocol is based on.
Motivation
There is no one design that works best for a smart contract protocol. For Endgame, I had a few considerations in mind before any actual code was written.
Knowing that the protocol would have to integrate with both Seaport and Safe smart contracts, I wanted a design that would reduce the surface area for bugs as much as possible, given the difficulty of testing a protocol that integrates with another. For me, this meant a few things: an airtight admin privilege design, well-divided contracts each with as few responsibilities as possible, and well-commented code.
Additionally, as with any protocol, it was hard to tell how Endgame was going to evolve over time. Given that smart contracts can’t be afforded the rapid release cycles that an API or a front-end can, it was important that the protocol architecture be flexible and modular.
Finally, a given requirement was that the protocol would be deployed to ETH mainnet. This meant being mindful of gas consumption so that excessive costs would not price users out of interacting with the protocol.
The Framework
I found everything I was looking for, and then some, in the Default Framework design. This protocol architecture sets itself apart by shifting the focus of a protocol away from organizing contracts around processes, and toward organizing contracts around data models.
As stated in the Default Framework repository, the goal is to take a protocol from looking something like this:
and turn it into something like this:
The key change needed to design a protocol like this revolves around the idea that some contracts should be external-facing (stateless policies) and some should be internal-facing (stateful modules).
This design limits the maximum number of nested dependencies for a contract to 1 while also providing guarantees on which contracts are allowed to modify specific storage.
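To make that concrete, here is a minimal, hypothetical sketch of the two roles (the contract names are mine, not Endgame’s; the permissioned gate is covered in the next section):

// Hypothetical sketch: a stateful, internal-facing module and a stateless,
// external-facing policy. Names are illustrative, not from Endgame.
contract CounterModule {
    uint256 public count; // protocol state lives only in modules

    // permissioned (defined in the next section) asks the kernel whether
    // the calling policy registered access to this function selector.
    function increment() external permissioned {
        ++count;
    }
}

contract CounterPolicy {
    CounterModule public immutable COUNTER;

    constructor(CounterModule counter_) {
        COUNTER = counter_;
    }

    // Users call the policy; only the policy touches module storage.
    function incrementCounter() external {
        COUNTER.increment();
    }
}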
I won’t go over all the details of the Default Framework here, but for more information you can read about it on its GitHub. For the rest of this post, I will cover how the Endgame architecture was designed using this framework as a guide.
Kernel Layer
The Kernel is a contract whose responsibility is to orchestrate the interaction between user-facing policy contracts and stateful module contracts. It maintains the bookkeeping needed to ensure that modules can only be accessed by registered policies. Additionally, the Kernel will manage admin roles for operations such as adding and removing policies or modules from the protocol.
The key portion of the kernel’s administrative logic lies within the permissioned modifier.
modifier permissioned() {
    // Ask the kernel whether the calling policy has been granted access
    // to this module (identified by keycode) for this function selector.
    if (!kernel.modulePermissions(KEYCODE(), Policy(msg.sender), msg.sig)) {
        revert Errors.Module_PolicyNotAuthorized(msg.sender);
    }
    _;
}
The Kernel keeps track of each module using a unique key-code assigned to it. Using this key-code, it can check whether the msg.sender has previously registered with the protocol for the function selector it is trying to access. This process is the key to maintaining the separation of business logic and contract storage. As such, each function that modifies storage on a module must carry this modifier.
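This check implies some bookkeeping on the Kernel’s side: a nested mapping from module key-code to policy to function selector. A minimal sketch of that state (following the Default framework’s types; treat the details as illustrative):

// Sketch of the bookkeeping behind the modulePermissions call above.
// In the Default framework, Keycode is a bytes5 user-defined value type,
// and Policy is the abstract base contract all policies inherit.
type Keycode is bytes5;

contract Kernel {
    // module key-code => policy contract => function selector => allowed?
    mapping(Keycode => mapping(Policy => mapping(bytes4 => bool)))
        public modulePermissions;
}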
Modules
Modules are considered the “back-end” of the protocol. They hold the global storage of the protocol and cannot be directly interacted with by any EOA or contract outside of the protocol itself. For any contract to be allowed to modify storage on a module, it must be explicitly granted access to a particular function selector on that module. This design makes storage access an opt-in feature and prevents contracts from modifying storage they were never intended to touch.
Endgame uses two modules: a Storage Module which contains data relating to rentals that are active in the protocol, and an Escrow Module which holds ERC20 token payments until they are ready to be collected.
For example, a function on the Storage Module looks like this:
function addRentals(
    bytes32 orderHash,
    RentalAssetUpdate[] memory rentalAssetUpdates
) external onlyByProxy permissioned {
    // ... logic goes here
}
As stated before, each function that modifies storage on the module has a permissioned modifier. This modifier makes a call to the kernel to see if the address calling this function has registered its intent with the protocol to add rentals to storage.
Policies
Policies are user-facing contracts which do not maintain storage of their own. Their main purpose is to act as windows into the protocol. For example, to create a rental with Endgame, a user would interact with the Create Policy. (In reality, it’s a bit more complicated than that, since Seaport is the entity that interacts with the Create Policy, but the idea is the same.) For stopping a rental, the user interacts with the Stop Policy. For changes to the protocol itself, such as adding or removing another policy, the user would interact with the Admin Policy (if they are an admin, of course).
A key benefit here is the ability to slice up the protocol into clear sections of business logic. The Stop Policy is not concerned with how the Create Policy operates, and vice-versa. Each only cares about interfacing with the data models needed to carry out its own business logic.
We can now turn to how a policy interacts with a module contract. We have seen that a state-modifying module function cannot be accessed unless the caller has previously registered with the kernel about the function selectors it intends to use. So, how does a policy contract register its intent?
Continuing our example from before, we can show how the Create Policy intends to update protocol storage about new rentals that have been processed.
/**
 * @notice Upon policy activation, permissions are requested from the kernel to access
 *         particular keycode <> function selector pairs. Once these permissions are
 *         granted, they do not change and can only be revoked when the policy is
 *         deactivated by the kernel.
 *
 * @return requests Array of keycode <> function selector pairs which represent
 *                  permissions for the policy.
 */
function requestPermissions()
    external
    view
    override
    onlyKernel
    returns (Permissions[] memory requests)
{
    requests = new Permissions[](2);
    requests[0] = Permissions(toKeycode("STORE"), STORE.addRentals.selector);

    // ... requests continue on
}
Each policy contract inherits an abstract Policy contract and must implement the requestPermissions function if it wishes to modify protocol storage. An array of permissions is built up which specifies the key-code of the module contract and the function selector to access.
When a new policy is added to the protocol, the Kernel executes requestPermissions on the policy and stores the permissions, which are used later when validating calls from the policy to a module.
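In sketch form (simplified from the Default framework’s activation logic; assume Permissions is a struct holding a keycode and a funcSelector), the grant step looks roughly like this:

// Hedged sketch of policy activation inside the Kernel. The real
// implementation also tracks active policies and supports revocation.
function _activatePolicy(Policy policy_) internal {
    // Ask the policy which keycode <> selector pairs it needs.
    Permissions[] memory requests = policy_.requestPermissions();

    // Record each grant; the permissioned modifier reads these later.
    for (uint256 i; i < requests.length; ++i) {
        Permissions memory request = requests[i];
        modulePermissions[request.keycode][policy_][request.funcSelector] = true;
    }
}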
Packages
By splitting the protocol’s business logic across separate policy contracts, it’s inevitable that at some point the policies will want to share logic or access the same helper functions.
For this, the protocol utilizes a series of packages. Succinctly put, these are sharable and inheritable abstract contracts with immutable state that can enhance the functionality of a policy contract.
For example, the Signer Package contains logic related to signed payloads and signature verification when creating or stopping rentals. The functions in this package are needed by both the Create Policy and the Stop Policy, so rather than duplicate the functionality, each policy can inherit this package contract and use its functions that way.
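In code, a package is nothing more than an abstract contract that a policy inherits. A toy sketch (the real Signer package’s interface is richer than this):

// Illustrative package sketch. Immutable state is safe to put here:
// it is embedded in the inheriting policy's bytecode rather than its
// storage, so the policy stays stateless.
abstract contract Signer {
    bytes32 internal immutable _DOMAIN_SEPARATOR;

    constructor(bytes32 domainSeparator_) {
        _DOMAIN_SEPARATOR = domainSeparator_;
    }

    // A shared helper both the Create and Stop policies might use.
    function _recoverSigner(
        bytes32 digest,
        uint8 v,
        bytes32 r,
        bytes32 s
    ) internal pure returns (address signer) {
        signer = ecrecover(digest, v, r, s);
    }
}

// Policies inherit the package instead of duplicating the logic:
//   contract Create is Policy, Signer { ... }
//   contract Stop is Policy, Signer { ... }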
Reducing Storage Costs
Putting all the data needed to keep track of a single rental into contract storage is not cheap. Each token involved in the order, all its hook data, and then the rental metadata itself need to be tracked. It would take up a lot of storage slots to store a single RentalOrder struct:
struct RentalOrder {
    bytes32 seaportOrderHash;
    Item[] items;
    Hook[] hooks;
    OrderType orderType;
    address lender;
    address renter;
    address rentalWallet;
    uint256 startTimestamp;
    uint256 endTimestamp;
}

// A rental order has an array of hooks
struct Hook {
    // The hook contract.
    address target;
    // Index of the item in the order to apply the hook to.
    uint256 itemIndex;
    // Any extra data that the hook will need.
    bytes extraData;
}

// A rental order has an array of items
struct Item {
    ItemType itemType;
    SettleTo settleTo;
    address token;
    uint256 amount;
    uint256 identifier;
}
That’s a lot of SSTOREs!
To avoid all this overhead when it comes to storing rental data, the protocol opts for hashing the RentalOrder struct and storing that instead. This brings the cost of storing a rental order down to a single SSTORE.
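As a sketch, the pattern looks something like this (the mapping name is hypothetical, and the real protocol derives the hash from its signing logic rather than a bare abi.encode):

// Hypothetical sketch of the hash-and-store pattern.
mapping(bytes32 => bool) public rentedOrders;

function _storeRental(RentalOrder memory order) internal {
    // Hash the entire order; only the 32-byte digest hits storage.
    bytes32 orderHash = keccak256(abi.encode(order));

    rentedOrders[orderHash] = true; // one SSTORE instead of dozens
}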
As with any design, there are always trade-offs. The issue here is that by storing only the hash, you lose access to the data that the rental order was supposed to hold in the first place. To remedy this, upon each rental the protocol emits a RentalOrderStarted event with all the data necessary to reconstruct a RentalOrder. That way, the data doesn’t need to live in protocol storage but remains accessible.
event RentalOrderStarted(
    bytes32 orderHash,
    bytes emittedExtraData,
    bytes32 seaportOrderHash,
    Item[] items,
    Hook[] hooks,
    OrderType orderType,
    address indexed lender,
    address indexed renter,
    address rentalWallet,
    uint256 startTimestamp,
    uint256 endTimestamp
);
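The other half of the pattern is verification: any function that later needs the order, such as stopping a rental, takes the full RentalOrder as calldata, recomputes the hash, and checks it against storage. Continuing the hypothetical sketch from above:

// Hypothetical sketch: the caller reconstructs the RentalOrder from the
// RentalOrderStarted event and passes it back in; the protocol only has
// to recompute the digest and compare it to the one it stored.
function stopRent(RentalOrder calldata order) external {
    bytes32 orderHash = keccak256(abi.encode(order));

    // If the hash is unknown, either the order never existed or the
    // supplied data does not match what was originally stored.
    if (!rentedOrders[orderHash]) {
        revert("unknown rental order");
    }

    // ... stop logic goes here, then clear the entry
    delete rentedOrders[orderHash];
}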
Conclusion
Having now gone through an audit using Code4rena, I feel confident that the data model methodology for designing a smart contract protocol was the correct approach to take.
During our audit, there wasn’t a single reported issue involving execution of an unauthorized function due to misconfigured admin privileges. Additionally, the design lends itself to being highly configurable, able to adapt to any new contracts that we may want to fold into the protocol later on.
Endgame is still in its early stages, and I am excited to see how the protocol evolves from here. You can view the implementation of the protocol here, and if you have any questions on the protocol architecture, feel free to reach out to me at [email protected] or on x.com.