gm frens
Long time, no talk! We haven’t been slacking though. On the contrary: we’ve never been busier than in the last few months. Our last diary was a deep dive into dApp end-to-end testing using Playwright and Foundry/Anvil. This entry will be more of an overview of our efforts over the last few months. There’s a lot to highlight! Let’s rewind a bit first.
The challenge back then was Polygon support. We can happily say that support for the Polygon chain landed earlier this year and things have been chugging along without a hitch. We’re busy integrating new partners and games, the first being Toshimon, which launched not so long ago. Alongside that, some users are already merrily lending out assets on Polygon and collections are starting to get integrated. Stay tuned for news regarding this!
Castle Crush remains our most-used integration. We love them. The players, aside from bringing incredible value to our platform, provide us with key insights about our UX and desired features. A little while ago the integration surpassed 100k NFTs rented since its inception. Having quality rentals available is a huge boon for new players, allowing them to utilize more powerful items from the get-go. We believe this is key for any game with digital assets at its core.
Let’s talk shop
In the first dev diary I laid out our stack. We use Foundry as our contract development tooling. The Graph indexes on-chain events so we can easily pull rentals data. This is all fetched through an Express backend, which is in turn consumed by our NextJS front end. In both the back end and the front end we use our SDK to easily interface with our contracts.
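To make that indexing layer concrete, here’s a minimal sketch of what pulling rentals data out of a subgraph looks like from the backend. The subgraph URL, entity name, and fields below are purely illustrative, not our actual schema:

```ts
// Minimal sketch: query a subgraph for recent rentals over plain HTTP.
// SUBGRAPH_URL, the `rentals` entity, and its fields are hypothetical.
const SUBGRAPH_URL = "https://api.thegraph.com/subgraphs/name/example/renft";

const RECENT_RENTALS_QUERY = /* GraphQL */ `
  query RecentRentals($first: Int!) {
    rentals(first: $first, orderBy: startedAt, orderDirection: desc) {
      id
      lender
      renter
      tokenId
    }
  }
`;

export async function getRecentRentals(first = 10) {
  const res = await fetch(SUBGRAPH_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: RECENT_RENTALS_QUERY, variables: { first } }),
  });
  if (!res.ok) throw new Error(`Subgraph request failed: ${res.status}`);
  const { data } = await res.json();
  return data.rentals;
}
```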
Full Turbo
We managed separate repositories for all of these, and that was becoming a hindrance. Essentially we’re building a monolith in microservice style, a pretty popular pattern nowadays. Having separate repos created hurdles for synchronizing releases, integration testing, and developer workflow. Since our team has a lot of full-stack experience, keeping these things in separate repos made for unnecessary context switches and other inconveniences.
For our front end, we made the decision to start building in Turborepo. This could be regarded as a weird flex, since UI & data fetching were co-located anyway. Still, we found a strong argument for leveraging workspaces to start modularizing our code. There was a lot of legacy to clear away, and enforcing a paradigm where modules could be relegated to packages/ seemed like a sensible path to take. The second argument was that, with our full-stack experience, we could at some point migrate our API into the repo to better co-locate changes across the stack in single PRs. This move would open the gate to properly integration/E2E testing the stack on these co-located changesets, without requiring separate API deploys or having to be in the know about which CI failures to expect.
The third and final argument was of course that, until we convinced everybody to move the API into Turborepo as well, we could get our meme game on and troll our backend.
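For a sense of what that looks like in practice, here’s a rough sketch of the kind of workspace split we’re describing. The directory and package names are illustrative, not our actual layout:

```
.
├── apps/
│   └── web/           # the NextJS dApp (and, eventually, the API)
├── packages/
│   ├── ui/            # shared components
│   ├── sdk/           # contract interfacing
│   └── config/        # shared eslint / prettier / tsconfig
├── package.json       # "workspaces": ["apps/*", "packages/*"]
└── turbo.json
```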
And now for something completely different
Dookey Dash is a game that took the NFT space by storm. Only Bored/Mutant Ape holders could play, by claiming a Sewer Pass. The game is designed as an endless runner: the farther you got, the more points you amassed on your Sewer Pass. Sewer Passes containing points could then be traded in for Power Sources. Most likely these NFTs will be a key part of a future event in the BAYC universe.
This whole setup offered an interesting conundrum. Not every Sewer Pass holder had the time or skills to fully compete in this event. Naz, our CTO, quickly found out that Sewer Passes could be delegated to other wallets (without losing ownership) through delegate.cash. This would allow non-holders to experience the game as well. Since we’re in the rentals space, we decided to ape into building a small platform abstracting this process: dookey.renft.io. Gamers could request Sewer Pass access, and holders could easily delegate or revoke it.
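Under the hood, that boils down to a single call against the delegate.cash registry. The sketch below is written from memory of the v1 registry interface, so treat the ABI fragment as an assumption and look up the registry address and exact signatures in the delegate.cash docs before using anything like it:

```ts
import { ethers } from "ethers";

// Sketch only: delegating a single token through the delegate.cash registry.
// The ABI fragment is from memory and may not match the deployed contract
// exactly; verify the address and ABI against delegate.cash's documentation.
const DELEGATE_REGISTRY = "0x..."; // registry address from the delegate.cash docs
const REGISTRY_ABI = [
  "function delegateForToken(address delegate, address contract_, uint256 tokenId, bool value)",
];

// A holder grants (or revokes) Sewer Pass access to a gamer's wallet
// without the pass ever leaving the holder's vault.
async function setSewerPassDelegation(
  signer: ethers.Signer,
  gamer: string,
  sewerPassContract: string,
  tokenId: number,
  enabled: boolean
) {
  const registry = new ethers.Contract(DELEGATE_REGISTRY, REGISTRY_ABI, signer);
  const tx = await registry.delegateForToken(gamer, sewerPassContract, tokenId, enabled);
  await tx.wait();
}
```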
We had to deliver fast though, because the Dookey Dash event would only run for a few weeks. Even though this side project wasn’t mission-critical for reNFT, the team felt a huge responsibility to deliver. We made mistakes, most of them around priorities and scope. We almost missed our self-imposed deadline, but we were able to release with about a week left in the event!
Reflecting on this, it was absolutely fantastic to allow our team to sandbox a project like this. High stakes (it cost us time not spent on our core platform), high reward (putting reNFT on the map), and high pressure (limited time). The experience allowed the team to reflect on how we handle these high-pressure situations. I won’t share all our learnings, but I think the most critical one we’ve learned as a team is that, especially when the pressure is on:
- It’s paramount that there are clear goals.
- Everybody is tasked with guarding these, and
- Work scope must only decrease. It may never increase.
Only allowing scope decreases aims to optimize value delivered versus effort spent. It doesn’t mean that things will be half-done. Rather, it’s a method of making chunks of work more granular. Naturally, this will increase the number of tasks to work through. You could say this borders on creative accounting, but having more tasks is great! It makes it easier to separate the wheat from the chaff with regard to a project’s goals. And from a code perspective, it makes for smaller PRs, which leads to faster reviews, which leads to faster shipping.
Back to our regular program
Now, where was I? Right. Monorepo.
We found this choice has paid off. Our tooling is more consistent, with shared configuration, linting & formatting, and tests across the complete stack. We can now co-locate changes and synchronize deployments, allowing us to execute features and refactors a lot more efficiently.
Also, running yarn dev and spinning up the complete stack gives developers god mode.
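For the curious, that god mode mostly comes down to a task pipeline along these lines. This is a sketch of a typical turbo.json rather than our exact configuration; the task names and outputs are illustrative:

```json
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": { "dependsOn": ["^build"], "outputs": [".next/**", "dist/**"] },
    "test": { "dependsOn": ["build"] },
    "lint": {},
    "dev": { "cache": false, "persistent": true }
  }
}
```

Assuming the root dev script delegates to `turbo run dev`, yarn dev fans out to every workspace’s dev task, while builds, lints, and tests stay cached across the repo.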
A feature that exemplified our monorepo approach was our refactoring of authentication. Auth was shoehorned into our API and front end. I’ll spare you the details but let’s say that try/catch isn’t a great construct to use for control flow.
Since we’ve migrated our wallet handling to RainbowKit, it made the most sense to refactor this part to leverage RainbowKit’s Sign-In With Ethereum integration built on top of NextAuth. Implementation seemed pretty easy. Unfortunately, our data source calling code was all over the place. This isn’t a big problem when you’re using cookies and just a handful of domains. It becomes a bit more iffy when you need to accept *.vercel.app preview domains. JWT it is. Getting this done required a two-step approach.
- Get a developer in a strange mood and let them have some fun tidying up our data-fetching methods. This was a gnarly exercise, but it ended up simplifying our data source entry points on the front end, co-locating data-fetching code, and shaving off ~700 LOC plus axios. (Seriously. Stop using axios on the front end.)
- Implement RainbowKit’s NextAuth integration on the front end, tacking the Authorization header onto our fetch() when available, and let the API consume either the cookie or the JWT. This, again, simplified and co-located our auth code and let us shave off a library or three. A sketch of the fetch side follows below.
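Here’s a minimal sketch of the front-end half of that, assuming a NextAuth session callback exposes an API token on the session object; the apiToken field and NEXT_PUBLIC_API_URL variable are illustrative names, not our actual ones:

```ts
import { getSession } from "next-auth/react";

// Sketch: one fetch wrapper for all data sources. Attach a Bearer token when
// the NextAuth session has one; otherwise fall back to plain cookie auth.
export async function apiFetch(path: string, init: RequestInit = {}) {
  const session = await getSession();
  const headers = new Headers(init.headers);

  // Assumes a NextAuth session callback that puts the API JWT on the session.
  const token = (session as { apiToken?: string } | null)?.apiToken;
  if (token) headers.set("Authorization", `Bearer ${token}`);

  return fetch(`${process.env.NEXT_PUBLIC_API_URL}${path}`, {
    ...init,
    headers,
    credentials: "include", // still send the session cookie for same-site calls
  });
}
```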
Aside from yielding obvious DevEx benefits, this fixed a gnarly UX issue on our dApp. Connected wallet sessions — believe it or not — are now persistent across page loads.
Getting the pull request in took a lot less effort as well. We didn’t have to coordinate PRs across repositories; the code was E2E tested in CI and could subsequently be released synchronously. What a rush!
A new ~~foe~~ landing page has appeared!
The last thing I want to touch on in this entry: if you’re a frequent visitor of our dApp (market.renft.io, FYI), you’ve probably passed by our updated homepage. Instead of being violently thrown into our rentals listings (now on /market), you’re now given a soft landing on our platform. An actual landing page!
Our landing page showcases collections integrated with us and explains our value proposition for games, gamers, and other potential partners.
Implementation seemed pretty straightforward. It gave us a chance to “greenfield” some of the components in our application and implement them idiomatically. We did, and we’re absolutely happy with how everything turned out. But the total implementation took a bit longer than we’d have liked.
At first, I was a bit dumbfounded about this. The work was split up and allocated with surgical precision. Developer availability and experience were taken into account. Project runtime seemed reasonable. On paper, it looked like the perfect project. What gives? There had to be a lesson here.
One thing engineers love to point at in cases like this is Hofstadter’s Law:
It always takes longer than you expect, even when you take into account Hofstadter’s Law.
In all fairness, I didn’t take this into account. No one can. And of course, we faced some (unplanned) impediments. But this hardly counts as a lesson. By definition, Hofstadter’s Law doesn’t allow you to learn this.
I took a detailed look at our process and progress. Some components took longer to build than expected, while others were completed in a jiffy. At first glance, effort points seemed to be allocated reasonably well. But after scrutinizing some of our tasks and corresponding PRs, a pattern emerged. Some of our relatively simple components took more effort than planned, while more complex components took less. On its face, implementing a <Heading /> is simple. In practice, what was implemented contained component polymorphism (e.g. rendering as any DOM element), every possible text style under the sun, and then some. And this wasn’t the only instance exhibiting this pattern. Our more complex components, on the other hand, seemed to be implemented in a more straightforward fashion.
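To illustrate the pattern, here’s a sketch of the shape such a component tends to take (not our actual code): a “simple” heading that can render as any element and carries a pile of style variants quickly stops being simple.

```tsx
import React from "react";

// Sketch of a polymorphic <Heading />: it renders as any DOM element and takes
// a grab bag of text-style variants. Flexible on paper, but every extra prop is
// an assumption about usage that nobody had validated yet.
type HeadingProps = {
  as?: React.ElementType;
  size?: "xs" | "sm" | "md" | "lg" | "xl";
  weight?: "normal" | "medium" | "bold";
  className?: string;
  children?: React.ReactNode;
};

export function Heading({
  as: Component = "h2",
  size = "md",
  weight = "bold",
  className = "",
  children,
}: HeadingProps) {
  const classes = `heading heading-${size} heading-${weight} ${className}`.trim();
  return <Component className={classes}>{children}</Component>;
}

// Usage: <Heading as="h1" size="xl">Welcome</Heading>
```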
It all reminded me of another classic quote from our boy Knuth:
Premature optimization is the root of all evil.
Yet here we were. Doing precisely that. On multiple levels even. Let me explain.
In the planning stage, a frontend developer “slices” a mock-up into meaningful blocks. These can be sections, components, elements, or whatever you may call them. You can be more or less granular with this; reuse is often the indicator. In drafting the tasks surrounding these, I was pretty granular in my approach. Complexity through simplicity: getting the core, “simple” components done first should mean less effort on the more complex components. This is usually the case. What I didn’t account for was that, by making things granular, I accidentally introduced a pitfall where implementations would gravitate towards sometimes absurdly generic interfaces. We, lacking complete foresight, couldn’t judge the total merit of this, so we, a pragmatic bunch, erred on the side of LGTM. Unfortunately (you can probably already smell it), this introduced some less-than-flexible interfaces and components.
Luckily we’re a pragmatic bunch, and soon enough patterns emerged which did allow for flexible extension. In this, I’m always reminded of an adage from Cawfree (hey teammate!). It’s something along the lines of “the code will tell us.” And this is so true. In all of my years in tech, refactoring, abstracting, and/or generalizing after the fact is almost always the path of least resistance. A redesign is always better because it contains fewer assumptions.
The code will tell us.
That was our lesson. It’s not the first time we’ve learned it, and it’s probably not the last time either. Each time, though, gives you better insight into how to prevent or deal with it in the future. And it doesn’t solely concern code; it concerns planning just as much. I believe that if I’d taken a less granular approach in drafting the tasks, this wouldn’t have become as big of a pitfall, if one at all.
Closing off
Wow, this has become a pretty lengthy one. Let me end by giving some insight into things to come. I haven’t mentioned our GraphQL Gateway at all. Not because it has been discontinued; on the contrary, the initial implementation sits snugly in our monorepo. Getting it production-ready, however, required us to take a hard look at our existing infrastructure, which still has a lot of manual circuit breakers. That is something we desperately want to get rid of. The Gateway will be public-facing, so we first have to fiddle with it ourselves, write the right abstractions, and ready the docs.
The work is underway, but that will be a story for next time.
Keep learning! Keep building!
-Rom