futuro/its-all-gravie

It’s All Gravie!

A job interview project posing as a Library you can borrow games from.

Live site: https://its-all-gravie.justenough.software/

Overview

This repo is my response to the prompt posed by Gravie Inc. in their project repo. In this README I cover installation, the high-level design decisions, and the project’s triumphs and tribulations.

Installation

Before I get into the what and why of the higher-level choices I made, I’m going to cover how to get a local development version up and running, in the event you want to see it in action right away.

Dependencies

This project requires Node.js v6.0.0 or newer (because Shadow CLJS requires it), though I used v20.5.1, and a Java SDK of at least version 11 (again, because Shadow CLJS requires it), though I used OpenJDK 20.0.1.

How you install each of those will depend on your OS and particular setup, but your OS’s package manager likely has a usable version available.

Running it locally

There are two parts to this app (described in more detail in later sections): Shadow CLJS and Wrangler. Shadow handles compiling the CLJS files into JS suitable for the browser, and Wrangler serves as our backend.

For Shadow CLJS, you can either start it from your editor using whatever ClojureScript editor integration you’ve got, or you can use an npm script to start the server. If you use Emacs, and have installed Cider, you can likely use cider-jack-in-cljs to start the Shadow server and connect to it.

Both Shadow and Wrangler start long running processes, so, if starting both with npm, run the following commands in two separate terminals:

Shadow CLJS:

```shell
npm run watch
```

Wrangler:

```shell
npm run local-dev
```

Then visit the URL that Wrangler gives you, likely http://127.0.0.1:8788/. Depending on whether Shadow has finished compiling the CLJS files, you might have a short delay, but soon enough you’ll be staring at the site running locally!

Seeing it live from a different machine

If you don’t want to do all of that, but do want to see the app live, you can go to https://its-all-gravie.justenough.software/, which, assuming it’s still running by the time you come across this README, is a site running on Cloudflare’s Pages technology.

Triumphs

This repo, and this doc, are meant to impress the folks at Gravie, and to move me forward in their interview process, so let’s put some triumphs right up front!

  • I learned re-frame from scratch in about a day.
  • I learned just enough HTML/CSS/JS to adapt existing web tech, without getting caught in any serious complexity traps.
  • I debugged some pretty gnarly issues in tech I was largely unfamiliar with, avoiding major delays in getting the project out the door.
  • I built all of this over about a week and a half, part time, with very little pre-existing knowledge or experience with any of the tech involved.
  • I had fun!

Perhaps I should try to hype up more of the technical aspects of that list, or pull out more of what I accomplished to make it flashier, but, honestly, I feel like the greatest triumph is that I had fun. The interview process is often a harrowing one, where you need to be technically competent as well as good at selling your background and skills to people you’ve never met before, and about whom you know almost nothing, including what they’re truly looking for. It can be nerve-wracking, and the fact that I had fun, and got to learn about and use some tech I’ve been curious about for a while, feels like a true triumph to me.

High-level Design Decisions

With that out of the way, let’s talk about the high level design decisions made for this project. Here I’m using design in a broader sense than just deciding the look of the UI, but encompassing all aspects of building this project.

Since no design happens in a vacuum, we’ll start by discussing the context this project existed and exists in, then tackle the various decisions made, starting at the proverbial 30,000 foot view, and zooming in.

I won’t promise that this will be an exhaustive review of all of the decisions made, but it should cover at least the foundational ones.

Project Context

It’s for an interview

The first, and perhaps most important, piece of context is that I created this project as part of the interview process for a Senior Software Engineer position at Gravie Inc., and thus the most important goal for this project is that it get me a job at Gravie, or at the very least progress me to the next stage of the interview process.

One piece of context to add up front is that Gravie is aware that the majority of my professional experience is in the backend, and primarily (though not exclusively) in Clojure.

With that in mind, here are some parts of the job posting that struck me as both relevant, and also something that can be shown in a solo sample project.

You will:

  • Work towards a goal of continuous deployments. We currently deliver changes within two-week iterations culminating in a release, but understand the value of more frequent continuous delivery, and are adapting our tools and processes to support deployments as soon as changes are ready
  • Work on a major ongoing architecture overhaul that affects all services, infrastructure, and supporting processes
  • Manage the production operations of the services that your team owns, and incorporate changes into the current development to improve operations
  • Demonstrate commitment to our core competencies of being authentic, curious, creative, empathetic and outcome oriented

From this I’ll pull the following goals/bonus points to add to the project context:

  • Show some kind of continuous deployment functionality
  • Demonstrate authenticity, curiosity, creativity, empathy, and outcome orientation

You bring:

  • Solid programming background and a passion for writing code. You are eager to learn more and enjoy providing and receiving critical feedback
  • Advanced programming experience in at least a few of the following programming languages: Clojure/ClojureScript, Groovy, Python, Java, JavaScript, Elixir, Kotlin
  • Knowledge and experience with different programming paradigms such as functional programming, object oriented, and declarative programming
  • Experience with Clojure/ClojureScript, Groovy/Grails and JavaScript frameworks such as React, Ember, Vue.js, or AngularJS
  • Solid knowledge of key value stores, SQL, and relational databases; preferably MySQL
  • Have a great understanding of the value of automated tests, and ability to implement them across the whole stack
  • Solid understanding of working in Linux shells
  • Ability to collaborate with designers, product owners, and other cross-functional team members
  • Experience working across the full stack, from user experience, to API design, to infrastructure
  • Demonstrate commitment to our core competencies of being authentic, curious, creative, empathetic and outcome oriented.

From this I’ll pull the following as goals for the project:

  • Show an eagerness to learn things
  • Demonstrate advanced programming experience in CLJ/CLJS/JS
  • Demonstrate knowledge and experience with different programming paradigms
  • Demonstrate experience with CLJ/CLJS and React

I love automated tests (and am curious about writing frontend tests with CLJS), but writing tests is going on the future work list, for reasons I’ll explain in a later section.

All of the collaboration elements above – providing/receiving critical feedback, collaborating with cross-functional team members, demonstrating empathy – are hard (impossible?) to demonstrate in a solo project, though I love doing those things (and secretly/not-so-secretly wish part of the project involved working with other cross-functional team members).

Extra credit:

  • Experience with Docker and containerized environments
  • Experience with Serverless technologies and AWS Lambda
  • Experience with client side unidirectional data flow patterns
  • Knowledge of building out pipelines using infrastructure-as-code tools such as AWS CDK

From this I’ll pull the following as goals for the project:

  • Demonstrate experience with serverless technologies
  • Demonstrate experience with client side unidirectional data flow patterns

I wanted to leverage Cloudflare’s Pages technology for the static assets, and the Pages Functions functionality for the backend serverless code, so I’m not going to touch on containers or AWS Lambda in this project, though it would be fairly straightforward to add both.

I’m also not going to touch on infrastructure as code. Even though setting up, say, Terraform for Cloudflare isn’t that difficult, it tends to take a bit of time, and I’m not confident it’d have a sufficiently positive impact on my interview process when balanced against the time it’d take.

The project itself

The second piece of context is the synopsis from the original readme:

For this challenge you will consume the Giant Bomb API to create an application that will allow a
user to search games and "rent" them. The application should consist of at least two unique pages
(`search` and `checkout`). Your view should display the game thumbnail and title, and the rest is up
to you. You can use any language and or framework you'd like. 

From which we can add that our app must have or do the following:

  • Have a search page
  • Allow users to search for games using the Giant Bomb API
  • The games displayed should show the game’s thumbnail and title
  • Have a checkout page
  • Allow users to “rent” said games
  • Every other decision is up to us
  • We can use any language and framework we’d like

This is a good start for a problem description, but it’s also pretty sparse, which had me concerned that building just that functionality, and putting little to no effort into styling or a couple of extra features, would leave a bad impression.

(Sidebar: Why might it leave a bad impression, you ask? Perhaps part of the “test” of the project is that building only what the synopsis states is actually insufficient for moving to the next round, but no one would say so, and I’d be rejected even though I could have built more. I’ve had interviews at other companies that worked like that, which was a bummer, as I could have built what they wanted had they asked for it.)

As such, I reached out to the folks at Gravie who’d posed the project and asked the following question:

How much is enough?

When given a somewhat open ended prompt, I can tend to over-polish it, never quite sure if the prompt-as-written is enough to move on to the next stage, or if there’s secretly more being hoped for. I normally work with stakeholders on projects to resolve ambiguities, but in the case of interview-specific projects it’s never immediately clear who the stakeholders would be, or how much time they’d like to spend hashing out ambiguities.

So, to avoid endlessly working on this project and never actually presenting it, my current plan was to build specifically what was asked for in the README and then check in with you both to see if that was sufficient to engender confidence in moving to the next phase, or if there were specific things you were hoping to see that I hadn’t covered yet. I’d like to make sure the work I’m doing is giving good signal for the things you’re looking for, and this seemed like the simplest approach to me.

Does that sound like a good approach for you both? I’m also open to other approaches, so I welcome alternatives :)

Gravie replied:

Keep in mind that this is just a sample of your work, it is not expected to be production ready code!

Perhaps my favorite part of the project is the discussion with you about everything else that would have to be done to take it further. One good approach to that is to keep a running list in a README about future work as if it were to be taken all the way to production.

In short, show us your work with the intent to impress us AND to stimulate further discussion.

From this we can add the following pieces of context:

  • Their expectation is that this is only a sample of my work, from which I presume that having some rough edges is ok
  • Having a list of things I didn’t build, or would build next, is a good signal for Gravie
  • Whatever I build, I should build it with the intention of impressing them, and also with the intention of stimulating further discussion

I’m both grateful for the response – everyone at Gravie has been really lovely, and I’m not saying that just cause they might read this 😂 – and also I would have loved more specifics on what they find impressive, as the list of possibly impressive things is likely infinite.

That said, I’d only known these folks for a 45-minute interview, and wasn’t sure if seeking more details about what impressive means to them would come across well or not. Since I couldn’t be sure what kind of impact that’d have on my prospects, I chose instead to build what would impress me, and hope that they’d also find it impressive (and hopefully ask for anything else they wanted to see).

Context Summary

So, as a list, here’s the context influencing all decisions for this project:

  • The core goal of the project is to impress the folks at Gravie well enough to move me to the next phase of the interview process
  • Gravie knows that the majority of my experience is on the backend
  • Show some kind of continuous deployment functionality
  • Demonstrate authenticity, curiosity, creativity, empathy, and outcome orientation
  • Show an eagerness to learn things
  • Demonstrate advanced programming experience in CLJ/CLJS/JS
  • Demonstrate knowledge and experience with different programming paradigms
  • Demonstrate experience with React
  • Demonstrate experience with serverless technologies
  • Demonstrate experience with client side unidirectional data flow patterns
  • It must use the Giant Bomb API to search for games
  • It needs a discrete search page
  • Each game displayed must show the game’s thumbnail and title
  • It needs a discrete checkout page
  • It must allow users to rent games
  • Every other decision is up to me
  • I can use any language and/or framework I want
  • Their expectation is that this is only a sample of my work, from which I presume that having some rough edges is ok
  • Having a list of things I didn’t build, or would build next, is a good signal for Gravie
  • Whatever I build, I should build it with the intention of impressing them, and also with the intention of stimulating further discussion
  • I do not actually know what they would find impressive, so I will instead aim to impress myself and hope that we happen to find the same things impressive
  • Taking longer on the project has a risk of diminishing how impressive Gravie finds it, so I need to incorporate speed of delivery into the equation when making design decisions

30,000ft View

Now that we know the influencing forces behind the project, let’s sort out some of the major decisions.

First off, we know that we need to have a UI, and thus some kind of frontend, and, since Giant Bomb doesn’t implement CORS, we’ll also need a backend since browsers will block cross-origin requests to any resource that doesn’t include the right CORS headers. This is inconvenient for our small project – which will never actually see production – but very good for the world, so we’ll add a backend.

We’ve got the following data needs:

  • An API key with which to make search requests to Giant Bomb
  • A way to store the search results so that they can be rendered to the user
  • A way to store the games a user wants to rent, specifically to support a checkout page
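To make those needs concrete, here’s a hypothetical sketch of the client-side state they imply; the real app-db’s keys differ, and these names are purely illustrative:

```javascript
// Hypothetical shape of the client-side state the needs above imply.
// The real app-db's keys differ; these names are illustrative only.
const initialAppState = {
  apiKey: "",        // the user's Giant Bomb API key, attached to searches
  searchResults: [], // games from the last search, rendered on the search page
  rentals: [],       // games the user has chosen to rent, backing checkout
};
```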

How much backend, and how much frontend?

The vast majority of my experience is with backend code, so it’d be reasonable to assume that I’d want to lean heavily into the backend and make a sparse frontend. That, however, isn’t the direction I decided to go in, and here’s why.

First, from my experience with consulting, and from working with PMs/Users/non-technical stakeholders, I have first hand experience that a sparse or ugly UI immediately leaves a bad impression that can overshadow everything else that’s going on. It’s the classic sell the sizzle, not the sausage adage, and when mixed with the fact that I don’t know what Gravie will find impressive, I’m going to try to lean into the sizzle more.

Second, having worked so much in the backend, and knowing the limited scope of this project, I know that there aren’t any computational constraints – such as fast CPU or lots of memory – that would benefit from having a backend. Everything that needs to be done – save the CORS part – can be done in just about any modern browser as well as in any backend.

Third, there’s not a lot of novelty, for me, in building a backend for this project, which also leads me to feel less impressed by writing one. Now, Gravie doesn’t know me well, so they might see whatever kind of backend I’d write – likely something using reitit, malli, and pedestal, and probably MySQL, because Gravie uses it and it’d be good to incorporate tech they’re using – and be impressed, but I can’t know that with any confidence. At this point in the CLJ ecosystem’s lifecycle, the kind of backend this project would need is pretty bog-standard, and thus I don’t think it’d stand out enough, or properly give a sense of the scope of work I can do.

So, given all of that, I decided to put most of the work into building the frontend, and keep the backend as simple as possible. This meant, in effect, making it a simple proxy for the Giant Bomb API. I’ll talk more about the specific choices around building that in a later section.

Since our backend will be a simple proxy for Giant Bomb, we’ll meet our data needs in whatever frontend tech we choose.

How much infrastructure?

Similar to the backend, I’ve done a lot of ops/devops/infrastructure work, and know that there isn’t, fundamentally, a lot of interesting infrastructure needed for this project: something to serve the frontend assets, and something to handle HTTP requests sent to the backend. It gets a little more complicated if we want a live version running somewhere – at minimum DNS records, some amount of networking, and one or two somethings serving assets and handling backend requests; I don’t memorize every possible combination of infra, because it’s simple enough to put the pieces together once you begin – but a live version isn’t required for this project.

It is, however, impressive to have a live version, which means I’d like to have one while spending as little time and effort on infrastructure as possible.

10,000ft View

Zooming in, we now need to make choices about the major frontend and backend tech we’re going to use.

Frontend Tech

Now, I’ve decided to put the majority of my efforts into building a frontend, but I’ve got very little experience building frontends, which means just about every choice is a novel one, and I need to be aware of, and avoid as best as possible, potential complexity traps, since the frontend is a relative ocean of tech choices.

Among these potential complexity traps are:

  • HTML: everything’s a div, except when it isn’t, or shouldn’t be. I personally really like semantic HTML, but don’t know all the various element types, which is exactly why this is a complexity trap.
  • CSS: important for making something look nice, but, from the various FE coworkers I’ve chatted with and the various blogs I’ve read, this sounds like an even bigger ocean than HTML.
  • JS: keeping the backend as a simple proxy means I’ll need to leverage JS in some form. It’s possible that HTML/CSS/web tech has advanced enough that this isn’t true anymore, but that’s well outside my wheelhouse, so I’m going to move forward with a JS SPA/UI framework. This also includes things like the Fetch API requiring CORS from a remote resource before it will hand a JSON payload back to the JS that requested it, though that’s a pretty shallow trap.

Since I don’t have extensive experience with frontend development, every tech choice’s set of potential complexity traps is, as far as I can tell, roughly the same size (infinite), with the following exceptions:

  • React
  • CLJS
  • Bootstrap/Material UI

Long, long ago, in what is now, perhaps, ancient history for the Web (2015, to be precise), I worked on a project using Om Next (a CLJS framework over React), DataScript (an immutable CLJS DB), and Bootstrap (then just a CSS library, if memory serves).

I’ve also worked very sparingly on a project that was using CoffeeScript and Angular 1, which gave me an introduction to the various JS build tooling.

From those experiences, I definitely prefer ClojureScript and its build tooling, and I also learned that, as much as possible, it’s best to follow the crowd with JS libraries, as most issues you run into will already have readily available answers on the web.

Lucky for me, Gravie is using re-frame, which is a CLJS framework (built on reagent) for React, and doubly-lucky for me, I’ve been really curious about re-frame for a while and was looking for a reason to learn it!

For the styling, I’d like to leverage an existing CSS framework, with a preference for one that’s been proven to work well with re-frame/reagent, to minimize the number of complexity traps I might fall into.

Backend/Infrastructure Tech

Solving this decision meandered for a while, as I tried to find out-of-the-box proxying solutions that would take a request, rewrite only the protocol/host/port portions, and forward it on. After looking into tinyproxy and socat for a bit and hitting dead ends, I realized that my premise – at the time it was don't build a backend at all – was limiting my understanding of the solution space. A proxy that just rewrites the protocol/host/port, forwards the request on, and returns the response is just a backend server, and I can very easily write code that takes a request, pulls out the bits it needs, hits the Giant Bomb API, and returns Giant Bomb’s response.

Once I’d gotten past that mental hurdle, I realized that all I actually wanted to build was something to handle the request, and skip all of the server-starting, http-receiving, route-handling, etc-backend-stuff.

After experimenting with writing a little script to run with Node, I remembered that Cloudflare’s Pages and Pages Functions offering fits exactly with my goals for this project, and has the added benefit of letting me publish a live version of the project. I still needed a local something to act as my backend server, and luckily the local dev story with Pages is really great.

1,000ft view

Down at the 1,000ft level, we can get a bit more specific about the tech we’re going to use, and to what end.

Backend/Infrastructure

I had gained familiarity with the Pages and Pages Functions offerings during the early work on my blog series `autoflare`, and had already sorted out how to turn a Shadow CLJS project into a deployable Pages project in my Serving Up Fulcro post.

Unsolved, however, was whether to build the backend capability in CLJS and compile it to something that Functions can use, or minimize complexity and novelty by sticking to the language used in the various guides for Functions, which was JS.

While it was tempting to try to build the functionality in CLJS and compile it to a JS file – I’ve been curious about doing that for a while now – I decided to stick with the goal of doing as little work as possible for the backend, minimizing potential complexity traps, and went with a JS file.

I was lucky and found this example for fetching JSON, which was precisely the functionality I was looking for. Since I was specifically not trying to build a backend, and not trying to demonstrate my skills with building backends, I copied that example, with attribution, into functions/api/search.js and did some minimal tweaking to get it to pass requests to Giant Bomb.

As part of this work, I chose to reuse the URL path and search params from the client side to minimize the boilerplate I needed to write.

Were this an app I planned to expose to the world, I likely wouldn’t allow clients to hit random parts of the Giant Bomb API, and instead have a subset we supported. For this project, since any user of the live version of the app has to put in their own API key, I figured any potential abuse vectors would be rendered pointless, since the user has to attach their identity to their requests.
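The pass-through approach described above can be sketched as a Pages Function. This is a hedged illustration of the pattern, not the repo’s actual functions/api/search.js; the names `GIANT_BOMB_ORIGIN` and `rewriteToGiantBomb` are mine, and only the handler shape (an `onRequestGet` receiving a context with a `request`) comes from the Pages Functions model:

```javascript
// Sketch of a pass-through proxy in the shape of a Cloudflare Pages Function.
// The real functions/api/search.js differs; these names are illustrative.
const GIANT_BOMB_ORIGIN = "https://www.giantbomb.com";

// Keep the path and search params from the client request, rewriting only
// the protocol/host/port, so the client picks the endpoint and API key.
function rewriteToGiantBomb(requestUrl) {
  const incoming = new URL(requestUrl);
  const upstream = new URL(GIANT_BOMB_ORIGIN);
  upstream.pathname = incoming.pathname;
  upstream.search = incoming.search;
  return upstream.toString();
}

// In a real Pages project this handler would be `export`ed from the file.
async function onRequestGet({ request }) {
  const upstream = await fetch(rewriteToGiantBomb(request.url), {
    headers: { accept: "application/json" },
  });
  // Hand Giant Bomb's body back; from the browser's view it's same-origin.
  return new Response(upstream.body, { status: upstream.status });
}
```

Because only the origin is rewritten, the client stays in charge of which Giant Bomb endpoint it calls and which API key it sends, which is what keeps the backend boilerplate minimal.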

Styling

For the styling, I wanted something that looked nice, but also could work easily with re-frame, and would work reliably.

I checked out re-com, but their website had a big warning that it was only tested on Chrome, and even though it should work, I didn’t want to risk Gravie looking at it on a browser other than Chrome and it looking funky. Also, I use Firefox and, when things are breaking, I don’t want to have to wonder if it’s because of how re-com functions in Firefox.

I then took a look at Material UI (now MUI), and attempted to use it directly as a JS dependency, but hit an issue where I couldn’t require the Button element’s namespace because of a missing _system.keyframes function deep inside MUI.

Wondering if this was because I was using the JS library incorrectly in my CLJS project, I looked for a re-frame/reagent wrapper and found reagent-material-ui. I had the same issue while using this library, so knew it was something with MUI itself, or with my JS dependencies. As a bonus, I learned about the need for reagent.core/adapt-react-class when interacting with JS React libraries, which I certainly would have stumbled on had I been able to require MUI right from the start.

Eventually, after much searching and comparing my project against the reagent-material-ui example project, I upgraded React to React 18 on a hunch and the issue went away.

Routing/history

I need to build two pages – search and checkout – and I could have done this with just a :current-page db key and some case statements in the root component, but I wanted to see what it’d take to get history support and linkable pages.

As part of that, I wanted my route definitions to be data – being pure data makes them dead simple to introspect, among many other benefits – and I was already aware of bidi so I was inclined to use that.

This left history manipulation as the final piece to address, and luckily the re-frame docs had a link to a blog post of someone doing routing with Silk (routing) and Pushy (history). After a brief review of Pushy, and how it interacts with Bidi, I had the routing and history sorted.

As a minor aside, I’ll mention that this app wasn’t so complex that I needed a full-blown routing library, but it’s reasonable that a full-featured app might need that, and I wanted an opportunity to try building this out. It’s more than the bare minimum required, but I’m glad I took the small amount of time to get comfortable with it.

Persistence?

I went back and forth on whether to build in persistence, as my intuition told me that it could be a rather large complexity trap. I knew that I didn’t want to build in backend persistence, as I’d already chosen to build as little as possible in the backend, which left frontend persistence.

I eventually decided to implement frontend persistence to work around a gnarly navigation bug I had already put in the Future Work bucket. The re-frame external resources page again came in handy, listing two libraries (re-frame-storage-fx and re-frame-storage) that handle HTML5 Web Storage, and I decided on re-frame-storage because it has a built-in function for building an interceptor.

Were this project a real product, I may have dug a bit deeper into the underlying tech both libraries are using, to assess them more thoroughly, but the scope of this project didn’t warrant spending that much effort.
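The interceptor pattern amounts to “write the app state through to Web Storage after every event, and read it back at startup.” Here’s a hedged sketch of that idea in plain JS, not re-frame-storage’s actual API; `persistAfterEvent`, `rehydrate`, and the storage key are all made up for illustration, with `storage` injected so the sketch runs outside a browser (in the app it would be `window.localStorage`):

```javascript
// Hedged sketch of the persistence pattern a storage interceptor provides.
const STORAGE_KEY = "its-all-gravie";

// Wrap an event handler so the state it produces is mirrored into storage.
function persistAfterEvent(handler, storage) {
  return (state, event) => {
    const next = handler(state, event);
    storage.setItem(STORAGE_KEY, JSON.stringify(next));
    return next;
  };
}

// On startup, merge whatever was stored over the default state.
function rehydrate(storage, defaults) {
  const raw = storage.getItem(STORAGE_KEY);
  return raw ? { ...defaults, ...JSON.parse(raw) } : defaults;
}
```

With this shape, a page reload (including the navigation bug’s errant reloads) restores the rentals list instead of losing it.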

HTTP Requests

Returning to our friend, re-frame’s external resources page, we see two libraries for making HTTP requests:

  • re-frame-http-fx, which uses AJAX to make the requests
  • re-frame-fetch-fx, which uses the JS Fetch standard to make requests

There was no appreciable difference I could see between these two, but I didn’t have enough knowledge of the state of the art for making HTTP requests in the web space, so I searched the web for “should I use js Fetch or xhrio” and found this stackoverflow answer, which said:

fetch is newer and built around Promises, which are now the prefered way to do asynchronous operations in JavaScript

Since past experience has told me that doing things differently from the preferred way in JS-land tends to make your life harder, I chose to go with re-frame-fetch-fx.
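The promise-based flow that answer refers to looks roughly like this; `fetchJson` is a hypothetical helper of my own, not re-frame-fetch-fx’s API, with the fetch implementation injectable purely to keep the sketch testable:

```javascript
// Illustrative promise-based JSON fetch, the style re-frame-fetch-fx wraps.
// `fetchImpl` defaults to the global fetch in a browser (or Node 18+); it's
// a parameter here only so the sketch can be exercised with a stub.
async function fetchJson(url, fetchImpl = fetch) {
  const response = await fetchImpl(url);
  if (!response.ok) {
    throw new Error(`HTTP ${response.status} for ${url}`);
  }
  return response.json();
}
```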

Debugging

I knew I’d want something to help me understand and debug the re-frame app, and the two options highlighted on the re-frame external resources page were:

  • re-frame-10x
  • re-frisk

Honestly, I couldn’t tell why one might be better or worse than the other, so I chose re-frame-10x because it’s from the same folks making re-frame. It worked pretty well, and I’d like to try out re-frisk at some point, just to see what it’s like.

Tribulations

In this section I’m going to talk about some of the more confusing speed bumps I hit along the way. Some of these have resolutions, and some just have workarounds.

Errant page navigation

Let’s start with something that seems to have found a resolution, but vexed me for a good portion of this project: errant page navigation.

See, with Bidi and Pushy, clicking a link element isn’t supposed to actually cause browser navigation, as that would throw away all of your transient state and be disruptive to the expected SPA experience. Instead, it should run a particular function on a link match and update the browser history and window location with the Google Closure goog.history.Html5History class.

However, something in how I’d originally set up the history handling caused, for reasons I couldn’t understand, some, but not all, clicks on navbar links to actually initiate browser navigation.

Originally, I’d set up a function to run before Shadow loaded new code in the routes namespace, which would stop the old history var’s event listeners and then start the newly defined history var’s event listeners. The goal was to avoid redefining the history var – and thereby losing the ability to stop the event listeners it had created – while still being able to redefine the match function.

Eventually it dawned on me that, similar to how I handle records – exactly like them, in fact, since both end up creating objects – I can put all of the actual work into a function definition that then gets referenced, or passed in, to the object definition, relying on dev-time dereferencing to pick up whatever changed functionality I’ve implemented.

With that recollection in hand, I changed my def history to a defonce history, dropped the special function, and the nav issue went away.
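To illustrate the difference (in JS rather than the project’s ClojureScript, and purely as a sketch of the idea): `defonce` evaluates its init expression only if the var doesn’t already exist, which behaves roughly like the following guard:

```javascript
// Illustration of what `defonce` buys during hot reload: the initializer
// runs only the first time, so reloading keeps the same history object
// (and its registered listeners) alive instead of creating an orphan.
let initCount = 0;

function createHistory() {
  initCount += 1;
  return { listeners: [], id: initCount };
}

// `defonce`-style: keep the first value across re-evaluations of this code.
globalThis.__history = globalThis.__history ?? createHistory();

// A plain `def` would instead re-run createHistory() on every reload,
// orphaning the old object and the event listeners attached to it.
```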

Can’t require MUI Button component

Whenever I would attempt to bring in the MUI Button component, either by requiring the JS library directly, or through reagent-material-ui, it would throw an error complaining that _system.keyframes is not a function.

Try as I might, I could not find a solution to this via searching, nor through digging through the MUI codebase, but eventually I looked at the package.json for the reagent-material-ui example project and, comparing dependencies one by one, saw that I was using React 17 while the example project was using React 18. As this was the only seemingly meaningful difference between our two projects, I tried upgrading React to version 18, and the error went away.

MUI says it works with React 17, so I have no idea why this didn’t originally work, but the upgrade fixed it, which is good enough for me.

Using React 18’s createRoot caused re-frame subs to be deleted

Once I started using React 18, it began complaining in the dev console that ReactDOM.render wasn’t supported anymore and that I should switch to createRoot.

Since deviating from the happy path means you’re more likely to run into bad times in the JS world, I decided to see if reagent, and thus re-frame, supported the new `createRoot` yet.

This initially went well, until I started hitting some navigation bugs where clicking a nav link wouldn’t update the page. After looking in the re-frame-10x UI, I saw that the history event was firing properly, which meant the subscription in the root component must have been busted.

I then looked at all of the events around the time the issue happened, and saw that the subscription for which page to display was indeed getting deleted.

Since this behavior only started after I’d switched over to using `createRoot`, I suspected that was the cause, and reverted it. This resolved the issue, though I still don’t know why.

I’m assuming that whatever machinery re-frame uses to keep track of which subscriptions are still in view hasn’t been upgraded to work with the new `createRoot` functionality, and it’s mistakenly believing the root component isn’t in view anymore.

But that’s just a hunch. A mystery for another day 😄.
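For reference, the revert amounted to staying on the classic mount call through reagent.dom (the component and element names here are illustrative):

```clojure
(ns app.core
  (:require [reagent.dom :as rdom]))

(defn root-view []
  [:div "app goes here"]) ;; stand-in for the real root component

;; The React 17-era mount path. Newer reagent releases also ship a
;; reagent.dom.client namespace for the createRoot path, which would be
;; the thing to retry once the subscription issue is understood.
(defn ^:dev/after-load mount-root []
  (rdom/render [root-view] (js/document.getElementById "app")))
```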

Fetch doesn’t return JSON to JS without CORS

After I’d hooked up re-frame-fetch-fx and sorted out some of the minor issues with hitting the Giant Bomb API (e.g. Giant Bomb doesn’t implement CORS on their side), I was getting successful responses from Giant Bomb, but with an empty payload.

Wondering if this was a client oddity or a library oddity, I copied the request as cURL and ran it from a terminal, successfully getting a JSON payload back.

I then did some searching and meandering around the web, eventually landing on the MDN CORS page, and learned that, in browsers, AJAX and Fetch requests will only expose a JSON payload to JS code if the request either satisfies the same-origin policy or hits a remote resource that implements CORS.

I didn’t know this before I started this project, and, being ignorant of this fact, had hoped to only build a frontend. Learning this about browsers, however, led me to building the leanest backend I could to satisfy the same-origin policy and not worry about CORS.
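In sketch form, the frontend now fires its search at a same-origin path and lets the Wrangler backend forward it to Giant Bomb. The event and endpoint names here are mine, not necessarily the project’s, and I’m assuming re-frame-fetch-fx’s documented `:fetch` effect keys:

```clojure
(ns app.events
  (:require [re-frame.core :as rf]
            [superstructor.re-frame.fetch-fx])) ;; registers the :fetch effect

(rf/reg-event-fx
 ::search
 (fn [_ [_ query]]
   ;; Same-origin URL: the browser will hand the JSON body back to JS,
   ;; and the backend proxy deals with Giant Bomb (which has no CORS).
   {:fetch {:method                 :get
            :url                    "/api/search"
            :params                 {:query query}
            :response-content-types {#"application/.*json" :json}
            :on-success             [::search-succeeded]
            :on-failure             [::search-failed]}}))
```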

Game card spacing on the checkout page

After I’d built the home page, sans borrowed games, I built the search page, displaying returned games with a MUI Grid2 and MUI Cards. These didn’t look great, but they didn’t look terrible, and that was the only bar we had to hurdle, so I was happy with them.

Next I went to build the checkout page, and, though I’d thought about various checkout UX flows, I realized that having a fancy checkout page wasn’t the point, and opted to display just the games in the cart with the same components as the search page. This didn’t work out precisely as planned, as the cards in the grid were way too thin, while they were an alright width on the search page.

After comparing the search and checkout page components, and trying to make them as similar as possible in hopes that I’d stumbled on something in the search page which I’d missed in the checkout page, I came up blank.

After reviewing this lovely guide on Flexbox from CSS-Tricks multiple times, and messing around with the styles in Firefox’s dev console, I learned that `flexGrow: 1` on the grid and cards led to the checkout page cards looking OK, without adversely affecting the search page cards.

The one downside was that, on the home page, if you only borrow one game, its card is HUGE.

Since demonstrating expert abilities with CSS wasn’t a goal for the project, I left it at that, but I learned that you can’t really get away without learning CSS if you want to build web apps, even when leveraging a CSS/component library.
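The fix, in illustrative hiccup (these aren’t the project’s actual components, just the shape of the style change):

```clojure
;; flex-grow 1 on both the grid cell and the card lets a lone card
;; stretch to fill its row instead of collapsing to its minimum width.
[grid {:xs 12 :sm 6 :md 4
       :style {:display "flex" :flex-grow 1}}
 [card {:style {:flex-grow 1}}
  [card-content
   [:h3 (:name game)]]]]
```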

Future Work

This is a non-exhaustive list of things I’d build next, were this a real product.

Unless otherwise specified, the reason I didn’t build these things is the same for all of them: they didn’t strike me as being strictly necessary for the project, nor sufficiently different from everything else that I was building to be demonstrative of my skills in a way that the other code wasn’t. Thus they didn’t seem worth building up front.

Pagination

The Giant Bomb API returns paginated results, and so, were we to turn this into a real product, we’d need to support pagination.
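A sketch of what the client side might track, assuming page-style parameters on the backend’s search endpoint (event names and parameters are illustrative, and the exact Giant Bomb parameters would need checking against their API docs):

```clojure
;; assumes re-frame.core aliased as rf and re-frame-fetch-fx loaded
(rf/reg-event-fx
 ::next-page
 (fn [{:keys [db]} _]
   (let [page (inc (get db :search/page 1))]
     {:db    (assoc db :search/page page)
      ;; Reuses the same-origin search endpoint, with a page param the
      ;; backend translates for Giant Bomb.
      :fetch {:method     :get
              :url        "/api/search"
              :params     {:query (:search/query db)
                           :page  page}
              :on-success [::search-succeeded]
              :on-failure [::search-failed]}})))
```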

Handling search errors

We don’t communicate search or network errors in any user facing manner, and that’d need to change if we were to turn this into an actual product, since silent failures tend to destroy the user experience.

Progress spinners while waiting for search results to come back

It’s important to communicate to your users that something is happening when they can’t see it, and progress spinners, or some other indicator that the search results are still pending, are how I’d communicate that to users.
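A minimal sketch of the shape this could take (names are illustrative; `circular-progress` stands in for MUI’s CircularProgress wrapper):

```clojure
;; assumes re-frame.core aliased as rf
(rf/reg-sub
 :search/loading?
 (fn [db _]
   (:search/loading? db)))

;; The search event would set :search/loading? true when it fires, the
;; success/failure handlers would clear it, and the view just swaps.
(defn results-panel []
  (if @(rf/subscribe [:search/loading?])
    [circular-progress]
    [search-results]))
```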

Tests

Aside from the general boon that automated tests give to your confidence around the safety of changes, frontend testing is one of the areas I have less experience in, and would like to get more experience in. I’ve had a lot of experience manually testing frontend changes, or backend changes that impact the frontend, and I’d love to learn how to automate away that tedium.

Infrastructure as Code

Hand-rolling your infrastructure introduces delays, mistakes, and risk, so I would incorporate Terraform for managing the infrastructure as code.

Backend persistence

An app like this would need some kind of user-agnostic persistence for things like authz, preferences, tracking of borrowed and returned games, and so on, which would live in the backend.

Exactly what I’d use would depend a lot on expected use cases, but my first thought is to look at Cloudflare’s Durable Objects and Workers KV offerings.

A Domain Model

This goes along with the backend persistence; we’d want to have a model for the various kinds of things in our product domain.
