- I used squirrel to learn the library a bit and treated this as an educational exercise. I like it, but scanning values into objects needs some love.
- I tried to use the standard library for muxing/route handling/route arguments/query params. I won't do that again. Next time, I'll use something like https://gin-gonic.com/
- Some of the patterns here would need to be fleshed out more for a real solution.
- Tests are obviously missing.
- None of the extra credit was completed.
- There is no `sighting_data` property. I piped through the ability to filter on `sighting_date`, but further discovery would be needed to make that filter genuinely usable before building a real system.
- It seems like there are duplicate Equipments for a single Waybill, which doesn't make sense to me, but that's what gets returned.
- Follow the instructions below to get the Postgres database up.
- Then run the hydration pipeline with `go run cmd/hydrate/main.go`.
- Then start the server with `go run cmd/server/main.go`.
This repo has everything you need to complete the take-home assignment. Know that we are excited about you as a candidate, and can't wait to see what you build!
- Python 3.4+
- Postgres, OR you can run the database via Docker and Docker Compose using the provided `docker-compose.yml` file
- Falcon
- SQLAlchemy - database toolkit for Python
- Alembic - database migrations
The Falcon project scaffold is inspired by falcon-sqlalchemy-template
- Fork and clone this repo onto your own computer
- Start your database server, OR run it via Docker:
  - Copy `.env.sample` to `.env` and set the values appropriately
  - Run the database with the command `docker-compose up -d`
- Depending on the values you used in your `.env` file, set the `SQLALCHEMY_DATABASE_URI` environment variable to point to your database. For example, `export SQLALCHEMY_DATABASE_URI=postgresql://candidate:password123@localhost:5432/takehome`
- Change directory to the `webapp` directory and run `pip install -r requirements.txt` to install the required dependencies
- In the same directory, run `gunicorn --reload api.wsgi:app` to run the web application
The API will be exposed locally at http://127.0.0.1:8000
Run `curl http://127.0.0.1:8000/health/ping` to test your server. It should return the following JSON: `{"ping": "true"}`
It is recommended you create a Python virtual environment for running your project
Add new migrations with `alembic revision --autogenerate -m "migration name"`.
Upgrade your database with `alembic upgrade head`.
- you provide clear documentation
- any code you write is clear and well organized
- you spend no more than 3-4 hours total on the project
- BONUS: you provide tests
In the `data/` directory are 4 files:

- `locations.csv` - a list of locations. The `id` field is the internal, autogenerated ID for each location.
- `equipment.csv` - a list of equipment (i.e., rail cars). The `id` field is the internal, autogenerated ID for each piece of equipment. The `equipment_id` field should be considered the primary key for creating relations to other files.
- `events.csv` - a list of tracking events. The `id` field is the internal, autogenerated ID for each tracking event. The field `waybill_id` is a foreign key to the waybills file. The field `location_id` is a foreign key to the locations file. The field `equipment_id` is a foreign key to the equipment file.
- `waybills.csv` - a list of waybills. A waybill is a list of goods being carried on a rail car. The `origin_id` and `destination_id` fields are foreign keys to the locations file. The field `equipment_id` is a foreign key to the equipment file. The `id` field is the internal, autogenerated ID for each waybill. The `route` and `parties` fields contain JSON arrays of objects. The `route` field details the rail stations (AKA "scacs") the train will pass through. The `parties` field defines the various companies involved in shipping the item from its origin to its destination (e.g., shippers, etc.).
NOTE: All dates are in UTC.
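To make these relationships concrete, here is a rough sketch of how the files might map onto SQLAlchemy models. The class names, column types, and the `sighting_date` column are assumptions drawn from the descriptions above, not the scaffold's actual models.

```python
# Hypothetical SQLAlchemy models mirroring the CSV files described above.
# Names and column types are assumptions; JSON columns hold the route/parties arrays.
from sqlalchemy import JSON, Column, DateTime, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Location(Base):
    __tablename__ = "locations"
    id = Column(Integer, primary_key=True)
    name = Column(String)  # assumed descriptive column


class Equipment(Base):
    __tablename__ = "equipment"
    id = Column(Integer, primary_key=True)
    equipment_id = Column(String, unique=True)  # external key used for relations


class Waybill(Base):
    __tablename__ = "waybills"
    id = Column(Integer, primary_key=True)
    origin_id = Column(Integer, ForeignKey("locations.id"))
    destination_id = Column(Integer, ForeignKey("locations.id"))
    equipment_id = Column(String, ForeignKey("equipment.equipment_id"))
    route = Column(JSON)    # JSON array of rail stations ("scacs")
    parties = Column(JSON)  # JSON array of involved companies


class Event(Base):
    __tablename__ = "events"
    id = Column(Integer, primary_key=True)
    waybill_id = Column(Integer, ForeignKey("waybills.id"))
    location_id = Column(Integer, ForeignKey("locations.id"))
    equipment_id = Column(String, ForeignKey("equipment.equipment_id"))
    sighting_date = Column(DateTime)  # assumed name (see filtering note below); dates are UTC
```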
Implement a data ingestion pipeline that allows you to ingest the 4 CSV files into your database for use with your web application (see user story number 2). Provide clear documentation on how to invoke your pipeline (i.e., run this script, invoke this Makefile target, etc.). Assume that the pipeline can be run on demand and it should drop any existing data and reload it from the files.
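As an illustration only, a minimal "drop and reload" ingestion script could look like the sketch below; it reuses the hypothetical model classes from the sketch above and assumes an `api.models` module layout.

```python
# Hypothetical ingestion sketch: wipe existing rows, then reload them from the CSVs.
# The model classes and api.models layout are assumptions, not the scaffold's API.
import csv
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import Session

from api.models import Equipment, Event, Location, Waybill  # assumed layout

# Order matters: parents before children on insert, children first on delete.
FILES = [
    ("data/locations.csv", Location),
    ("data/equipment.csv", Equipment),
    ("data/waybills.csv", Waybill),
    ("data/events.csv", Event),
]

engine = create_engine(os.environ["SQLALCHEMY_DATABASE_URI"])

with Session(engine) as session:
    for _, model in reversed(FILES):
        session.query(model).delete()
    for path, model in FILES:
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):
                # JSON columns (route, parties) may need json.loads() here,
                # depending on how the models declare them.
                session.add(model(**row))
    session.commit()
```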
Finish implementing the scaffold Falcon app to read data from your database and provide the following routes:
- `/equipment` - data from equipment.csv
- `/events` - data from events.csv
- `/locations` - data from locations.csv
- `/waybills` - data from waybills.csv
- `/waybills/{waybill id}` - should return information about a specific waybill
- `/waybills/{waybill id}/equipment` - should return the equipment associated with a specific waybill
- `/waybills/{waybill id}/events` - should return the events associated with a specific waybill
- `/waybills/{waybill id}/locations` - should return the locations associated with a specific waybill
All the routes should return JSON.
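For orientation, a minimal sketch of a Falcon resource for the `/waybills/{waybill id}` route might look like the following; the `Waybill` model, the `to_dict()` helper, and the module layout are assumptions rather than the scaffold's actual API.

```python
# Hypothetical Falcon resource for /waybills/{waybill_id}.
# Waybill, to_dict(), and api.models are assumptions about the scaffold.
import falcon
from sqlalchemy.orm import Session

from api.models import Waybill  # assumed layout


class WaybillResource:
    def __init__(self, engine):
        self.engine = engine

    def on_get(self, req, resp, waybill_id):
        with Session(self.engine) as session:
            waybill = session.get(Waybill, waybill_id)
            if waybill is None:
                raise falcon.HTTPNotFound()
            resp.media = waybill.to_dict()  # resp.media serializes to JSON


# Wiring it up on the Falcon app:
# app = falcon.App()
# app.add_route("/waybills/{waybill_id}", WaybillResource(engine))
```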
Any event route should allow for filtering by the `sighting_data` field.
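A hedged sketch of that filtering as a query parameter on an events resource is below. Note the field-name mismatch called out at the top of this README (the prompt says `sighting_data`, while the data appears to expose `sighting_date`); the `Event` model and `to_dict()` helper are assumptions.

```python
# Hypothetical events resource supporting ?sighting_date=... filtering.
# Event and its sighting_date column are assumptions based on the notes above.
from sqlalchemy.orm import Session

from api.models import Event  # assumed layout


class EventsResource:
    def __init__(self, engine):
        self.engine = engine

    def on_get(self, req, resp):
        sighting_date = req.get_param("sighting_date")
        with Session(self.engine) as session:
            query = session.query(Event)
            if sighting_date is not None:
                query = query.filter(Event.sighting_date == sighting_date)
            resp.media = [event.to_dict() for event in query]
```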
Note: This user story is optional, and on an "if-you-have-time" basis.
Provide a `/waybills/{waybill id}/route` endpoint that returns information about the route associated with a specific waybill.
Note: This user story is optional, and on an "if-you-have-time" basis.
Provide a `/waybills/{waybill id}/parties` endpoint that returns information about the parties associated with a specific waybill.
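Since `route` and `parties` are JSON arrays on the waybill record, both bonus routes can likely just return the parsed field. A rough sketch under that assumption (the `Waybill.route` column and module layout are hypothetical; the parties route would mirror this):

```python
# Hypothetical resource for the bonus /waybills/{waybill_id}/route endpoint.
# Assumes Waybill.route holds the JSON array from waybills.csv, either as a
# parsed JSON column or a raw JSON string.
import json

import falcon
from sqlalchemy.orm import Session

from api.models import Waybill  # assumed layout


class WaybillRouteResource:
    def __init__(self, engine):
        self.engine = engine

    def on_get(self, req, resp, waybill_id):
        with Session(self.engine) as session:
            waybill = session.get(Waybill, waybill_id)
            if waybill is None:
                raise falcon.HTTPNotFound()
            value = waybill.route
            # Parse raw strings so the API always returns a JSON array.
            resp.media = json.loads(value) if isinstance(value, str) else value
```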