This is the OPRE Portfolio Management System, or OPS. The finished product will replace OPRE's prior system, MAPS. The purpose of OPS can be found on the wiki.
At a bare minimum, you need Docker and Docker Compose installed to run the application locally. If you want to do development, you will also need to install Python, Node.js, and pre-commit.
The backend uses RSA keys to sign and verify JWTs. You can generate these keys by running the following commands...
mkdir ~/ops-keys
openssl genrsa -out ~/ops-keys/keypair.pem 2048
openssl rsa -in ~/ops-keys/keypair.pem -pubout -out ~/ops-keys/public.pem
openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in ~/ops-keys/keypair.pem -out ~/ops-keys/private.pem
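As an optional sanity check (not required by the setup), you can confirm that the public and private key files describe the same key pair by comparing digests...
openssl rsa -in ~/ops-keys/private.pem -pubout | openssl sha256    # digest derived from the private key
openssl sha256 ~/ops-keys/public.pem                               # should print the same digest value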
Then export the private and public keys into your shell environment...
export JWT_PRIVATE_KEY=$(cat ~/ops-keys/private.pem)
export JWT_PUBLIC_KEY=$(cat ~/ops-keys/public.pem)
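The keys are multi-line values, so quote the variables when reading them back. A quick check that the export worked...
echo "$JWT_PRIVATE_KEY" | head -1    # should print -----BEGIN PRIVATE KEY-----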
Also, replace the public key file contents in the following locations...
cat ~/ops-keys/public.pem > ./public.pub
cat ~/ops-keys/public.pem > ./backend/ops_api/ops/static/public.pem
N.B. The public key files above are deprecated and will be replaced with the JWT_PUBLIC_KEY environment variable in the future.
We use pipenv to manage our Python dependencies. Follow the directions on their website to install it on your machine.
To install the dependencies, run...
cd ./backend/ops_api/
pipenv install --dev
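If you want to confirm the environment was created and see what was installed, pipenv's standard inspection commands work here...
pipenv --venv    # print the path of the project's virtualenv
pipenv graph     # show the installed dependency tree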
We use bun to manage our Node.js dependencies.
To install the dependencies at the baseline, tested versions pinned in the lockfile, run...
cd ./frontend/
bun install --frozen-lockfile
To install or upgrade the dependencies to the cutting-edge but compatible versions, run...
cd ./frontend/
bun install
We have a Docker Compose configuration that makes it easy to run the application.
To run the application using the Vite development server (which allows hot reloading)...
docker compose up --build
To run the application using the production server configuration...
docker compose up db data-import backend frontend-static --build
To run the application using the minimal initial data set...
docker compose --profile data-initial up --build
To run the application using the demo data set...
docker compose --profile data-demo up --build
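When switching between these configurations, stale containers and volumes can cause confusing behavior. One way to start from a clean slate (note that this removes the local database volume) is...
docker compose down --volumes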
Whether you run the application through Docker or locally, you can access the frontend at http://localhost:3000 and the backend API at http://localhost:8080.
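Once the stack is up, a quick reachability check from the command line (any HTTP response means the server is listening)...
curl -I http://localhost:3000    # frontend
curl -I http://localhost:8080    # backend API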
The backend API tests use pytest.
To run them...
cd ./backend/ops_api
pipenv run pytest
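While iterating, pytest's standard selection flags work as usual; the keyword below is just an illustration...
pipenv run pytest -k "agreement"    # run only tests whose names match a keyword (example keyword)
pipenv run pytest -x --lf           # stop on first failure, rerunning only the tests that failed last time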
Note: For frontend testing purposes, all backend API endpoints can simulate an error response. This is accomplished by passing the simulatedError=true query parameter, which makes the endpoint return a 500 status code. The status code can be customized by passing it as the value, so simulatedError=400 sends back a 400 rather than a 500. This overrides any other processing the endpoint would normally do and simply returns the error response, giving frontend development and testing a simple mechanism for validating behavior under backend error conditions.
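For example, assuming a hypothetical endpoint path such as /api/v1/agreements/ (substitute a real route from the API), exercising the simulated errors with curl looks like...
curl -i "http://localhost:8080/api/v1/agreements/?simulatedError=true"    # returns a 500 response
curl -i "http://localhost:8080/api/v1/agreements/?simulatedError=400"     # returns a 400 response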
The frontend tests are implemented through Vitest.
To run them...
cd ./frontend/
bun run test --watch=false
This runs them once and then exits. You can remove the --watch=false if you want the tests to rerun on each file save.
You can also get code coverage information by running...
cd ./frontend/
bun run test:coverage --watch=false
We require 90% code coverage.
Note: Currently the E2E tests require you to have a local stack running for Cypress to connect to. This can be achieved by running the docker-compose.yml via docker compose...
docker compose up --build
End-to-end (E2E) tests can be run from the frontend directory via...
bun run test:e2e
or interactively via...
bun run test:e2e:interactive
N.B. Running the E2E tests multiple times using the same containers and volumes can lead to unexpected results. It is recommended to run docker system prune --volumes between test runs.
The backend linting is implemented using flake8. We use nox as the runner to execute flake8.
To run linting...
cd ./backend/ops_api
pipenv run nox -s lint
The linter may complain about violations of the Black code formatting. To automatically fix these issues, run...
cd ./backend/ops_api
pipenv run nox -s black
If you're running within a pipenv shell, you may omit the pipenv run prefix and run the commands as nox -s <command>.
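For example...
pipenv shell     # activate the project's virtualenv
nox -s lint
nox -s black
exit             # leave the shell when done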
The frontend linting is implemented through ESLint.
To run linting...
cd ./frontend/
bun run lint
You can automatically fix many linting errors by passing in --fix...
cd ./frontend/
bun run lint --fix
We use pre-commit hooks to help keep our code clean. If you develop for OPS, you must install them.
pre-commit install
These checks will run automatically when you try to make a commit. If there is a failure, it will prevent the commit from succeeding. Fix the problem and try the commit again.
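You can also run the hooks on demand against the entire repository, which is useful right after installing them...
pre-commit run --all-files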
TBD
With the move away from Django, we need to create a new process/tooling for generating the Data Model diagrams from SQLAlchemy or directly from the DB.
When updating the SQLAlchemy models, you will need to generate a new migration script for the database schema. This is done using Alembic.
First start the DB and update it to the latest version...
docker compose up db data-import --build
To generate a new migration script, run...
cd ./backend/
pipenv run alembic revision --autogenerate -m "Your migration message here"
This will create a new migration script in the ./backend/alembic/versions directory. Review the script to ensure it is doing what you expect. If it is, you can apply the migration to the database by running...
cd ./backend/
pipenv run alembic upgrade head
If you need to rollback the migration, you can do so by running...
cd ./backend/
pipenv run alembic downgrade -1
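To check where the database currently sits in the migration history, Alembic's standard inspection commands apply...
cd ./backend/
pipenv run alembic current             # show the revision the database is on
pipenv run alembic history --verbose   # list all known revisions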