Initial OpenAPI spec - Collection endpoints #2671
Conversation
This includes a large `components.js` file taken from benonymity's initial efforts. The file still needs to be cleaned up but is provided as a starting point.
This file is provided to exercise endpoints and verify responses using `wiretap`. It is only for initial testing and will be replaced by chai or another automated test framework.
Beginning cleanup of examples for linter warnings
Performed some schema cleanup and fixed some OpenAPI spec linting issues. These schemas have not been validated, but the changes do make those source files shorter. This cleanup does not affect the existing endpoints in this PR.
After continuing work with the approach in this PR, what I have done so far is to put the schema definitions in the corresponding source files. I think there should instead be a single large comment at the top of each corresponding file.
After additional work and attempts, I've concluded this PR is not the best way to do things. Even if the method in #2803 is not the solution, the spec in this PR needs to be reworked a bit for 3.0 vs 3.1, and the file organization needs rethinking.
PR Overview
This PR is initial work to get an OpenAPI 3.1 spec added to JSDoc comments in the source code. This work is based on work initially done by @benonymity and information from @Sapd. The PR includes (most of) the schemas defined in the `server/objects/` files, based on the current Slate API documentation (which is out of date). I have not validated all of these schemas because I was focusing only on the ones required to get the collections endpoints working. The other schemas still need to be cleaned up (such as the Minified, Normal, and Extended responses for data types) and need better examples, but I included them because it is a lot easier to edit an existing schema than to write one from scratch (and benonymity had already made those schemas from the Slate API docs, so this should include all of the schemas he worked on).

As per Sapd's initial recommendation, I am using `swagger-jsdoc` to generate YAML files directly from the source code, done by running `node swagger.js` in the root directory of the repository. The resulting specification, `swagger-output.json`, is located in `build-docs/`. I also have created a very poorly written `api_runner.py` script that I am using to test API endpoints. It's pretty hacky and I can remove it from the PR, but I am including it for initial review to see if a cleaned-up version would be beneficial during this initial work. I will continue working through updates for each of the endpoints and schemas. If we want to do this a different way, that is okay with me; I'm just trying to help get the ball rolling.
Further discussion
To simplify support for multiple API versions, I would like to use a route-based versioning strategy. I think we should provide separate specification files for each version instead of putting everything in the same file with various versions.
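To make the route-based idea concrete, here is a minimal dependency-free sketch of dispatching by version prefix; the paths and handlers are hypothetical, and the real server would do this with Express routers mounted per prefix:

```javascript
// Route-based versioning sketch: each API version gets its own handler
// registrations (and, per the proposal above, its own spec file).
const routes = new Map() // e.g. "GET /api/v1/collections" -> handler

function register(method, version, path, handler) {
  routes.set(`${method} /api/${version}${path}`, handler)
}

function dispatch(method, url) {
  const handler = routes.get(`${method} ${url}`)
  return handler ? handler() : { status: 404 }
}

// Hypothetical example: v1 and v2 can return different shapes without
// touching each other's routes or spec files.
register('GET', 'v1', '/collections', () => ({ status: 200, body: { minified: true } }))
register('GET', 'v2', '/collections', () => ({ status: 200, body: { minified: false } }))

console.log(dispatch('GET', '/api/v1/collections').status) // 200
```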
At some point, we will need to address documentation translations, as all official documentation is currently English-only. I am not sure how best to approach this, but it looks like the easiest way will be using overlays, similar to how the server/client strings are managed.
We should implement automated testing for the API spec. Initially, it only needs to validate request/response formatting to ensure the data types match up and conform to the spec. This will be useful during the API rewrite and future database changes, as it will alert us to data type changes without our needing to constantly update tests for different internal data operations.
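The kind of type-conformance check described above can be sketched as a tiny hand-rolled validator. This is only a stand-in for a real spec-driven validator, and the schema and response below are illustrative, not the project's actual Collection shapes:

```javascript
// Compare a response body's field types against a flat schema
// (field name -> expected type). Returns a list of mismatches.
function checkShape(schema, body) {
  const errors = []
  for (const [field, expectedType] of Object.entries(schema)) {
    const actual = Array.isArray(body[field]) ? 'array' : typeof body[field]
    if (actual !== expectedType) {
      errors.push(`${field}: expected ${expectedType}, got ${actual}`)
    }
  }
  return errors
}

// Hypothetical slice of a Collection response schema.
const collectionSchema = { id: 'string', name: 'string', books: 'array' }

const response = { id: 'col_123', name: 'My Collection', books: [] }
console.log(checkShape(collectionSchema, response)) // []
```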
My current testing procedure
I have written a Python script which connects to a dev server through `wiretap` (https://pb33f.io/wiretap/), which validates requests and responses against an OpenAPI spec. I have also put wiretap between the dev client and server to alert me to endpoints that are not in the spec while navigating manually in the client (though right now there are many failures because I've been focusing on collections). This has been working well, and `wiretap` has a Docker image we could use as part of automated testing, but I am not sure how the e2e testing will work in GH Actions, so for now this automated testing would still need to happen in local dev environments.

The creator of wiretap has also made the `vacuum` tool to lint OpenAPI specs. This has helped me identify duplicate descriptions and other issues with the generated spec. Both wiretap and vacuum are in active development, so they definitely have some bugs and missing features, but they are quick to set up and work well enough for my process.

We can switch to something like https://www.npmjs.com/package/chai-openapi-response-validator to integrate with the existing chai test framework, but that means we would need to use OpenAPI 3.0 instead of 3.1, and I am still not sure how well that testing will work as a GH Action.
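If we do go the chai route, the integration would look roughly like this sketch based on that package's documented usage; the spec path and the `makeRequest` helper are assumptions, not existing project code:

```javascript
// Sketch only: assumes chai, a request helper, and the generated 3.0 spec.
const chai = require('chai')
// Newer versions of the package export the plugin under .default.
const chaiResponseValidator = require('chai-openapi-response-validator')
const { expect } = chai

// Point the validator at the generated spec (must be OpenAPI 3.0).
chai.use(chaiResponseValidator('./build-docs/swagger-output.json'))

describe('GET /api/collections', () => {
  it('satisfies the OpenAPI spec', async () => {
    const res = await makeRequest('GET', '/api/collections') // hypothetical helper
    expect(res.status).to.equal(200)
    // Fails the test if the response shape does not match the spec.
    expect(res).to.satisfyApiSpec
  })
})
```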
Next steps
The only decision that needs to be made before completing the initial OpenAPI spec is whether we want to switch to 3.0 or continue with 3.1.
The following blockers need to be discussed before continuing with a new API version:
Additional reading on spec organization: https://blog.liblab.com/why-your-open-api-spec-sucks/