Investigate more scalable ways of pulling data for the site #378
I played around with the graphql explorer.

```graphql
{
  r1: repository(owner: "ChariotEngine", name: "Chariot") {
    ...repoFields
  }
  r2: repository(owner: "duysqubix", name: "MuOxi") {
    ...repoFields
  }
}

fragment repoFields on Repository {
  url
  homepageUrl
  description
}
```
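For reference, here's a minimal sketch of how a standalone script could send a batched query like this to GitHub's GraphQL endpoint. The `GITHUB_TOKEN` env var, the `site-data-fetcher` user-agent string, and the use of `reqwest`/`serde_json` are my assumptions, not anything the site currently does:

```rust
// Sketch: POST an aliased/fragment query to GitHub's GraphQL API.
// Assumes a personal access token in GITHUB_TOKEN and the reqwest crate
// with the "blocking" and "json" features enabled.
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let token = std::env::var("GITHUB_TOKEN")?;
    let query = r#"
    {
      r1: repository(owner: "ChariotEngine", name: "Chariot") { ...repoFields }
      r2: repository(owner: "duysqubix", name: "MuOxi") { ...repoFields }
    }
    fragment repoFields on Repository { url homepageUrl description }
    "#;

    let client = reqwest::blocking::Client::new();
    let resp: serde_json::Value = client
        .post("https://api.github.com/graphql")
        .bearer_auth(token)
        // GitHub's API rejects requests without a User-Agent header.
        .header("User-Agent", "site-data-fetcher")
        .json(&json!({ "query": query }))
        .send()?
        .error_for_status()?
        .json()?;

    // Each alias (r1, r2, ...) comes back as a key under "data".
    println!("{}", serde_json::to_string_pretty(&resp["data"])?);
    Ok(())
}
```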
AFAIK the index only contains the versions, name, deps, features, and yanked flag.
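For concreteness, here is roughly what deserializing one line of an index file could look like. This is only a sketch based on the fields named above; the exact schema, the `IndexEntry` struct, and the field types are assumptions to check against the registry's index-format docs:

```rust
// Sketch: each line of a crates.io index file is one JSON object per version.
use std::collections::BTreeMap;

use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct IndexEntry {
    name: String,
    vers: String,
    deps: Vec<serde_json::Value>, // dependency objects, left untyped in this sketch
    features: BTreeMap<String, Vec<String>>,
    yanked: bool,
}

fn parse_index_file(contents: &str) -> Vec<IndexEntry> {
    contents
        .lines()
        .filter_map(|line| serde_json::from_str(line).ok())
        .collect()
}
```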
I wrote a script to combine the data from crates.io and GitHub's GraphQL API into a single CSV file.
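A rough outline of what such a script might do once both sources are fetched. The `CrateRow` struct, its field choices, and the use of the `csv` crate are illustrative assumptions, not the script that was actually written:

```rust
// Sketch: merge per-crate metadata from the two sources into one CSV row each.
use serde::Serialize;

#[derive(Serialize)]
struct CrateRow {
    name: String,
    latest_version: String,   // e.g. from the crates.io index
    repository_url: String,   // from the GraphQL `url` field
    homepage_url: Option<String>,
    description: Option<String>,
}

fn write_csv(rows: &[CrateRow], path: &str) -> Result<(), Box<dyn std::error::Error>> {
    let mut writer = csv::Writer::from_path(path)?;
    for row in rows {
        writer.serialize(row)?;
    }
    writer.flush()?;
    Ok(())
}
```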
Currently all of the GitHub and Crates.io data used on the site is retrieved via a clever template macro. This is simple and keeps the build self-contained, but has a few big issues:
I'm wondering if we might be able to find a better way of grabbing this data (e.g. via an external script or a Rust program). This could also allow us to store the site's data in a nicer format, rather than these massive, manually ordered `data.toml` files. If we did this, there are more efficient options we could use for pulling the API data:
This might be overengineering things, but it's worth thinking about, I think!