You always need at least two things for this:
- a UUID v4, which can be generated with
  `python -c 'print(__import__("uuid").uuid4())'`
  or with a tool such as uuidgenerator
- a name
Note that the scrapers/tide-provider PRs MUST be merged after the k8s-config/api PRs: the API needs to know about the lakes first, and scrapers depends on k8s-config.
At least one of scrapers or tide-provider should be implemented for the new lake, since a lake without any features doesn't make sense (no client will display it).
The api needs to know three things about the lake:
- UUID (see above)
- name (see above)
- features

Features are currently limited to:
- temperatures (scrapers)
- tides (tide-provider)
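As an illustration, the information the api tracks per lake could be sketched like this. The field names here are hypothetical, not the actual api schema; check the api repo and its migrations for the real structure.

```python
import uuid

# Hypothetical sketch of what the api needs to know about a lake.
# Field names are illustrative only; the real schema lives in the api repo.
lake = {
    "id": str(uuid.uuid4()),       # UUID v4, shared across all repos
    "name": "Example Lake",        # display name
    "features": ["temperature"],   # currently: "temperature" and/or "tides"
}

# A lake with no features won't be displayed by any client.
assert len(lake["features"]) >= 1
print(lake["name"], lake["features"])
```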
The example PR only contains info about temperature and /supportes_booking (deprecated); the initial tides migration can be found here: V8__EnabledTides.sql
scrapers uses k8s-config for the lake UUIDs
We're using scrapy here to scrape data from several different sources; check the current implementations to see whether the website you want to pull data from has already been covered.
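The real scrapers are scrapy spiders; as a rough, self-contained sketch of the core extraction step only, here is how a temperature might be pulled out of a page with the standard library. The HTML fragment, selector, and function name are all made up for illustration.

```python
import re

# Made-up HTML fragment standing in for a scraped page.
SAMPLE_HTML = '<div class="water-temp">Wassertemperatur: 21,5 °C</div>'

def extract_temperature(html: str) -> float:
    """Pull a temperature in °C out of the page.

    The comma is handled as the German decimal separator.
    """
    match = re.search(r"(\d+(?:,\d+)?)\s*°C", html)
    if match is None:
        raise ValueError("no temperature found")
    return float(match.group(1).replace(",", "."))

print(extract_temperature(SAMPLE_HTML))  # 21.5
```

In the actual repo, scrapy handles fetching, scheduling, and item pipelines; only the parsing logic is lake-specific.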
All lakes for this provider have been present from the start, so there is no example PR yet.
Currently we're using yearly data from the Federal Maritime and Hydrographic Agency of Germany (Bundesamt für Seeschifffahrt und Hydrographie), so coverage is limited to Germany.
This import is currently executed manually once a year.
If you want to provide tidal data in a different way, please open an issue about this in the tide-provider repo.