expanded danish queries #349
Conversation
Thank you for the pull request! The Scribe team will do our best to address your contribution as soon as we can. The following is a checklist for maintainers to make sure this process goes as well as possible. Feel free to address the points below yourself in further commits if you realize that actions are needed :)

If you're not already a member of our public Matrix community, please consider joining! We'd suggest using Element as your Matrix client, and definitely join the General and Data rooms once you're in. Also consider joining our bi-weekly Saturday dev syncs. It'd be great to have you!

Maintainer checklist
This generally looks good, @Kehindeadebisi, but can we expand the forms we're getting back for adjectives? See wikidata.org/wiki/Lexeme:L43599 and maybe check a few others to see the forms we can get :)
Thanks for the work here, @Kehindeadebisi! 😊 One thing to note: when we're expanding queries, the goal is a combination of properties such that each form is returned uniquely. It can sometimes be a bit difficult, but it's important to combine the properties rather than just have one property per returned column, as that would mean we get repeat rows :)
Noted, thanks for the feedback!
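To illustrate the point above, here is a minimal sketch of the pattern being asked for: multiple `wikibase:grammaticalFeature` triples are attached to the *same* `?form` binding, so only forms carrying all of those features match, and each selected column yields one unique representation per lexeme instead of a cross-product of repeated rows. This is an illustrative example, not the query from the PR; the specific feature QIDs chosen (common gender, singular) are assumptions for demonstration.

```sparql
# Sketch: return Danish adjective lemmas plus one uniquely
# identified form per column by combining grammatical features.
SELECT DISTINCT
  ?lexeme
  ?adjective
  ?commonSingular

WHERE {
  ?lexeme dct:language wd:Q9035 ;               # Danish
    wikibase:lexicalCategory wd:Q34698 ;        # adjective
    wikibase:lemma ?adjective .

  OPTIONAL {
    ?lexeme ontolex:lexicalForm ?form .
    ?form ontolex:representation ?commonSingular ;
      # Both features on the same ?form: only forms that are
      # common gender AND singular match, so the column is unique.
      wikibase:grammaticalFeature wd:Q1305037 ;  # common gender (assumed QID)
      wikibase:grammaticalFeature wd:Q110786 .   # singular
  }
}
```

By contrast, putting each feature in its own `OPTIONAL` with its own `?form` variable would let every matching form bind to every column, multiplying the result rows.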
Contributor checklist
Description
This PR focuses on expanding the src/scribe_data/language_data_extraction/Danish files with as much data as possible from Wikidata, building on existing patterns used in other language data extractions.
Data types included are:
Adjectives
Adverbs
Emoji-keywords
I tested the queries in the Wikidata Query Service and ran the Python file, though I did not include the JSON output, as advised.
Related issue