
[ISSUE 2716] Create user tables for basic login.gov #2760

Draft · wants to merge 18 commits into base: main
Conversation

babebe (Collaborator) commented Nov 7, 2024

## Summary

Fixes #2716

### Time to review: __10 mins__

## Changes proposed

- 3 user tables
- Migration script
- Updated factories to create new users

## Context for reviewers

The user tables will be used for OAuth2.

## Additional information

> Screenshots, GIF demos, code examples or output to help show the changes working as expected.

babebe changed the title from "add user tables" to "[ISSUE 2716] Create user tables for basic login.gov" Nov 7, 2024
chouinar and others added 5 commits November 7, 2024 16:03
…ults (#2730)

## Summary
Fixes #2729

### Time to review: __3 mins__

## Changes proposed
Set `track_total_hits` to True when calling OpenSearch

## Context for reviewers
While the documentation notes a possible performance cost for this field, since every matching record must be counted, we already request counts for the various facets anyway, so I expect this won't matter at all.

## Additional information
https://opensearch.org/docs/latest/api-reference/search/

I loaded ~16k records into my local search index. Querying it with no
filters returns this pagination info now:
```json
 {
    "order_by": "opportunity_id",
    "page_offset": 1,
    "page_size": 25,
    "sort_direction": "ascending",
    "total_pages": 676,
    "total_records": 16884
  }
```
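For context, `track_total_hits` controls whether OpenSearch counts every matching document rather than stopping at its default cap of 10,000, which is what makes an exact `total_records`/`total_pages` calculation possible. A minimal sketch of the request body and the pagination math (the field names follow the pagination response above; the `match_all` query is illustrative):

```python
import math


def build_search_body(page_offset: int, page_size: int) -> dict:
    """Build an OpenSearch request body that asks for an exact hit count."""
    return {
        "query": {"match_all": {}},
        # Without this, OpenSearch stops counting at 10,000 hits and the
        # reported total is a lower bound, not an exact count.
        "track_total_hits": True,
        "from": (page_offset - 1) * page_size,
        "size": page_size,
    }


def total_pages(total_records: int, page_size: int) -> int:
    """Number of pages needed to cover all records."""
    return math.ceil(total_records / page_size)
```

With the ~16k local records above, `total_pages(16884, 25)` gives 676, matching the pagination info shown.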
## Context

This is currently failing a lot of CI builds
## Summary
Fixes #2665 

### Time to review: __1 min__

## Changes proposed
> What was added, updated, or removed in this PR.

Added `gh-transform-and-load` command to existing `make gh-data-export`
command. I'm not sure if this is sufficient or correct, but I'm taking a
guess based on what I see in
#2546 and
#2506.

## Context for reviewers
> Testing instructions, background context, more in-depth details of the
implementation, and anything else you'd like to call out or ask
reviewers. Explain how the changes were verified.

In the analytics work stream, we have a new CLI command `make
gh-transform-and-load` for transforming and loading (some) GitHub data.
Per issue #2665, that command should be run daily, after the existing
`gh-data-export` command which exports data from Github.

I see that `scheduled_jobs.tf` seems to be the mechanism by which `make
gh-data-export` runs daily. In this PR I'm taking an educated guess and
attempting to add `gh-transform-and-load` to the existing job, and
requesting feedback from @coilysiren as to whether this is the correct
approach.

## Additional information

Co-authored-by: kai [they] <[email protected]>
## Summary
Fixes #2665 

### Time to review: __1 min__

## Changes proposed
Added scheduled job to run `make init-db` 

## Context for reviewers

The GitHub data export, transform, and load job (see
#2759) depends on a
certain schema existing in Postgres. This PR creates a job to ensure the
schema exists.

## Additional information
### Time to review: __1 min__

## Context for reviewers

Platform's assertion is this: whenever a deploy fails for any reason, the deploy is canceled, which leaves the other 3 jobs locked. Those 3 jobs remain locked indefinitely. On the next deploy, every job but the first is locked: the 3 previously locked jobs fail immediately, which causes the first job to be canceled, and thus all 4 jobs end up locked. It's an avalanche effect: once 1 deploy fails, every deploy from that point onward fails.
babebe requested a review from chouinar November 8, 2024 15:56
Comment on lines +37 to +39:

```python
first_name: Mapped[str]

last_name: Mapped[str]
```
Collaborator

NOTE: regardless of anything else, we should hold on merging this until I've gotten more clarification. We might not have first/last name as those require ID proofing and I don't know if we intend for users to be ID proofed 100% of the time.

Collaborator Author


Converted this PR to a Draft PR so we don't accidentally merge

babebe marked this pull request as draft November 8, 2024 17:03
babebe requested a review from chouinar November 8, 2024 17:05