For LAR and MLAR, we have a node that validates the MLAR and LAR data by checking that their row counts match.
With the current cron job and job templates, jobs are generated with the `--to-outputs` kedro run parameter, so this validation node never runs for the LAR and MLAR datasets.
If we switch the job and cronjob templates to the `--to-nodes` kedro run parameter, we can make the final validation node the target. Both the MLAR and LAR files would then be generated in the same kedro run, which will take longer, but lets us validate the counts at the end. Otherwise, we should consider removing this node, since it is never exercised.
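For concreteness, the change in the templated run command might look like this (the exact dataset/node names and the `{year}` suffix are illustrative; our templates may differ):

```shell
# Current template: run only up to the named output dataset, so the
# validation node (which produces no downstream output) is skipped.
kedro run --to-outputs=modified_lar_flat_file_{year}

# Proposed template: run up to the validation node itself, which pulls
# in both the LAR and MLAR file nodes as upstream dependencies.
kedro run --to-nodes=validate_lar_and_mlar_row_counts_{year}
```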
We can use tags to group these nodes together; tags let us organize nodes by their business logic.
For example, we could add a tag named `public_modified_lar_flat_file_{year}` to the create_mlar_flat_file and validate_lar_and_mlar_row_counts nodes, and change our job and cron templates to pass `--tags=public_modified_lar_flat_file_{year}`.
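As a sketch of how tag-based selection would behave, here is a minimal stdlib-only stand-in for kedro's node/tag model (the real API is `kedro.pipeline.node(func, inputs, outputs, tags=...)`; the node names follow the comment above, and the year value is a hypothetical example):

```python
from dataclasses import dataclass, field

# Minimal stand-in for a kedro node; in kedro, tags are attached via
# node(func, inputs, outputs, tags=...).
@dataclass
class Node:
    name: str
    tags: set = field(default_factory=set)

def filter_by_tags(nodes, *tags):
    """Keep nodes carrying any of the given tags, mirroring `kedro run --tags`."""
    wanted = set(tags)
    return [n for n in nodes if n.tags & wanted]

year = 2024  # hypothetical year parameter
tag = f"public_modified_lar_flat_file_{year}"

pipeline = [
    Node("create_mlar_flat_file", {tag}),
    Node("validate_lar_and_mlar_row_counts", {tag}),
    Node("some_unrelated_node"),  # untagged, so excluded from the run
]

selected = [n.name for n in filter_by_tags(pipeline, tag)]
print(selected)  # both tagged nodes, in pipeline order
```

Because `--tags` selects every node carrying the tag, the MLAR file generation and the row-count validation would run in the same job without needing to name an output dataset.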