The GitHub dlt-dbt package offers data models to help you transform and analyze GitHub data. It's designed to integrate seamlessly with the dlt GitHub pipeline, which extracts and loads GitHub data into your data warehouse.
This package is ideal for dbt users who want to integrate GitHub data into their analytics workflows without building models from scratch.
- Staging Models: Clean and prepare raw GitHub data for downstream analysis.
- Mart Models: Pre-built dimension and fact tables for key GitHub entities such as events and reactions.
- Incremental Loading: Supports incremental data processing to optimize performance.
- Easy Integration: Designed to work out of the box with the dlt GitHub pipeline.
- dbt Core installed in your environment (see the install sketch after this list).
- Access to a supported data warehouse: BigQuery, Snowflake, Redshift, Athena, or PostgreSQL.
- The dlt GitHub pipeline set up and running.
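If you still need dbt Core, a minimal install looks like the following. BigQuery is assumed here as an example; swap in the adapter for your warehouse (dbt-snowflake, dbt-redshift, dbt-athena, dbt-postgres):

```sh
# Install dbt Core plus the adapter that matches your destination.
pip install dbt-core dbt-bigquery
```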
- Install dlt: `pip install dlt`
- Configure the Pipeline: Follow the dlt GitHub pipeline documentation to set up your pipeline. Ensure your GitHub API key and destination credentials are configured (see the configuration sketch after this list).
- Run the Pipeline: Extract and load data from GitHub into your data warehouse using the GitHub events pipeline. The GitHub pipeline integrates multiple sources; here, we build a model for the `github_events` source (see the run sketch after this list). For more details on sources, refer to the documentation.
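As a rough sketch of the configuration step: dlt reads credentials from `.dlt/secrets.toml`. The section names and keys below follow dlt's general conventions and are assumptions, not the package's verbatim config; adjust them to your source and destination:

```toml
# .dlt/secrets.toml -- illustrative layout, adjust keys to your setup
[sources.github]
access_token = "ghp_..."  # your GitHub API token

# BigQuery is assumed as the destination here; use your own warehouse's keys
[destination.bigquery.credentials]
project_id = "my-project"
private_key = "..."
client_email = "loader@my-project.iam.gserviceaccount.com"
```

And a minimal run sketch, assuming the verified GitHub source exposes a `github_repo_events` resource (check the pipeline documentation for the exact module and resource names):

```python
import dlt

# Assumption: the dlt verified GitHub source was scaffolded into this
# project (e.g. via `dlt init github bigquery`) and exposes this resource.
from github import github_repo_events

pipeline = dlt.pipeline(
    pipeline_name="github_events",
    destination="bigquery",          # example destination
    dataset_name="github_events_data",
)

# owner/repo below are placeholders; load the events of your own repository
load_info = pipeline.run(github_repo_events("dlt-hub", "dlt"))
print(load_info)
```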
- Install the GitHub dbt package into your dbt environment (see the packages.yml sketch after this list).
- Configure the connection to your data warehouse in your dbt profile, and make sure your 'dbt_project.yml' points at that profile.
- Ensure the data from your dlt GitHub events pipeline is available in your warehouse.
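A minimal installation sketch. The package location below is a placeholder; point it at wherever your generated package actually lives (a local path or a git URL):

```yaml
# packages.yml -- the path below is a placeholder, not the real package location
packages:
  - local: ../dbt_github_events
```

Then pull the package into your project:

```sh
dbt deps
```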
This is how the dbt package is organized:
For GitHub events:
dbt_github_events/
├── analysis/
├── macros/
├── models/
│ ├── marts/
│ │ ├── dim__dlt_loads.sql
│ │ ├── dim_create_event.sql
│ │ ├── dim_delete_event.sql
│ │ ├── dim_fork_event.sql
│ │ ├── dim_issue_comment_event.sql
│ │ ├── dim_issues_event.sql
│ │ ├── dim_pull_request_event.sql
│ │ ├── dim_pull_request_review_comment_event.sql
│ │ ├── dim_pull_request_review_event.sql
│ │ ├── dim_push_event__payload__commits.sql
│ │ ├── dim_push_event.sql
│ │ ├── dim_watch_event.sql
│ ├── staging/
│ │ ├── stg__dlt_loads.sql
│ │ ├── stg_create_event.sql
│ │ ├── stg_delete_event.sql
│ │ ├── stg_fork_event.sql
│ │ ├── stg_issue_comment_event.sql
│ │ ├── stg_issues_event.sql
│ │ ├── stg_pull_request_event.sql
│ │ ├── stg_pull_request_review_comment_event.sql
│ │ ├── stg_pull_request_review_event.sql
│ │ ├── stg_push_event__payload__commits.sql
│ │ ├── stg_push_event.sql
│ │ ├── stg_watch_event.sql
│ ├── dlt_active_load_ids.sql # Used for incremental processing of data
│ ├── dlt_processed_load.sql # Used for incremental processing of data
├── tests/
├── dbt_project.yml
└── requirements.txt
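The two helper models at the root of `models/` drive the incremental processing: conceptually, they track which dlt `load_id`s have already been transformed, so each dbt run only picks up loads that arrived since the previous run. A simplified sketch of the idea, assuming the standard `_dlt_loads` table that dlt maintains and a dbt source named after your dataset (the generated SQL differs in detail):

```sql
-- Conceptual sketch of dlt_active_load_ids.sql, not the generated code.
-- Pick load ids that completed successfully (status = 0 in _dlt_loads)
-- and were not handled by a previous run.
select load_id
from {{ source('github_events', '_dlt_loads') }}
where status = 0
  and load_id not in (select load_id from {{ ref('dlt_processed_load') }})
```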
Execute the dbt models to transform the raw GitHub data into useful tables:
dbt build
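During development you can also build just a slice of the package using dbt's node selection; for example, one staging model from the tree above plus everything downstream of it:

```sh
# The trailing + is dbt's graph operator for downstream dependents.
dbt build --select stg_push_event+
```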
While this package provides a solid foundation, you can customize it to suit your specific needs:
- Modify the models to align with your business logic (see the sketch after this list).
- Add relationships between tables by modifying your dlt pipeline schema.
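For example, a custom mart model can build on the generated staging layer. The file name and derived column below are hypothetical, and `date_trunc` may need adjusting to your warehouse's SQL dialect:

```sql
-- models/marts/dim_push_event_custom.sql (hypothetical example)
-- Extend the generated staging model with your own business logic.
select
    *,
    date_trunc('day', created_at) as event_date  -- assumes a created_at column
from {{ ref('stg_push_event') }}
```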
The dimensional modelling part of the package was created with a declarative code generator and suffers from limitations inherent to modelling raw data directly. We advise you to review the raw data tables and adjust the modelled layer as needed.
In short, this package gives you a basic data-model template: a starting point you can extend and modify as your requirements evolve.
The schema of the GitHub events data modelled above using the dlt-dbt-generator:
Here's the link to the DB diagram: link
⚠️ Note: This is a starting template for your data model, not the final product. It is advised to customize the data model to fit your needs.
This package was created using the dlt-dbt-generator by dlt-plus. For more information about dlt-plus, refer to the dlt-plus documentation. To learn more about the dlt-dbt-generator, consult the dlt-dbt-generator documentation.