A data pipeline to process real-time stock market data using Kafka, Spark, and Apache Iceberg, with dbt for transformations and Prefect for orchestration.
The idea is to build a real-time stock market data pipeline that streams live stock price data, processes it for analytics (e.g., calculating moving averages, identifying trends), and stores it for reporting and analysis. The project integrates Kafka, Spark, Apache Iceberg, Airbyte, dbt, Prefect, and Great Expectations.
- Data Ingestion (Kafka): Stream live stock market data.
- Data Processing (Spark): Real-time data processing (e.g., compute moving averages, aggregate data).
- Data Storage (Iceberg): Store the processed data in a data lake.
- ETL/ELT (Airbyte or Fivetran): Bring in external datasets (e.g., company info, historical stock data).
- Data Transformation (dbt): Transform raw data into analytical models.
- Orchestration (Prefect or Dagster): Manage and schedule data pipeline workflows (a Prefect sketch follows this list).
- Data Validation (Great Expectations): Ensure data quality at each stage (a validation sketch follows as well).
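First, orchestration: a minimal sketch of a Prefect flow (Prefect 2.x API) that chains the pipeline stages. The task bodies are placeholders and the function names are hypothetical, not this repo's actual entry points.

```python
from prefect import flow, task


@task(retries=2, retry_delay_seconds=30)
def run_spark_job() -> None:
    ...  # placeholder: submit the Spark job (e.g. via spark-submit)


@task
def run_dbt_models() -> None:
    ...  # placeholder: shell out to `dbt run`


@task
def validate_data() -> None:
    ...  # placeholder: run Great Expectations checks (see next sketch)


@flow(name="stock-market-pipeline")
def stock_pipeline() -> None:
    run_spark_job()
    run_dbt_models()
    validate_data()


if __name__ == "__main__":
    stock_pipeline()
```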
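Second, validation: a sketch using Great Expectations' classic pandas-dataset API (pre-1.0 releases); the column names and bounds are illustrative, not taken from this repo.

```python
import great_expectations as ge
import pandas as pd

# Illustrative batch; in the pipeline this would be a sample of processed records.
batch = ge.from_pandas(pd.DataFrame({
    "symbol": ["AAPL", "MSFT"],
    "price": [189.50, 412.10],
}))

# Prices must be positive and every record needs a symbol.
assert batch.expect_column_values_to_be_between("price", min_value=0).success
assert batch.expect_column_values_to_be_not_null("symbol").success
```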
To build and start all services, create the shared Docker network and bring up the stack:

```bash
docker network create stock-network
docker-compose up --build
```
To view the data being sent to Kafka:

```bash
docker exec -it kafka-producer-1 /bin/bash
# then, inside the container:
python /app/producer.py
```
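For reference, a minimal sketch of what `/app/producer.py` could look like, assuming the `kafka-python` client, a broker reachable at `kafka:9092`, and a `stock-prices` topic (all three are assumptions, not confirmed by this repo):

```python
import json
import random
import time

from kafka import KafkaProducer

# Assumed broker address for this sketch; the real producer.py may differ.
producer = KafkaProducer(
    bootstrap_servers="kafka:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

while True:
    # Simulated tick; a real producer would poll a market-data API instead.
    tick = {
        "symbol": random.choice(["AAPL", "MSFT", "GOOG"]),
        "price": round(random.uniform(100, 500), 2),
        "ts": time.time(),
    }
    producer.send("stock-prices", value=tick)
    time.sleep(1)
```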
To view the data being received from Kafka:

```bash
docker exec -it kafka-consumer-1 /bin/bash
# then, inside the container:
python /app/consumer.py
```
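Likewise, a hedged sketch of `/app/consumer.py`, matching the assumed topic and broker from the producer sketch above:

```python
import json

from kafka import KafkaConsumer

# Topic and broker are assumptions; they must match what producer.py actually uses.
consumer = KafkaConsumer(
    "stock-prices",
    bootstrap_servers="kafka:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    print(message.value)  # one decoded tick per Kafka record
```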
Check the Spark logs to confirm that the streaming data is being processed correctly:

```bash
docker logs spark-processor-1
```
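The processing job itself could resemble the following PySpark Structured Streaming sketch: it reads the assumed `stock-prices` topic, computes a sliding-window moving average per symbol, and appends the results to an Iceberg table. The catalog/table name `local.db.stock_moving_avg` and the checkpoint path are illustrative, and the Kafka and Iceberg Spark runtime jars must be on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("stock-processor").getOrCreate()

# Schema matching the producer sketch: ts is epoch seconds as a double.
schema = StructType([
    StructField("symbol", StringType()),
    StructField("price", DoubleType()),
    StructField("ts", DoubleType()),
])

ticks = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")  # assumed broker address
    .option("subscribe", "stock-prices")              # assumed topic name
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("tick"))
    .select("tick.*")
    .withColumn("ts", F.col("ts").cast("timestamp"))
)

# 5-minute moving average price per symbol, sliding every minute.
moving_avg = (
    ticks.withWatermark("ts", "10 minutes")
    .groupBy(F.window("ts", "5 minutes", "1 minute"), "symbol")
    .agg(F.avg("price").alias("avg_price"))
)

# Append results to an Iceberg table; catalog and checkpoint path are illustrative.
query = (
    moving_avg.writeStream.format("iceberg")
    .outputMode("append")
    .option("checkpointLocation", "/tmp/checkpoints/stock")
    .toTable("local.db.stock_moving_avg")
)
query.awaitTermination()
```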