At Structured, we’re building a semantic layer designed to eliminate metric inconsistency across organizations. Data fragmentation is a core issue that leads to broken decision-making and lost time. Our solution integrates directly with your cloud data warehouse (Snowflake, BigQuery, etc.), enforcing a single source of truth for business-critical metrics like CAC, MRR, and LTV. By defining metrics centrally, we ensure consistency across all analytics tools, dashboards, and teams.
My focus is on data modeling and developing robust frameworks that allow businesses to query and aggregate metrics without worrying about discrepancies or manual reconciliation. This involves creating a semantic layer on top of existing data models, abstracting complexity, and providing a well-defined, queryable metrics repository.
At Meta, I led the development of a knowledge base system used to improve the effectiveness of machine learning classifiers in Trust and Safety (Integrity). My work was primarily focused on building scalable systems for weak labeling, leveraging subject matter expertise to bootstrap new classifiers for harmful content detection.
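The general shape of weak labeling can be sketched as follows. This is an illustrative toy, not Meta's internal system: domain experts encode their knowledge as small labeling functions, and their (noisy, often abstaining) votes are combined, here by simple majority, to bootstrap training labels for a new classifier.

```python
from collections import Counter

ABSTAIN, BENIGN, HARMFUL = -1, 0, 1

# Toy labeling functions standing in for subject-matter-expert rules.
def lf_keyword(post: str) -> int:
    return HARMFUL if "buy followers" in post.lower() else ABSTAIN

def lf_short(post: str) -> int:
    return BENIGN if len(post) < 20 else ABSTAIN

def lf_shouting(post: str) -> int:
    return HARMFUL if post.isupper() and len(post) > 10 else ABSTAIN

LABELING_FUNCTIONS = [lf_keyword, lf_short, lf_shouting]

def weak_label(post: str) -> int:
    """Majority vote over the labeling functions that did not abstain."""
    votes = [lf(post) for lf in LABELING_FUNCTIONS]
    votes = [v for v in votes if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return Counter(votes).most_common(1)[0][0]
```

Production systems replace the majority vote with a learned label model that weighs functions by estimated accuracy, but the pipeline shape (expert rules in, noisy labels out, classifier trained downstream) is the same.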
Before Meta, I interned at Google and Microsoft, working on distributed systems such as Borg and Hyper-V. These experiences honed my skills in scaling infrastructure and building reliable, mission-critical software.
The problem of data integrity and semantic consistency is what drives my current work at Structured.
- Languages: Python, SQL, JavaScript
- Data Tools: dbt, Snowflake, BigQuery, Postgres
- Focus Areas: Data Modeling, Metrics Consistency, Distributed Systems, Semantic Layers
I’m currently deep in the architecture of semantic layers that support flexible, reliable metric computation. By abstracting the underlying data models, we make querying business metrics intuitive and consistent across analytics platforms.
Interested in learning more about Structured or collaborating? Reach out:
Let’s discuss how we can solve data consistency challenges together.