A Virginia-based services organization is seeking a Data Engineer to focus on building a modern data platform using Microsoft Fabric.
About the Opportunity:
- Schedule: Full-time
- Hours: Standard business
- Setting: Remote
Responsibilities:
- Develop, optimize, and maintain Fabric Data Pipelines for ingestion from on-prem and cloud sources
- Build PySpark notebooks to implement scalable transformations, merges/upserts, and medallion-lakehouse patterns
- Contribute to the design and evolution of our Fabric-based platform
- Define standards and frameworks for schema management, versioning, governance, and data quality
- Build and maintain deployment pipelines for Fabric artifacts (notebooks, pipelines, lakehouses)
- Establish environment-aware configuration and promotion workflows across Dev/QA/Prod
- Work as a peer leader alongside other senior engineers and architects to shape platform strategy
- Perform other duties as needed
Qualifications:
- 7+ years in data engineering, with proven impact in enterprise environments
- Strong hands-on expertise in Microsoft Fabric (Pipelines, Lakehouse, Notebooks, OneLake)
- Advanced PySpark skills for data processing at scale
- Expertise in Delta Lake, medallion architecture, schema evolution, and data modeling
- Experience with CI/CD for data engineering, including Fabric asset deployments
- Strong SQL skills and experience with SQL Server/Azure SQL
Desired Skills:
- Experience helping launch or scale Microsoft Fabric adoption
- Familiarity with data governance, lineage, and compliance frameworks
- Knowledge of real-time/streaming data patterns
- Exposure to Salesforce, CRM, or DMS integrations
- In-depth knowledge of the Azure ecosystem (API Management, Azure Functions, etc.)