Posted: 8th May 2026
Senior Data Engineer
Amplifi is a full-service data and AI consultancy with deep expertise across multiple data disciplines throughout North America and EMEA. We work as one team to solve our clients’ most complex data challenges. While others talk about the potential of data and AI, we help organizations realize it. We partner with some of the world’s largest brands to evolve, adapt, and modernize how they operate through practical, results-driven solutions. At Amplifi, we take pride in delivering measurable value and tackling both technical and business challenges head-on. We’re looking for collaborative, growth-minded individuals who are eager to learn, grow, and mentor others as part of the Amplifi team.
Position Summary
The Senior Data Engineer will be responsible for designing, building, and maintaining scalable, secure data pipelines and platforms that drive analytics and support operational data products. Beyond hands-on technical execution, this role carries responsibility for guiding architectural decisions, contributing to new client engagements, and mentoring others. The ideal candidate brings a strong foundation in SQL, Python, dimensional modeling, and modern data warehousing, with a deep understanding of Snowflake, Databricks, or Microsoft Fabric, and a track record of delivering solutions in complex, multi-stakeholder environments.
What You Will Get To Do
- Lead the architecture, design, and optimization of enterprise-grade ETL/ELT pipelines that ingest, transform, and expose data across complex, multi-source environments.
- Define and maintain data models and warehouse layers, establishing patterns and standards that enable high-performance analytics and reporting at scale.
- Serve as a technical lead and trusted advisor to analytics, product, and engineering teams — translating ambiguous business needs into well-structured, modern data solutions.
- Write clean, efficient, and maintainable code in SQL and Python to support automation, data quality, and transformation logic.
- Own deployment and orchestration workflows using Azure Data Factory, dbt, or similar tools; drive adoption of best practices across pipelines.
- Architect and build solutions across multi-cloud environments (Azure preferred; AWS and GCP optional), integrating diverse data sources and cloud-native components.
- Lead CI/CD practices and data pipeline observability strategies, including monitoring, alerting, and incident response frameworks.
- Ensure data governance, security, and compliance in all engineering activities.
- Collaborate with data scientists and ML engineers on advanced analytics workflows within modern cloud platforms, and advise on data infrastructure requirements for AI/ML use cases.
- Provide mentorship to peers and technical guidance to non-technical stakeholders by creating visual diagrams, facilitating training sessions, and leading code reviews.
- Contribute to trust-building activities during new engagements by clarifying client expectations, providing realistic estimates, identifying hidden complexities, and building visual technology blueprints and tangible roadmaps.
What You Bring to the Team
- 7+ years of experience in a data engineering role, with at least 2 years in a senior or lead capacity preferred.
- Expert-level proficiency in SQL and Python for complex data transformation, automation, and pipeline development.
- Deep, hands-on experience with Snowflake, Databricks, and/or Microsoft Azure SQL, including performance tuning and data modeling.
- Strong expertise in cloud data platforms, preferably Microsoft Azure, with the ability to make and justify architectural decisions.
- Demonstrated experience designing and implementing modern data modeling patterns and warehouse optimization at enterprise scale.
- Hands-on experience with Azure Data Factory and/or dbt, beyond surface-level familiarity.
- Exposure to Microsoft Fabric components like OneLake, Pipelines, or Direct Lake.
- Practical experience with AWS, GCP, or hybrid cloud environments.
- Understanding of or curiosity about Dataiku for collaborating on AI and Machine Learning (ML) workflows and advanced analytics solutions.
- Ability to manage competing priorities and communicate technical decisions clearly to both technical and non-technical stakeholders.
- Ability to work independently and as part of a team in a hybrid/remote environment.
Location
Wisconsin is preferred, but we are open to other locations.
Travel
Ability to travel up to 10% of the time.
Benefits & Compensation
Amplifi offers excellent compensation and benefits including, but not limited to, health and dental insurance, a 401(k) program, an employee assistance program, short- and long-term disability, life insurance, accidental death and dismemberment (AD&D) coverage, a PTO program, flexible work schedules, and paid holidays.
Equal Opportunity Employer
Amplifi is proud to be an Equal Opportunity Employer. We consider all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or other protected characteristics.