About Compass Analytics
At Compass Analytics, learning is a lifelong adventure, and your career here will be no different. Our team prides itself on delighting our customers, testing limits, and pushing one another to be the best versions of ourselves. We therefore look first and foremost for candidates whose values and potential fit that culture.
Based in Montréal, Québec, Compass Analytics is a team of talented data professionals who support our clients' data transformation initiatives. Our clients lean on us to turn their data ideas into reality.
About the Role
Compass Analytics is excited to recruit a talented and driven Data Engineer to join our growing Analytics Delivery team. The ideal candidate will have a passion for building data solutions, including developing pipelines, implementing DataOps best practices, and collaborating with stakeholders.
Key Responsibilities
Under the guidance of our Leadership Team & Project Managers, the Data Engineer will be responsible for the following tasks:
- Collaborate with stakeholders to design and implement scalable and efficient data architecture, including table structures, lineage, and data flow across layers.
- Conduct data audits on source systems to document and address potential quality issues.
- Build and maintain data ingestion and transformation pipelines to ensure reliable and timely data delivery.
- Develop and implement production-ready pipelines with a focus on DataOps best practices for monitoring, orchestration, and automation.
- Establish and implement data governance standards, ensuring data availability, usability, integrity, and security.
- Create and maintain comprehensive technical documentation for all completed work.
- Support AI and Machine Learning projects as well as other projects within the scope of the assigned mandate (e.g., dashboarding or automation).
Technical Capabilities
- Programming Languages: Proficiency in SQL and Python is essential; experience with Scala, Spark, or other languages and frameworks is a plus.
- Platforms: Knowledge of platforms such as Databricks, Snowflake, AWS, Azure, or Google Cloud.
- DataOps: Familiarity with CI/CD pipelines and tools such as GitHub for version control and deployment.
- Collaboration Tools: Experience with JIRA and Confluence is a plus.
Minimum Requirements
- Education: Bachelor’s degree in Computer Science, Software Engineering, Data Engineering, or a related field.
Preferred Qualifications
- Problem Solving: Strong analytical and problem-solving skills with the ability to troubleshoot complex data issues.
- Organized & Rigorous Approach: Strong organizational skills and attention to detail, ensuring rigor in development and accuracy in outputs.
- Communication: Excellent verbal and written communication skills, with the ability to collaborate effectively across teams.
How to Apply
Step 1: Application Portal
If you’re excited about leveraging your skills to create impactful data solutions, we’d love to hear from you!
Please submit your resume, cover letter, and any portfolio assets through our website’s careers section.
Step 2: Leadership Interviews – 2 Rounds
Selected candidates will be invited to interview (read: have a conversation) with William Chan and Scott Carr.
- Duration: 30–60 minutes
- Content: You will present your experience, along with previous work deliverables and projects.
- Purpose: An opportunity for us to get to know each other better!
Step 2a: Technical Interview – If Necessary
If required, selected candidates will be invited for a technical interview.
- Timeline: Candidates will have 7 days to complete the interview.
Step 3: Formal Offer
The selected candidate will receive a verbal offer, followed by a written offer sent via email.