Job Description
Servus is growing! We are currently looking for a Data Engineering Analyst II in our Data and Infrastructure Department.
Servus Credit Union is Alberta’s largest member-owned credit union, known for building strong, resilient communities by helping our members feel good about their money. One of Canada’s Best Managed Companies for 20 consecutive years and ranked as one of the top banks in Canada on Forbes World’s Best Banks list for two years in a row, we are a team of smart, gutsy, and driven individuals.
Position Overview:
Reporting to the Data Integration & Transformation Lead, the Data Engineering Analyst II makes substantial contributions to improving the maturity of the organization’s data structures and their movement, scalability, and availability across the enterprise.
The Data Engineering Analyst II plays a key role in collaborating with the Data Technology, Governance, and Data Science teams to modernize our data architecture. This position involves designing and building data pipelines, drawing on the analyst’s expertise in data management and in cost-optimizing data storage. The Data Engineering Analyst II supports business stakeholders, data analysts, cloud infrastructure teams, and data scientists on various data initiatives. The individual in this role should be self-directed and adept at addressing the data requirements of multiple teams, systems, and products. This position is crucial in advancing our data platforms to enable the next generation of products and services for our members.
Key Accountabilities:
- Develops and maintains scalable data pipelines and builds out new API integrations to support continuing increases in data volume and complexity.
- Collaborates with stakeholders, including data analytics, accounting, operations, data science, and IT teams, to address data-related technical issues.
- Utilizes programming skills, ETL tools, and data virtualization solutions to develop and manage integration tools, APIs, pipelines, and data platform ecosystems.
- Designs and builds infrastructure for optimal extraction, transformation, and loading of data from various sources, including structured, unstructured, and big data.
- Creates data tools for analytics and data science teams to support product optimization and strategic objectives.
- Prepares data for predictive and prescriptive modeling and assists in deploying analytics programs, machine learning, and predictive models.
- Builds analytics tools to provide actionable insights into customer acquisition, operational efficiency, and key business performance metrics.
- Implements processes to monitor data quality, ensuring accuracy and availability for stakeholders.
- Contributes to engineering documentation and tests data ecosystem reliability.
- Leads projects to ensure timely completion and adherence to expectations, following established DevOps procedures.
- Ensures alignment with strategy and direction set by the leadership and demonstrates willingness to commit to a direction and drive operations to completion.
- Analyzes data engineering trends to ensure alignment with industry best practices and continuously improve associated techniques within Servus to meet its information needs.
- Contributes to Servus culture and data maturity needs through effective communications with peers within other areas across Servus.
- Monitors data protection controls, identifies gaps, and recommends solutions in collaboration with Security, Privacy, and Risk Management groups.
- Identifies, designs, and implements internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
Requirements
- Bachelor’s degree or diploma required in Computer Science, Engineering, Management Information Systems, or related field
- Experience with Cloud data platform(s), such as: Azure, Databricks, or Synapse
- ETL/ELT pipeline experience with ADF, Synapse Pipelines, DBT, or Databricks DLT
- Advanced SQL and intermediate Python experience (5+ years)
- Data Modeling experience with schema design and dimensional data models (4+ years)
- Experience with Agile Software Development methodologies
- Experience with ML libraries and frameworks
- Strong understanding of data science concepts and advanced analytics