MANIKANTA P
Phone: 07881117945
Email: manikantap801@gmail.com
Career Summary
I possess 6 years of experience in data engineering and analysis, with expertise in big data ecosystems,
cloud platforms, and DWH solutions across multiple industries and domains.
Skilled in designing scalable, secure, and high-performance data architectures that adhere to industry
standards, utilizing cloud-native technologies to address diverse business needs.
Extensive hands-on experience with key Azure services such as ADF, Databricks, Synapse Analytics, ADLS,
and Delta Lake for developing scalable data pipelines, optimizing data storage solutions, and integrating
DevOps for efficient data processing and analytics.
Strong expertise in data modeling and database architecture, designing conceptual, logical, and physical
data models using ER/Studio and Erwin for OLTP and OLAP systems.
Proficient in data visualization and reporting using Tableau, Power BI, Alteryx, and SSRS, enabling
data-driven decision-making and insightful business analytics.
Technologies
Data Analytics: SSRS, Power BI, Tableau, Looker, Google Analytics
Data Engineering: ADF, Databricks, Fabric, SSIS, Synapse
Data Storage: MSSQL, Oracle, Redshift, ADLS, Cosmos DB
Languages: SQL, Python, Java, PySpark, DAX, MDX, BigQuery
Other skills: DevOps, SSMS, Jira, DBT, Git, Event Hub, Jenkins, Bamboo, Selenium, Appium, MongoDB
Certifications
Microsoft Certified: Azure Data Engineer Associate (DP-203)
AWS Certified Data Analytics – Specialty (DAS-C01)
CMI Level 7 Certificate in Strategic Management and Leadership Practice from Chartered Management
Institute, U.K.
Education
Master's Degree in Data Science from Coventry University.
Bachelor's Degree in Electrical Engineering from J.N.T.U University.
Professional Work Experience
Organization: Essure Technologies
Duration: Apr 2024 – Present
Roles & Responsibilities:
Worked closely with key stakeholders to gather requirements and design data models for data warehouses
and ETL pipelines, ensuring alignment with business objectives.
Developed efficient data processing workflows in Databricks, leveraging Spark for large-scale data
transformation and processing, enabling seamless handling of complex datasets.
Ensured high data quality by implementing robust validation, cleansing, and transformation techniques in
ADF and Databricks, maintaining accuracy and consistency in critical passenger and flight data while
meeting regulatory reporting standards.
Designed and implemented database solutions using Synapse Analytics and ADLS, facilitating efficient data
storage and processing for business intelligence reports and analytics.
Developed and automated unit tests and participated in integration testing for ETL pipelines, ensuring data
quality and robust error handling during production deployments.
Created Power BI and SSRS reports using report parameters, drop-down parameters, and multi-valued
parameters; debugged parameter issues; and built matrix reports and charts.
Implemented version control systems and continuous integration/continuous deployment pipelines for data
engineering projects to ensure proper collaboration and automated testing.
Environment: SQL, Python, ADLS, Databricks, Synapse, DevOps, Functions, Power BI
Organization: Value Momentum
Duration: Jan 2022 – Dec 2022
Roles & Responsibilities:
Optimized Spark jobs for complex data transformations, aggregations, and processing across large datasets
by leveraging Spark’s distributed computing capabilities.
Automated data pipelines in ADF using various triggers and scheduled workflows, minimizing manual
efforts and ensuring timely updates to operational data for improved efficiency.
Established data lineage and metadata management strategies using ADF, Databricks, ADLS, and Delta Lake
to track data movement and transformations, enhancing data governance.
Integrated ADLS with Power BI and Azure Synapse Analytics to develop interactive and dynamic
dashboards, enabling self-service analytics and data-driven decision-making.
Designed compelling storytelling reports using Power BI Desktop, Power Query, DAX, and interactive
features, incorporating slice-and-dice capabilities for deeper insights.
Implemented data governance frameworks using Azure Key Vault for secure credential management and
leveraged Azure Purview to maintain data consistency and compliance.
Environment: SQL, Python, PySpark, ADLS, Databricks, Synapse, Fabric, DevOps, Functions
Organization: Wipro Technologies
Duration: Aug 2021 – Dec 2021
Roles & Responsibilities:
Conducted workshops and requirement-gathering sessions with stakeholders and users.
Created and implemented Synapse pipelines using linked services, datasets, and pipelines to extract,
transform, and load data from sources such as Azure SQL, Blob Storage, and ADLS, and developed Databricks
ETL pipelines using notebooks, Spark DataFrames, Spark SQL, and Python scripting.
Developed data processing tasks using PySpark, such as reading data from external sources, merging data,
performing data enrichment, and loading into target data destinations.
Automated ADF pipelines using various triggers, created Logic Apps tasks to send email notifications on
pipeline run failures, and monitored pipelines, triggers, and linked services.
Optimized Databricks and Synapse performance using PySpark by tuning performance settings, such as
batch interval time, level of parallelism, and memory configurations.
Developed event-driven architectures using Event Grid, integrating with Cosmos DB to enable real-time
data processing and seamless communication between distributed systems.
Environment: SQL, Python, PySpark, ADLS, Databricks, Synapse, Fabric, SSIS, Cosmos DB
Organization: ZenQ (Qualitest)
Duration: Feb 2018 – Jul 2021
Roles & Responsibilities:
Collaborated with business stakeholders to understand reporting needs and requirements, ensuring
alignment of BI, ETL, and data storage solutions with business goals.
Enhanced and deployed SSIS packages from the development server to the production server and scheduled
jobs for executing the stored SSIS packages.
Configured the SQL mail agent to send automatic emails when SSIS packages fail or succeed.
Engineered interactive Power BI reports with drill-down capabilities and custom visuals, significantly
improving data exploration and user engagement.
Architected a logical dimensional model for a new data mart using Erwin and the Kimball methodology for
DWH design, facilitating enhanced data organization and accessibility.
Designed and developed fact and dimension tables with varying granularity levels to populate the data mart
from the existing data warehouse, optimizing data structure.
Created data marts and multi-dimensional models such as star schema and snowflake schema.
Environment: SSIS, SSAS, SSRS, Power BI, Tableau, Snowflake, SQL, SAP BO, SAP BE
Organization: ZenQ (Qualitest)
Duration: Mar 2017 – Jan 2018
Roles & Responsibilities:
Validated Power BI dashboards with Java and Python using open-source automation tools such as Selenium
and Appium, ensuring high data coverage of business metrics.
Designed and implemented data-creation frameworks for automated test execution, generating large sets
of test data in various formats (CSV, JSON, XML) for data-driven tests.
Used Jenkins and Bamboo for continuous integration to run automated test suites as part of the CI/CD pipeline,
achieving faster and more reliable releases.
Created and maintained SQL scripts for validating database integrity during test execution and verifying
test data.
Provided detailed reports and collaborated with stakeholders to ensure that all critical issues were
addressed before release in an Agile environment.
Environment: JIRA, Jenkins, Bamboo, SQL, Java, Python, Selenium, Appium, Power BI, MongoDB