Data Engineer Job Descriptions
Data Engineer
Description
We are looking for a Senior Data Engineer to join our team. You will be responsible for the design, development, and implementation of databases, data warehouses, and data lakes. You will also be responsible for the development and deployment of data processing pipelines, ETL jobs, and API endpoints. To be successful in this role, you should have in-depth knowledge of SQL, Python, Amazon Web Services, and data modeling. Ultimately, you will ensure our databases are secure and organized, and that our data pipelines are efficient and accurate. You will also analyze data to identify trends and develop insights that will help the team make informed decisions.
Responsibilities
• As a Data Engineer at Rezi, develop and maintain a reliable data platform that supports the business needs of the organization
• Design and develop data pipelines to ingest and transform data from a variety of sources, including third-party APIs, web services, and databases
• Ensure high availability and performance of the data platform, and proactively monitor and optimize data infrastructure
• Automate ETL processes and data quality checks, and develop data models to meet the needs of various business stakeholders
• Collaborate with data scientists, software engineers, and product managers to ensure data is accessible, accurate, and secure
• Develop and optimize data warehouse and data lake structures, and ensure data is properly and securely stored
• Develop data streaming solutions to ingest and process large amounts of data in real time
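To make the pipeline responsibilities above concrete, here is a minimal sketch of the extract-transform-load pattern the role centers on. All names and the in-memory "source" and "warehouse" are hypothetical placeholders, not a specific system at Rezi:

```python
# Minimal ETL sketch: extract raw records, transform (clean/normalize),
# and load them into a destination store.

def extract(source):
    """Pull raw records from a source (an in-memory list stands in for an API)."""
    return list(source)

def transform(records):
    """Normalize the email field and drop records missing the required 'id'."""
    cleaned = []
    for rec in records:
        if rec.get("id") is None:
            continue  # skip incomplete records
        cleaned.append({"id": rec["id"],
                        "email": rec.get("email", "").strip().lower()})
    return cleaned

def load(records, warehouse):
    """Upsert records into a destination keyed by id (a dict stands in for a table)."""
    for rec in records:
        warehouse[rec["id"]] = rec
    return warehouse

raw = [{"id": 1, "email": " Alice@Example.com "},
       {"email": "no-id@example.com"}]
warehouse = {}
load(transform(extract(raw)), warehouse)
```

In practice each stage would be a task in an orchestrator (e.g. a scheduled DAG) with retries and monitoring, but the extract/transform/load separation stays the same.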
Requirements
• 5+ years of industry experience in software engineering, data engineering, or related fields
• Strong programming skills in Python or Java and knowledge of SQL
• Deep understanding of data architecture and design principles, including data governance and security
• Familiarity with data warehouse architecture and data management systems, such as Hadoop, Hive, and Spark
• Expertise in building data pipelines, ETL processes, and data transformation
• Knowledge of data visualization tools such as Tableau and Power BI
• Ability to debug and troubleshoot complex data issues
• Experience with cloud-based technologies, such as AWS or Azure
• Ability to communicate effectively with stakeholders, technical and non-technical
Data Engineer
Description
We are looking for a Senior Data Engineer to join our team and help us develop robust data infrastructure and data pipelines. The ideal candidate should have strong experience in SQL, Python and Amazon Web Services (AWS). As a Senior Data Engineer, you will be responsible for developing and maintaining our data pipelines, data warehouses and data lakes. You will also design data models and create ETL processes to ensure the accuracy and integrity of our data. Additionally, you will monitor and optimize data queries, troubleshoot data issues, and collaborate with other teams to ensure data quality and accuracy. Ultimately, you should be able to build efficient, secure and well-structured data systems that meet our business needs.
Responsibilities
• As a Data Engineer at Rezi, build data pipelines, data models, and ETL processes that support data-driven decisions across the company
• Develop data warehouses and data lake solutions to store and organize large volumes of data
• Work closely with data scientists and business teams to design and implement data models
• Create and maintain data pipelines to ensure accuracy and completeness of data
• Optimize existing data architectures to improve performance, scalability and reliability
• Develop data integration solutions to ingest data from various sources
• Utilize a variety of programming languages and data processing technologies to build and optimize data pipelines
• Troubleshoot and debug data issues and data discrepancies
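The last two bullets, troubleshooting discrepancies and keeping pipelines accurate, are usually automated as data quality checks run after each load. A minimal sketch, with illustrative rules and sample rows rather than any specific framework's API:

```python
# Automated data quality checks: each check returns the indices of failing rows,
# so an empty list means the check passed.

def check_not_null(rows, column):
    """Rows where a required column is missing or None."""
    return [i for i, r in enumerate(rows) if r.get(column) is None]

def check_unique(rows, column):
    """Rows whose value in `column` duplicates an earlier row's value."""
    seen, dupes = set(), []
    for i, r in enumerate(rows):
        value = r.get(column)
        if value in seen:
            dupes.append(i)
        seen.add(value)
    return dupes

rows = [{"order_id": 1, "amount": 10.0},
        {"order_id": 1, "amount": None},   # duplicate id and null amount
        {"order_id": 2, "amount": 5.5}]

failures = {
    "amount_not_null": check_not_null(rows, "amount"),
    "order_id_unique": check_unique(rows, "order_id"),
}
```

A real pipeline would route non-empty `failures` entries to alerting so discrepancies surface before downstream consumers see them.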
Requirements
• 5+ years of industry experience in data engineering, inclusive of time spent on an MS or PhD in a relevant field
• Strong programming skills in Python and SQL
• Experience with data warehouse technologies such as Snowflake, Redshift, or BigQuery
• Ability to design, build and maintain data pipelines and ETLs
• Expertise in data modeling, data architecture, and data access design
• Experience with cloud computing, such as AWS, GCP, or Azure
• Experience with stream processing and NoSQL databases (e.g. MongoDB, Cassandra)
• Exposure to Apache Spark, Kafka, or other distributed computing frameworks
• Ability to optimize and troubleshoot data pipelines and databases
• Ability to analyze and interpret data from various sources
• Excellent problem solving and analytical skills