Job Description

Job Title: Data Engineer (Python)

Job Overview: We are looking for a skilled and motivated Data Engineer with expertise in Python to join our growing team. As a Data Engineer, you will play a crucial role in designing, building, and maintaining robust data pipelines and infrastructure. The ideal candidate should have a strong background in data engineering, ETL processes, and data modeling, along with proficiency in Python programming.

Key Responsibilities:

  1. Data Pipeline Development: Design, implement, and maintain scalable and efficient data pipelines for processing and transferring large volumes of data.

  2. ETL Processes: Develop and optimize Extract, Transform, Load (ETL) processes to ensure accurate and timely data ingestion into the data warehouse.

  3. Data Modeling: Design and implement effective data models to support the needs of data analysts, scientists, and other stakeholders.

  4. Data Integration: Collaborate with cross-functional teams to integrate data from various sources and ensure data consistency and accuracy.

  5. Python Programming: Utilize Python to write and maintain code for data processing, automation, and integration tasks.

  6. Data Warehousing: Work with data warehousing solutions (e.g., Snowflake, Redshift) to organize and store data effectively.

  7. Performance Optimization: Identify and implement optimizations for data processing and storage to enhance overall system performance.

  8. Monitoring and Maintenance: Establish monitoring processes to ensure data quality and proactively address any issues. Perform routine maintenance and troubleshooting tasks.

  9. Documentation: Create and maintain technical documentation for data pipelines, ETL processes, and data models.

  10. Collaboration: Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and provide solutions.


Qualifications:

  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
  • Proven experience as a Data Engineer with a focus on Python programming.
  • Strong understanding of ETL processes, data modeling, and database concepts.
  • Proficiency in Python for data processing, automation, and scripting.
  • Experience with data warehousing solutions (e.g., Snowflake, Redshift).
  • Familiarity with Big Data technologies (e.g., Hadoop, Spark) is a plus.
  • Knowledge of SQL and database systems (e.g., PostgreSQL, MySQL).
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.

Preferred Skills:

  • Familiarity with cloud platforms (AWS, Azure, GCP).
  • Experience with containerization (Docker) and orchestration (Kubernetes).
  • Knowledge of version control systems (Git).
  • Understanding of Agile/Scrum methodologies.


Apply Now

Charlotte, NC
