Python/Cloud Engineer||W2 Position Available In Mecklenburg, North Carolina

Tallo's Job Summary: Seeking a Python/Cloud Engineer for a W2 position in Charlotte, NC (onsite 3x a week) on a 1 year+ contract. Must have cloud experience (AWS, Azure, or Google Cloud Platform), Kubernetes or OpenShift, Python, Django, Airflow, Spark, and PySpark. Responsibilities include data pipeline development, container development, optimization, and integration. Candidates should have 3+ years of Apache Spark experience, 1+ years of Django and Airflow development, and proficiency in Python, Docker, Kubernetes, and cloud platforms. Bachelor's degree in Computer Science or a related field required. Hiring for a Full Stack Engineer role to design, deploy, and manage data processing solutions in a cloud-native environment.

Company:
1 Point System
Job Type:
Full-time, Onsite

Job Description

Python/Cloud Engineer||W2

Role:

Python/Cloud Engineer

Location:

Charlotte, NC (onsite 3x a week)

Duration:

1 year + contract

Must Have:

Cloud Experience (AWS or Azure or Google Cloud Platform)
Kubernetes or Openshift
Python
Django (develop new APIs)
Airflow
Spark
PySpark
Modernization of code
CI/CD
We need someone who can read older code and rewrite it for the cloud and OpenShift. Candidates without OpenShift experience will be considered if they have strong Kubernetes experience. Google Cloud Platform is not required; AWS or Azure experience is acceptable. Python and PySpark are 100% required, as the data pipelines are being rebuilt in both.
JD for Python/Cloud Full Stack Engineer.
Seeking a Full Stack Engineer to join our team, with expertise in Cloud, Python/Spark, and OpenShift. This role focuses on designing, deploying, and managing scalable data processing solutions in a cloud-native environment. You will work closely with data scientists, software engineers, and DevOps teams to ensure robust, high-performance data pipelines and analytics platforms.

Responsibilities:
Data Pipeline Development:

Design and implement large-scale data processing workflows using Apache Spark

Container Development:

Design and implement Docker images

Optimization:

Tune Spark jobs for performance, leveraging OpenShift's and the cloud platform's resource management capabilities
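Tuning of this kind usually comes down to a handful of Spark configuration settings. A minimal sketch, with placeholder values — real numbers depend on pod sizes, node capacity, and data volume:

```python
# Hypothetical tuning knobs for a Spark job on OpenShift/Kubernetes.
# These are standard Spark configuration keys; the values are examples.
spark_conf = {
    "spark.executor.instances": "4",           # executor pods requested
    "spark.executor.memory": "4g",             # sized to the pod memory limit
    "spark.executor.cores": "2",               # cores per executor pod
    "spark.sql.shuffle.partitions": "64",      # below the 200 default for smaller data
    "spark.dynamicAllocation.enabled": "true", # let the scheduler scale executors
}
```

Settings like these can be passed via `spark-submit --conf` or applied on the `SparkSession` builder.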

Integration:

Integrate Spark with other data sources (e.g., Kafka, S3, cloud storage) and sinks (e.g., databases, data lakes)
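The source/sink wiring largely reduces to connector options. A hedged sketch using the standard Spark Kafka and S3A connector option names (broker address, topic, and bucket are placeholders):

```python
# Placeholder connection settings; broker, topic, and bucket are hypothetical.
kafka_source_options = {
    "kafka.bootstrap.servers": "broker-1:9092",  # Kafka cluster endpoint
    "subscribe": "orders",                       # topic to consume
    "startingOffsets": "latest",                 # where to begin reading
}
s3_sink_path = "s3a://example-bucket/curated/orders/"

# In PySpark these would be applied roughly as:
#   df = spark.readStream.format("kafka").options(**kafka_source_options).load()
#   df.writeStream.format("parquet").option("path", s3_sink_path).start()
```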

Qualifications:
Experience:

3+ years working with Apache Spark for big data processing.
1+ years of Django and Airflow development experience.
1+ years of cloud development experience (Google Cloud preferred).

Technical Skills:

Proficiency in Spark frameworks (Python/PySpark).
Familiarity with Docker and Kubernetes concepts (e.g., pods, deployments, services, and images).
Hands-on experience with distributed systems, cloud platforms (AWS, Google Cloud Platform, Azure), and data storage solutions (e.g., S3, HDFS).

Programming:

Strong coding skills in Python, Airflow, Django; experience with shell scripting is a plus.

Education:

Bachelor’s degree in Computer Science, Engineering, or a related field.

Dice Id:

91001743

Position Id:

8648142
