Big Data Operations Engineer Position Available in Mecklenburg, North Carolina
Job Description
Big Data Operations Engineer
Contract: Charlotte, North Carolina, US
Salary Range: $60.00 – $65.00 per hour
Job Code: 362002
End Date: 2025-06-14
Position Details:
Client: Banking
Job Title: Big Data Operations Engineer
Duration: 12-24 months
Location: Charlotte, NC 28202 (3 days in office, 2 days remote)
Pay: $60-65/hr
About the Role:
We are seeking a Big Data Operations Engineer to join our team. In this role, you will consult on complex initiatives with broad impact and on large-scale planning for Software Engineering.
You will review and analyze complex, multi-faceted, large-scale, or long-term Software Engineering challenges that require in-depth evaluation of multiple factors, including intangible or unprecedented ones.
You will contribute to the resolution of complex, multi-faceted situations, drawing on a solid understanding of the function, policies, procedures, and compliance requirements needed to meet deliverables.
Additionally, you will collaborate and consult strategically with client personnel.
Responsibilities:
Support infrastructure and troubleshooting for a Big Data environment, working with Data Lake partners to onboard new tenants. Support a Data Platform with 14+ applications.
Assist in the transition from the Big Data platform to the Cloud, leveraging Cloud implementation experience.
Education Qualifications:
Bachelor’s degree in Computer Science, Information Technology, or a related field, or equivalent experience.
Required Skills:
4+ years of Software Engineering experience or equivalent demonstrated through work experience, training, military experience, or education.
4+ years of experience with AWS or GCP, GitHub, MongoDB, Teradata, and Hadoop.
4+ years of experience in designing, coding, testing, debugging, and documenting for projects and programs.
3+ years of experience with Python & Unix/Shell scripting, Scala, Spark, HBase, Oozie, Flume, Druid, and Kafka.
3+ years of experience with Ansible, Airflow, and AutoSys.
1+ years of experience with Dremio and Kubernetes.
2+ years of experience setting up monitoring dashboards with Grafana or Prometheus.
Desired Skills:
Capability and mindset for data analytics at scale.
Ability to interact effectively with all levels of the business and technical organization.
Experience in Financial Services is a plus but not required.
Job Requirements
Cloud Experience
MongoDB
Teradata
Hadoop
HBase
Scala
Spark
Reach Out to a Recruiter
Recruiter: Prasanna Kaskhedikar
Email: prasanna.kaskhedikar@collabera.com