
Big Data, Python and PySpark, GCP

Job

Tata Consultancy Services

Phoenix, AZ (In Person)

$100,000 Salary, Full-Time

Posted 1 week ago (Updated 6 hours ago) • Actively hiring

Expires 6/13/2026


Job Description

Big Data, Python and PySpark, GCP

Roles and Responsibilities:
  • Develop and maintain data pipelines using Big Data processes, focusing on ingesting, storing, processing, and analyzing large datasets.
  • Design, develop, and maintain scalable ETL/ELT pipelines using PySpark, Airflow, and GCP-native tools.
  • Build and optimize data warehouses and analytics solutions in BigQuery.
  • Implement and manage workflow orchestration with Airflow/Cloud Composer.
  • Write complex SQL queries for data transformations, analytics, and performance optimization.
  • Ensure data reliability, security, and governance across pipelines.
  • Conduct performance tuning and cost optimization of BigQuery and PySpark workloads.
  • Collaborate with analysts and product teams to deliver reliable data solutions.
  • Troubleshoot, debug, and resolve production issues in large-scale data pipelines.
  • Contribute to best practices, reusable frameworks, and automation for data engineering.

Required skills and qualifications:
  • 5+ years of experience in Data Engineering/Data Warehousing; experience with Big Data technologies is a plus.
  • Expertise in distributed ecosystems.
  • Hands-on programming experience with Python.
  • Expertise in Hadoop and Spark architecture and their working principles.
  • Hands-on experience writing and understanding complex SQL (Hive/PySpark DataFrames), including optimizing joins while processing large volumes of data.
  • Experience in UNIX shell scripting.
  • Ability to design and develop optimized data pipelines for batch and real-time data processing.
  • Experience in analysis, design, development, testing, and implementation of system applications.
  • Demonstrated ability to develop and document technical and functional specifications and to analyze software and system processing flows.

Salary Range: $90,000–$110,000 a year
Location: Phoenix, AZ
Job Function: TECHNOLOGY
Role: Engineer
Job Id: 410774
Desired Skills: Big Data
