Data Engineer Position Available In Miami-Dade, Florida
Tallo's Job Summary: The Data Engineer position is a 6+ month contract with possible extension, based in Plano, TX but fully remote. Responsibilities include designing and developing applications, collaborating with data scientists, building data pipelines using AWS services, and ensuring data quality. Required skills include expertise in data engineering, Databricks, PySpark, SQL, and AWS services. Contact dtomar@judge.com for more information.
Job Description
Location: REMOTE
Description:
Job Title: Data Engineer
Type: 6+ Months Contract, with possibility of extension
Work Location: Plano, TX (Remote)
Overview:
10+ years of work experience with expertise in data engineering and enterprise data warehousing. Good experience with Databricks (PySpark), ETL (Talend and Informatica Cloud), programming (Python), AWS, SQL (Azure SQL, SQL Server, Teradata, Redshift, and PostgreSQL), scripting (Unix, PowerShell), and data modeling, along with good business acumen.
Job Duties:
- Design, develop, and maintain scalable and robust applications using Databricks, Python, SQL, and various AWS technologies.
- Collaborate with data scientists and analysts to understand data requirements and deliver solutions.
- Design and build scalable data pipelines using AWS services such as AWS Glue, Amazon Redshift, and S3.
- Develop efficient ETL processes for data extraction, transformation, and loading into data warehouses and data lakes.
- Optimize and troubleshoot existing data pipelines for performance and reliability.
- Ensure data quality and integrity across various data sources.
- Implement data security and compliance best practices.
- Monitor data pipeline performance and perform necessary maintenance and updates.
- Document data pipeline processes and technical specifications.
MANDATORY SKILLS
- 10+ years of experience in data engineering, data modeling, and data warehousing.
- Experience and proficiency with Databricks and PySpark.
- 10+ years of SQL experience, including work with relational databases (Azure SQL, SQL Server, Teradata, Redshift, and PostgreSQL).
- Strong experience with data warehousing concepts and ETL processes (Talend and Informatica Cloud).
- Experience with big data tools such as Kafka, Spark, and Hadoop.
- Experience scripting in Unix/Linux/CentOS environments: Unix shell, PowerShell, Perl, Python, and regular expressions.
- Hands-on experience with AWS services including S3, Lambda, API Gateway, and SQS.
- Strong data engineering skills on AWS, with proficiency in Python.
- Experience with batch job scheduling and managing data dependencies.
- Excellent problem-solving and analytical skills.
BASIC QUALIFICATIONS
Bachelor’s degree in Computer Science, Engineering, MIS, or a related field.
NICE TO HAVE
- Experience with AWS big data services such as Amazon EMR and Kinesis.
- Familiarity with data visualization tools such as Tableau or Power BI.
- Knowledge of containerization technologies such as Docker and Kubernetes.
Contact: dtomar@judge.com
This job and many more are available through The Judge Group. Find us on the web at www.judge.com.