
Junior Data Engineer Data Engineering (With Retail Domain)


Alchemy Software Solutions LLC

Columbus, OH (In Person)

Full-Time

Posted 1 week ago (Updated 4 days ago) • Actively hiring

Expires 6/5/2026


Job Description

Junior Data Engineer Data Engineering (With Retail Domain)
Location:
Columbus, Ohio (On-site / Hybrid)
Role Summary:
We are seeking a skilled Data Engineer with 5+ years of hands-on experience building and maintaining robust data pipelines, data lakes, and analytical platforms. Based in Ohio, this is an individual contributor role with direct engagement with business stakeholders across the customer's Ohio-based operations. The successful candidate will own end-to-end data engineering deliverables independently, translating business requirements into scalable, production-grade data solutions that power analytics and AI/ML workloads.
Key Responsibilities:
- Design, build, and maintain scalable ETL/ELT pipelines using Python, Apache Spark, and Azure Data Factory to ingest data from diverse retail and operational sources into a centralized data lake (Microsoft Fabric / OneLake)
- Engage directly with Ohio-based business teams (supply chain, store operations, finance, and merchandising) to gather data requirements, understand domain logic, and translate business needs into well-defined data models and pipeline specifications
- Independently own the full data engineering lifecycle for assigned domains, from requirements gathering and data modelling through pipeline deployment, monitoring, and ongoing optimization
- Build and manage Bronze, Silver, and Gold data layers in the lakehouse architecture, applying data quality checks, schema validation, and partitioning strategies to ensure reliable, performant datasets for downstream analytics and ML teams
- Participate actively in agile ceremonies (sprint planning, stand-ups, retrospectives), self-manage delivery against sprint commitments, and proactively surface risks or blockers without requiring escalation
- Implement and enforce data quality frameworks, lineage tracking, and cataloguing standards using Microsoft Purview, ensuring datasets meet governance and compliance requirements (GDPR, CCPA)
- Support and contribute to Global Fulfillment and supply chain data initiatives, acting as the primary data engineering liaison for Ohio-based operational teams and ensuring timely delivery of data products that enable real-time decision-making
- Stay current with emerging data engineering tools, patterns (e.g. data mesh, streaming architectures), and Microsoft Fabric capabilities, and apply relevant advancements to continuously improve the data platform
Qualifications:
- 5+ years of hands-on experience as a Data Engineer, with a proven track record of independently delivering production-grade data pipelines and data products in a cloud-based environment
- Proficiency in Python and SQL for data transformation, with hands-on experience using Apache Spark (PySpark) for large-scale batch and streaming data processing
- Solid understanding of data modelling concepts (dimensional modelling, star/snowflake schemas, data vault) and experience building lakehouse architectures with Delta Lake or Apache Iceberg
- Demonstrated ability to work directly with non-technical business stakeholders: gathering requirements, explaining data concepts in plain language, and iterating quickly on feedback to deliver business value
- Experience with version control (Git), CI/CD pipelines, and DataOps practices, including automated testing of data pipelines and Infrastructure-as-Code (Terraform or Bicep)
- Familiarity with data governance frameworks, data cataloguing (Microsoft Purview or Apache Atlas), and implementing data quality rules and observability monitoring within pipelines
- Strong analytical mindset, attention to detail, and a self-starter attitude, comfortable driving work forward independently in a fast-paced retail technology environment with minimal day-to-day supervision
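To make the Bronze/Silver/Gold layering and data quality checks mentioned above concrete, here is a minimal plain-Python sketch of the pattern. The record fields (`store_id`, `sku`, `qty`, `unit_price`) and quality rules are illustrative assumptions, not part of the posting; a production pipeline would implement the same stages with PySpark and Delta Lake tables rather than in-memory lists.

```python
from collections import defaultdict

# Bronze layer (hypothetical sample): raw, untrusted records as ingested.
bronze_sales = [
    {"store_id": "OH-001", "sku": "A1", "qty": "3", "unit_price": "9.99"},
    {"store_id": "OH-001", "sku": "A1", "qty": "-2", "unit_price": "9.99"},  # fails qty >= 0 rule
    {"store_id": "OH-002", "sku": "B7", "qty": "1", "unit_price": "bad"},    # fails type validation
]

def to_silver(rows):
    """Silver layer: enforce schema/types and basic business rules."""
    clean, rejected = [], []
    for row in rows:
        try:
            rec = {
                "store_id": str(row["store_id"]),
                "sku": str(row["sku"]),
                "qty": int(row["qty"]),
                "unit_price": float(row["unit_price"]),
            }
        except (KeyError, ValueError):
            rejected.append(row)  # schema or type violation
            continue
        if rec["qty"] < 0 or rec["unit_price"] < 0:
            rejected.append(row)  # business-rule violation
            continue
        clean.append(rec)
    return clean, rejected

def to_gold(silver_rows):
    """Gold layer: aggregate revenue per store for downstream analytics."""
    revenue = defaultdict(float)
    for rec in silver_rows:
        revenue[rec["store_id"]] += rec["qty"] * rec["unit_price"]
    return dict(revenue)

silver, rejects = to_silver(bronze_sales)
gold = to_gold(silver)
```

Keeping rejected rows, rather than silently dropping them, is what allows the quality frameworks and observability monitoring named in the posting to report on pipeline health.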
Alchemy: Transforming Your Professional Vision into Reality
Since our inception in 2013, Alchemy has been dedicated to reshaping organizational performance through innovative IT services. With a vision to empower businesses seeking a transformative edge, we've positioned ourselves at the forefront of digitization and software modernization. Our name reflects our mission: to transmute technology into gold-standard solutions for our esteemed clients. We proudly serve a diverse range of sectors, including IT and ITES, BFSI, Telecom and Media, Automotive, Manufacturing, Energy, Oil and Gas, Real Estate, Retail, Healthcare, and more. With a global footprint spanning the USA, India, Europe, Canada, Singapore, Japan, and parts of Central and West Africa, we harness a unique blend of competencies, frameworks, and cutting-edge technologies. Together, we drive growth and innovation across industries, helping organizations turn their visions into reality.
Alchemy: Connecting Talent with Opportunities (Diversity, Equity and Inclusion)
