
Backend Data Engineer


Virtasant

Austin, TX (In Person)

Full-Time

Posted 2 days ago (Updated 7 hours ago) • Actively hiring

Expires 6/9/2026


Job Description

About Virtasant

Virtasant is a global technology services company with a network of over 4,000 technology professionals across 130+ countries. We specialize in cloud architecture, infrastructure, migration, and optimization, helping enterprises scale efficiently while maintaining cost control. Our clients range from Fortune 500 companies to fast-growing startups, relying on us to build high-performance infrastructure, optimize cloud environments, and enable continuous delivery at scale.

About the role

Our client is a US company that specializes in online real estate marketplaces, making a difference in the way people buy and sell their homes. For them, we're looking for a Backend Data Engineer to build data pipelines and support the client's engineering team across their two real estate software platforms. You need to be based in Latin America. In this role, you'll design pipelines and data stores that manage terabytes of real estate data, and you'll help define and refine the overall data strategy for a growing tech company.

Location
  • Latin America

Working hours
  • 8 hours a day, Mon-Fri, with an overlap with US Central Time from 10 AM to 4 PM

Contract duration
  • 6 months to start, with indefinite 6-month extensions from there onward
What You'll Do
  • Design, build, and maintain our client's data pipelines and data warehouses for user-facing features, analytics, and business intelligence
  • Develop ETL ecosystem tools using Python, external APIs, Airbyte, Snowflake, dbt, PostgreSQL, MySQL, and Amazon S3
  • Define automated solutions to solve complex problems and better understand our client's data, users, and market
  • Assist the overall engineering team with database design and data flows
  • Develop engineering best practices, documentation, and process flows that facilitate collaboration and knowledge transfer

Why This Role Is Exciting

You will lead the way in identifying, scoping, building, and deploying new data solutions (pipelines, warehousing, and other infrastructure). It's an awesome match for those who are self-motivated and curious, with the desire to initiate and own projects from start to finish. It's ideal for a team player who enjoys collaborating across teams to develop solutions that best fit the products and their peers. Managing high volumes of data makes this role challenging and interesting!

What We're Looking For

Must-have experience
  • 2 to 5 years of experience working as a Data Engineer or Backend Engineer
  • Expert-level understanding of SQL-based database design and usage
  • Expert-level proficiency in at least one programming language (ideally Python, but PHP or JavaScript could work too)
  • Professional experience with high-volume ETL systems
  • Effective communication with engineering peers, project managers, and business stakeholders

Nice-to-haves
  • An advanced degree in Computer Science, Analytics, or a related field
  • Hands-on experience building data solutions in at least one public cloud environment. We use AWS.
  • Experience using Python or dbt in a data role
  • Experience with a Business Intelligence tool like Tableau, Qlik, or Amazon QuickSight

What Success Looks Like

First 30 days:
  • Successfully reading from external APIs and writing to internal databases
  • Gaining familiarity with existing data pipelines, schemas, and tooling

By 90 days:
  • Improving and maturing pipelines to process event streams and dynamically update data products
  • Contributing to data architecture decisions across multiple teams

By 6 months:
  • Owning multiple data streams that cross team boundaries
  • Playing a key role in shaping data strategy, reliability, and scalability across platforms

Our recruitment process
  • Recruiter interview (30 min)
  • Technical interview (30 min)
  • Interview with the client's Director of Engineering (60 min)
  • Take-home coding exercise (60 min maximum)
  • Interview with a small panel of leaders from the client, including the Director of Product

We strive to move efficiently from step to step so that the recruitment process can be as fast as possible.

What we offer
  • Totally remote, full-time (40h/week)
  • Work hours: 8 hours a day, with US Central Time overlap (10 AM to 4 PM)
  • Independent contractor agreement (after the first 6-month trial, it's a long-term, no-end-date contract)
  • Payment in USD, biweekly or monthly, your choice
