Data Engineer

We’re hiring a Data Engineer to build and maintain ETL pipelines, data warehouses, and client integrations. You’ll work hands-on with SQL and Python to ensure accurate, reliable data that powers our operations and analytics. Ideal candidates have 2+ years of data engineering experience and strong skills in SQL, Python, and data pipeline development.
Lakewood, NJ
Full Time

About the Role

We're looking for a Data Engineer with strong skills in SQL, Python, ETL processes, and data warehousing to join our growing team. You’ll be responsible for building and maintaining the data infrastructure that powers our operations, analytics, and client integrations.

This is a hands-on technical role where you’ll spend most of your time designing data pipelines, writing complex SQL queries, optimizing workflows, and ensuring the accuracy and reliability of the data that drives business decisions.

What You'll Do

  • Design, build, and maintain ETL pipelines to support data integration and transformation
  • Develop and enhance data warehousing structures to enable analytics and reporting
  • Implement and configure data integrations with new clients, ensuring clean onboarding and accurate data flow
  • Write, optimize, and troubleshoot complex SQL queries, stored procedures, and scripts
  • Use Python to automate processes, perform data manipulation, and support integration workflows
  • Monitor and improve pipeline performance, ensuring high availability and data accuracy
  • Document data flows, mappings, and technical processes for long-term maintainability

What We're Looking For

Required Skills

SQL:

  • Strong ability to write complex queries (joins, aggregations, window functions, CTEs)
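
To give a rough sense of the level we mean, here is a short, hypothetical sketch (the table, columns, and data are invented, not our schema) of a CTE combined with a window function, run from Python against an in-memory SQLite database:

```python
# Hypothetical example only -- table names and data are invented for illustration.
# Demonstrates a CTE plus a window function (running total per client by month).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, client_id INTEGER, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 101, '2024-01-05', 250.0),
        (2, 101, '2024-02-10', 400.0),
        (3, 202, '2024-01-20', 150.0);
""")

query = """
WITH monthly AS (
    SELECT client_id,
           strftime('%Y-%m', order_date) AS month,
           SUM(amount) AS monthly_total
    FROM orders
    GROUP BY client_id, month
)
SELECT client_id,
       month,
       monthly_total,
       SUM(monthly_total) OVER (
           PARTITION BY client_id ORDER BY month
       ) AS running_total
FROM monthly
ORDER BY client_id, month;
"""

for row in conn.execute(query):
    print(row)
```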

Python:

  • Experience with scripting, automation, data transformation, and working with libraries such as Pandas
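
A small, hypothetical illustration of the kind of scripting and Pandas work involved (the file name and columns are made up): a quick data-quality check over an incoming feed.

```python
# Hypothetical sketch -- file name and columns are invented for illustration.
# A small data-quality check of the kind we automate: row counts, nulls, duplicates.
import pandas as pd

def quality_report(path: str) -> dict:
    df = pd.read_csv(path)
    return {
        "rows": len(df),
        "null_counts": df.isna().sum().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
    }

if __name__ == "__main__":
    print(quality_report("client_feed.csv"))
```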

ETL / Data Integration:

  • Hands-on experience designing and maintaining data pipelines
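
As a loose, hypothetical sketch of a single pipeline step (the source file, columns, and target table are invented, and real pipelines here may use different tooling): extract a raw feed, apply a simple transform, and load it into a staging table.

```python
# Hypothetical end-to-end sketch -- source file, columns, and target table are invented.
# Extract a raw CSV feed, apply a simple transform, and load into a staging table.
import sqlite3
import pandas as pd

def run_pipeline(source_csv: str, warehouse_path: str) -> None:
    # Extract
    raw = pd.read_csv(source_csv, parse_dates=["visit_date"])
    # Transform: drop incomplete rows, derive a load month for partitioned reporting
    clean = raw.dropna(subset=["client_id", "billed_amount"]).copy()
    clean["load_month"] = clean["visit_date"].dt.strftime("%Y-%m")
    # Load: append into the warehouse staging table
    with sqlite3.connect(warehouse_path) as conn:
        clean.to_sql("stg_visits", conn, if_exists="append", index=False)

if __name__ == "__main__":
    run_pipeline("client_feed.csv", "warehouse.db")
```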

Data Warehousing:

  • Understanding of dimensional modeling, schemas, and data architecture principles
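
For illustration only, a minimal, hypothetical star schema (all names invented) of the sort this bullet refers to: a fact table keyed to client and date dimensions.

```python
# Hypothetical sketch of a minimal star schema -- names are illustrative only.
# A fact table of billed visits referencing client and date dimensions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_client (
        client_key INTEGER PRIMARY KEY,
        client_name TEXT,
        region TEXT
    );
    CREATE TABLE dim_date (
        date_key INTEGER PRIMARY KEY,   -- e.g. 20240105
        full_date TEXT,
        month TEXT,
        year INTEGER
    );
    CREATE TABLE fact_visits (
        visit_id INTEGER PRIMARY KEY,
        client_key INTEGER REFERENCES dim_client(client_key),
        date_key INTEGER REFERENCES dim_date(date_key),
        billed_amount REAL
    );
""")
conn.close()
```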

General Technical Skills:

  • Comfort working with APIs, data files, and integration workflows
  • Experience troubleshooting data quality and performance issues

Nice to Have

  • Experience with Azure, AWS, or other cloud data platforms
  • Knowledge of Airflow or other orchestration tools
  • Background with SSIS or similar ETL tools
  • Familiarity with Power BI or other BI tools
  • Exposure to DevOps concepts (Git, CI/CD)
  • Experience with performance tuning for databases and pipelines

Soft Skills

  • Strong analytical thinking and problem-solving abilities
  • Ability to translate business needs into technical specifications
  • Excellent communication skills—comfortable explaining technical concepts to non-technical users
  • Self-motivated and capable of working independently
  • High attention to detail and a commitment to data accuracy

Ideal Candidate Background

We’re looking for someone who has:

  • 2+ years of hands-on experience as a Data Engineer or similar technical data role
  • A track record of building and maintaining data pipelines and warehouse structures
  • Experience working directly with business or client stakeholders
  • The ability to ramp up quickly and be productive with minimal supervision

You might come from backgrounds like:

  • Data Engineer
  • SQL Developer transitioning into data engineering
  • ETL Developer
  • BI Developer with strong backend/ETL focus
  • Data Analyst moving into engineering

Not the Right Fit If…

This role focuses specifically on data engineering, integrations, and backend data infrastructure. If your primary background is:

  • Front-end software development
  • General IT / infrastructure without data experience
  • BI/reporting only, without ETL or SQL depth
  • Data science without engineering foundations

…this may not be the right match. We need someone who can jump in and work confidently with pipelines and data architecture.

How to Apply

Please submit:

  1. Your resume to careers@megadatahs.com
  2. Brief answers to the following:
  • Describe an ETL pipeline you built end-to-end. What made it challenging?
  • How do you ensure data accuracy and reliability in your workflows?
  • Share an example of a SQL query or Python script you wrote that solved a difficult data problem.

Quick Self-Assessment

Before applying, ask yourself:
✅ Can I write complex SQL queries using joins, window functions, and aggregations?
✅ Have I built or maintained ETL pipelines before?
✅ Do I understand data warehousing and data modeling concepts?
✅ Am I comfortable writing Python scripts for data processing?

If you answered yes to at least 3 of these, we’d love to hear from you!

Why Join Megadata

  • Join a rapidly scaling tech startup in a growing sector of healthcare
  • Work with a small, ambitious team with direct access to leadership
  • See your work directly impact growth across one of healthcare’s most important (and underserved) industries

Benefits:

  • Health insurance
  • Paid time off
  • Dental insurance
  • Hybrid work environment

Location: Lakewood, NJ

Employment Type: Full-time

We are reviewing applications on a rolling basis and looking to fill this position quickly.