Wisdom Ark
  • Mission
  • About
  • ark-insights
  • Events
  • Services
    • ADIN
    • AI-GS
    • AIR
    • AARN
  • Careers
Senior Data Engineer

📍Location: New York City

wisdom-ark is looking for a Senior Data Engineer to design, build, and optimize scalable data pipelines and infrastructure using AWS, Apache Spark, and Apache Flink. This role will play a key part in managing real-time and batch data processing systems while ensuring high availability, reliability, and performance. The ideal candidate should have strong DevOps experience, enabling smooth deployment, monitoring, and automation of data workflows.


This is a hybrid role (3 days per week in our NYC office).

What You’ll Do

  • Design and develop scalable, real-time, and batch data pipelines using Apache Spark and Apache Flink on AWS.
  • Architect data lake and warehouse solutions to support analytics, machine learning, and business intelligence workloads.
  • Implement event-driven and streaming data solutions using Kafka, Kinesis, or Flink.
  • Optimize ETL/ELT workflows for performance, cost efficiency, and scalability.
  • Work on infrastructure automation and CI/CD pipelines for data workflows using Terraform, Docker, and Kubernetes.
  • Monitor and troubleshoot data pipelines, ensuring reliability and efficiency in production environments.
  • Collaborate with Data Scientists, Analysts, and Software Engineers to develop and maintain data solutions.
  • Implement best practices for data governance, security, and compliance.

What You’ll Bring

  • 5+ years of experience in data engineering, with expertise in big data processing frameworks (Spark, Flink, or similar).
  • Strong experience with AWS cloud services (S3, Glue, EMR, Lambda, Kinesis, Redshift, Athena, etc.).
  • Expertise in streaming data processing using Apache Flink or Kafka Streams.
  • Proficiency in Python, Scala, or Java for data engineering and pipeline development.
  • Experience in DevOps practices, including CI/CD pipelines, Terraform, Kubernetes, and Docker.
  • Strong understanding of SQL, distributed computing, and data modeling for large-scale systems.
  • Knowledge of monitoring, logging, and debugging tools (e.g., Prometheus, Grafana, CloudWatch).
  • Experience with workflow orchestration tools like Apache Airflow or Step Functions.
  • Familiarity with data governance, security best practices, and compliance frameworks.

Nice to Have

  • Experience with lakehouse architectures (Databricks, Delta Lake, or Apache Iceberg).
  • Hands-on experience with NoSQL databases like DynamoDB, Cassandra, or MongoDB.
  • Exposure to machine learning pipelines and feature stores.

Why Join Us?

  • Work on cutting-edge big data technologies in a dynamic and innovative environment.
  • Be part of a team that builds large-scale data platforms powering analytics and AI.
  • Gain exposure to large-scale big data solutions and domain knowledge in economic data.
  • Competitive salary, remote-friendly work environment, and growth opportunities.

Apply Now

Ready to make your mark? If you’re excited about building scalable data platforms that power analytics and AI, we’d love to hear from you.


Copyright © 2025 Wisdom Ark - All Rights Reserved.
