Big Data Engineer/Data Engineer

Phoenix, AZ
Contracted
Mid Level
In-person interview round is mandatory.

Tech Stack: Big Data, Spark, Python, SQL, and GCP are a must.
  • Need a hardcore, heavy-hitting Data Engineer who is extremely skilled, works independently, and manages their own deliverables
  • Capable of writing ETL pipelines using Python from scratch
  • Expert in OOP principles and concepts
  • Ability to independently write efficient and reusable code for ETL pipelines
  • Expert in data modeling concepts such as schemas and entity relationships
  • Expert at analyzing and developing SQL queries in various dialects (SQL Server, DB2, Oracle)
  • Familiar with Airflow, including how to develop DAGs
  • Expert in data warehouses such as BigQuery and Databricks Delta Lakehouse, including how to programmatically ingest, cleanse, govern, and report data out of them
  • Expert in Spark