We’re currently partnering with a leading client to fill an Application Development Principal (Data Engineer/Architect) position — a 12+ month remote contract.
If you’re interested in exploring this opportunity, please share your updated resume with Anil at
[email protected] at your earliest convenience.
Title: Application Development Principal (Data Engineer/Architect)
Location: Remote
Duration: 12+ months (contract)
Client: Healthcare company
Job Description:
Must Have Skills:
• Experience working with both business and IT leaders
• Teradata
• Databricks
• Spark/PySpark
• 13–15+ years of experience; takes initiative, has command of the tools, and works in a consultative manner rather than waiting for direction.
Duties:
• Collaborate with business and technical stakeholders to gather and understand requirements.
• Design scalable data solutions and document technical designs.
• Develop production-grade, high-performance ETL pipelines using Spark and PySpark.
• Perform data modeling to support business requirements.
• Write optimized SQL queries using Teradata SQL, Hive SQL, and Spark SQL across platforms such as Teradata and Databricks Unity Catalog.
• Implement CI/CD pipelines to deploy code artifacts to platforms like AWS and Databricks.
• Orchestrate Databricks jobs using Databricks Workflows.
• Monitor production jobs, troubleshoot issues, and implement effective solutions.
• Actively participate in Agile ceremonies including sprint planning, grooming, daily stand-ups, demos, and retrospectives.
Skills:
• Strong hands-on experience with Spark, PySpark, Shell scripting, Teradata, and Databricks.
• Proficiency in writing complex and efficient SQL queries and stored procedures.
• Solid experience with Databricks for data lake/data warehouse implementations.
• Familiarity with Agile methodologies and DevOps tools such as Git, Jenkins, and Artifactory.
• Experience with Unix/Linux shell scripting (KSH) and basic Unix server administration.
• Knowledge of job scheduling tools like CA7 Enterprise Scheduler.
• Hands-on experience with AWS services including S3, EC2, SNS, SQS, Lambda, ECS, Glue, IAM, and CloudWatch.
• Expertise in Databricks components such as Delta Lake, Notebooks, Pipelines, cluster management, and cloud integration (Azure/AWS).
• Proficiency with collaboration tools like Jira and Confluence.
• Demonstrated creativity, foresight, and sound judgment in planning and delivering technical solutions.
Required Skills:
• Spark
• PySpark
• Shell Scripting
• Teradata
• Databricks
Additional Skills:
• AWS SQS; AWS S3; AWS EC2; AWS SNS; AWS Lambda; AWS ECS; AWS Glue; AWS IAM; AWS CloudWatch
• Foresight
• Sound Judgment
• SQL; Stored Procedures
• Databricks For Data Lake/Data Warehouse Implementations
• Agile Methodologies
• GIT; Jenkins; Artifactory
• Unix/Linux Shell Scripting; Unix Server Administration
• CA7 Enterprise Scheduler
• Databricks Delta Lake; Databricks Notebooks; Databricks Pipelines; Databricks Cluster Management; Databricks Cloud Integration (Azure/AWS)
• JIRA; Confluence; Creativity
Apply Now