
Data Engineer

The Friedkin Group
United States, Texas, Houston
1375 Enclave Parkway
Jan 17, 2025

Living Our Values
All associates are guided by Our Values. Our Values are the unifying foundation of our companies. We strive to ensure that every decision we make and every action we take demonstrates Our Values. We believe that putting Our Values into practice creates lasting benefits for all of our associates, shareholders, and the communities in which we live.

Why Join Us

  • Career Growth: Advance your career with opportunities for leadership and personal development.
  • Culture of Excellence: Be part of a supportive team that values your input and encourages innovation.
  • Competitive Benefits: Enjoy a comprehensive benefits package that looks after both your professional and personal needs.

Total Rewards
Our Total Rewards package underscores our commitment to recognizing your contributions. We offer a competitive and fair compensation structure that includes base pay and performance-based rewards. Compensation is based on skill set, experience, qualifications, and job-related requirements. Our comprehensive benefits package includes medical, dental, and vision insurance, wellness programs, retirement plans, and generous paid leave. Discover more about what we offer by visiting our Benefits page.

A Day In The Life
The Data Engineer I, under the leadership of the Director of IT, will assist in building, maintaining, and optimizing data pipelines, data stores, and data lakes using AWS and Databricks. This position is part of the GST Data Science and Analytics team and works closely with other engineers, data scientists, and analysts to ensure our data infrastructure aligns with business goals and enables data-driven decision-making. This Data Engineer will bring a strong foundation in data engineering and an eagerness to learn, grow, and contribute to impactful data projects and solutions. The person in this role is motivated, detail-oriented, passionate about data and analytics, and thrives in a dynamic, fast-paced environment.

As a Data Engineer I, you will:

  • Design, develop, and maintain scalable data pipelines using Databricks on AWS for efficient ingestion, processing, and storage of large datasets.
  • Work with Databricks clusters to perform data transformations and optimize performance for various data processing jobs.
  • Use AWS services such as S3, Glue, Lambda, RDS, EC2, Athena, EMR, MSK, Kinesis, and SQS for efficient data handling and processing.
  • Implement ETL/ELT processes using Databricks notebooks, Spark, and PySpark, ensuring data integrity and quality.
  • Collaborate with senior engineers to automate and improve data processes and workflows for scalability, cost-efficiency, and alignment with industry best practices.
  • Collaborate with other teams, including data scientists and business analysts, to understand requirements and deliver optimized data solutions for machine learning and analytics projects.
  • Ensure the security of sensitive data by following best practices for encryption and access management.
  • Monitor and maintain the performance and availability of data pipelines. Troubleshoot issues in the data pipelines and resolve problems with data ingestion, processing, and storage.
  • Continuously explore new tools and techniques to improve efficiency and performance of data pipelines to support GST's evolving data needs.
  • Document data pipelines, processes, and best practices for knowledge sharing.
  • Participate in data governance and compliance efforts to meet regulatory requirements.
  • Keep abreast of industry trends and emerging technologies in data engineering.

What We Need From You

  • Bachelor's Degree in Computer Science, Data Science, MIS, Engineering, Mathematics, Statistics, or another related discipline from a four-year college or university, and 1-2 years of hands-on data engineering experience, with relevant experience in AWS and Databricks environments.
  • Advanced working SQL knowledge, including experience with relational databases and query authoring, as well as working familiarity with a variety of databases (preferred).
  • Experience building and optimizing "big data" pipelines, architectures, and data sets (preferred).
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement (preferred).
  • Hands-on experience with AWS S3, EC2, Glue, Lambda, and RDS
  • Experience with Databricks data store and processing
  • Knowledge of ETL tools and frameworks, such as Apache Spark, and experience with data transformation
  • Strong SQL skills for querying and transforming data in both relational databases and a lakehouse built in Databricks
  • Experience with data modeling, warehousing and building ETL pipelines
  • Strong programming skills (e.g., Python, PySpark, or Scala)
  • Knowledge of Delta Lake and Lakehouse Architecture on Databricks for efficient data storage and querying
  • Basic understanding of cloud computing concepts, especially AWS networking and security
  • Familiarity with AWS data storage solutions like S3, DynamoDB, and RDS for data warehousing and structured/unstructured data storage
  • Familiarity with containerization technologies such as Docker and Kubernetes for deploying data solutions
  • Experience with CI/CD pipelines for automating the deployment of data pipelines
  • Experience with version control systems like Git for collaboration and source code management
  • Strong problem-solving skills with an attention to detail, particularly in troubleshooting data pipeline issues
  • Ability to communicate technical solutions effectively to both technical and non-technical stakeholders
  • Drives results by consistently achieving completion and closure of tasks, even in the face of obstacles and setbacks
  • Adaptive personality and ability to function in a fast-paced environment
  • A problem-solving attitude that can adapt to varying timelines
  • A strong attention to detail combined with the ability to prioritize tasks
  • Strong interpersonal skills; promoting healthy relationships and team dynamics
  • Certifications specific to BI technologies and Agile areas of focus are a strong advantage.

Physical and Environmental Requirements
The physical requirements described here are representative of those that must be met by an associate to successfully perform the essential functions of the job. While performing the duties of the job, the associate is required on a daily basis to analyze and interpret data, communicate, and remain in a stationary position for a significant amount of the workday, and frequently access, input, and retrieve information from the computer and other office productivity devices. The associate is regularly required to move about the office and around the corporate campus. The associate must frequently move up to 10 pounds and occasionally move up to 25 pounds.

Travel Requirements
Minimal travel is required for this position (up to 20% of the time, on a domestic basis).

Join Us
The Friedkin Group and its affiliates are committed to ensuring equal employment opportunities, including providing reasonable accommodations to individuals with disabilities. If you have a disability and would like to request an accommodation, please contact us at TalentAcquisition@friedkin.com. We celebrate diversity and are committed to creating an inclusive environment for all associates.

We are seeking candidates legally authorized to work in the United States, without sponsorship.
