
Principal Data Engineer
CA
Irvine, CA or Irving, TX
Permanent
JN -112020-3446

Job Description:

Principal Software Engineer Job Description

We are looking to hire an experienced Principal Software Engineer to modernize all existing software. The Principal Software Engineer’s responsibilities include modernizing and improving the functionality of existing software and ensuring that the design, application, and maintenance of software meet quality standards. You should also be able to mentor, guide, and train other engineers.

To be successful as a Principal Software Engineer, you should be able to evaluate users’ needs, time constraints, and system limitations when developing software. A standout Principal Software Engineer stays up to date on new technologies and software development practices.

Principal Software Engineer Responsibilities:

  • Designing, coding, and debugging software.
  • Improving the performance of existing software.
  • Creating new design patterns and modernizing existing software applications.
  • Providing training to other engineers.
  • Maintaining and upgrading existing software.
  • Recommending new technologies that can help increase productivity.
  • Supervising and overseeing the technical aspects of projects.
  • Investigating software-related complaints and making necessary adjustments to ensure optimal software performance.
  • Regularly attending team meetings to discuss projects, brainstorm ideas, and put forward solutions to any issues.
  • Working closely with other data and analytics team members to optimize the company’s data systems and pipeline architecture.
  • Designing and building the infrastructure for extracting, preparing, and loading data from a variety of sources.

Job Qualifications:

Principal Software Engineer Requirements:

  • Bachelor’s degree in Computer Engineering/Computer Science or related field.
  • Strong analytical skills.
  • Good communication skills.
  • Excellent organizational and leadership skills.
  • Proven experience in software development methodologies.
  • Proven experience building complex systems.
  • Experience working with and extracting value from large, disconnected and/or unstructured datasets.
  • Demonstrated ability to build processes that support data transformation, data structures, metadata, dependency, and workload management.
  • Strong interpersonal skills and the ability to manage projects and work with cross-functional teams.
  • Advanced working knowledge of SQL and experience with relational and NoSQL databases.
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Experience with the following tools and technologies:
    • Hadoop, Spark batch, Hive, Presto, Kafka, Pivotal Cloud Foundry
    • Relational SQL and NoSQL databases
    • GCP cloud services such as Dataproc, GCE, GKE, etc.
    • Stream-processing systems such as Storm and Spark-Streaming
    • Object-oriented and functional scripting languages such as Python, Java, and Scala.