About The Position

The Analysis Solutions Business Area at Leidos has an opening for a Software Engineer to support an enterprise IT program. The selected candidate will join an existing team supporting extract, transform, load (ETL) workflows, so demonstrated ETL experience is required. This is an excellent opportunity to join a program with an exciting mission and a company that supports internal career mobility and professional development. Our client supports analysts by providing large datasets, methodologies, 3D models, and data visualizations to address pressing intelligence questions, and requires support to ingest, clean, store, and analyze that data.

Requirements

  • Active TS/SCI with Polygraph security clearance; US citizenship required.
  • Bachelor’s degree and 12+ years of experience; additional experience may be accepted in lieu of a degree.
  • Demonstrated professional experience in Computer Science, Computer Engineering, Systems Engineering, or closely related discipline.
  • Demonstrated professional experience with AWS cloud services, including long-term storage options, and cloud-based database services.
  • Demonstrated experience analyzing SQL database structures and mapping them between different SQL databases.
  • Demonstrated professional experience working with Postgres.
  • Demonstrated professional experience working with large datasets and high-performance compute clusters.
  • Demonstrated experience with API development techniques.
  • Demonstrated experience developing and deploying ETL processes for large data sets.
  • Demonstrated experience creating operating system level scripts to perform ETL operations on SQL databases.
  • Demonstrated professional experience with version control systems, preferably Git.
  • Demonstrated experience developing and testing software solutions for the extraction, transformation, and loading of data using the most efficient language for the task, such as C++, Python, or SQL.
  • Demonstrated experience implementing multiprocessing data-flows to parallelize ingest operations.
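To give candidates a sense of the multiprocessing ingest work named above, here is a minimal illustrative sketch in Python; the records and the transform are hypothetical examples, not part of the program's actual data environment.

```python
# Illustrative sketch only: parallelizing the transform stage of an ETL
# flow with Python's multiprocessing.Pool. The (id, value) records and
# the doubling transform are hypothetical placeholders.
from multiprocessing import Pool


def transform(record):
    # Hypothetical per-record transform: normalize a raw (id, value) pair
    # into the shape the load stage expects.
    rec_id, value = record
    return {"id": rec_id, "value": value * 2}


def parallel_ingest(records, workers=4):
    # Fan the transform out across worker processes, then collect the
    # results in order for the load stage.
    with Pool(processes=workers) as pool:
        return pool.map(transform, records)


if __name__ == "__main__":
    raw = [(1, 10), (2, 20), (3, 30)]
    print(parallel_ingest(raw))
```

In practice the pool would feed a bulk-load step rather than return a list, but the fan-out/collect pattern is the core of parallelized ingest.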

Nice To Haves

  • Demonstrated experience with Databricks and/or Hadoop.
  • Demonstrated experience with the client’s data environment.
  • Demonstrated experience exhibiting strong coordination and collaboration skills.
  • Demonstrated experience working with full-stack developers to deploy applications that leverage large data sets.
  • Demonstrated experience communicating technical concepts to non-technical audiences.

Responsibilities

  • Engage regularly with data scientists, analysts, and managers
  • Load large datasets into the client’s on-premises and cloud environments, and develop and maintain ingestion algorithms and schemas for large datasets
  • Analyze new large-volume datasets to optimize the data ingest processes, support the creation of database schemas for new data loads, and develop software tools that efficiently preprocess, modify, aggregate, load, index, and archive large data collections into clusters in near real-time
  • Ensure proper data quality and access controls are implemented, generate metrics to track data ingest statistics to maintain data integrity and provenance, and document the data-flows according to standards set by the client
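The metrics-tracking responsibility above can be sketched in a few lines; the metric names and the wrapper below are hypothetical illustrations, not the client's actual tooling.

```python
# Illustrative sketch only: wrapping a load step to record basic ingest
# metrics (row count, elapsed time) for integrity and provenance
# tracking. Metric field names are hypothetical.
import time


def ingest_with_metrics(records, load_fn):
    # Run load_fn over each record and return summary statistics that
    # could be logged or reported for provenance.
    start = time.time()
    loaded = 0
    for record in records:
        load_fn(record)
        loaded += 1
    return {"rows_loaded": loaded, "seconds": round(time.time() - start, 3)}
```

A real pipeline would also track per-source failure counts and checksums, but the wrap-and-count pattern is the essential idea.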

Benefits

  • Employment benefits include competitive compensation, Health and Wellness programs, Income Protection, Paid Leave, and Retirement.