eSimplicity · posted 1 day ago
Full-time
Columbia, MD

About the position

eSimplicity is a modern digital services company that works across government, partnering with our clients to improve the lives and ensure the security of all Americans, from soldiers and veterans to kids and the elderly, and to defend national interests on the battlefield. Our engineers, designers, and strategists cut through complexity to create intuitive products and services that equip Federal agencies with solutions to courageously transform today for a better tomorrow for all Americans.

We're looking for a seasoned Staff Software Engineer with deep experience working in large-scale Databricks ecosystems. This role is ideal for someone eager to explore and build tools that help move, manage, and govern large-scale data across interconnected platforms. You'll build web interfaces, backend services, and automated workflows that power our internal helper tools, support data mesh strategies, and manage authenticated access to distributed data environments. You'll collaborate closely with engineers, DevOps, product owners, and data architects to rapidly prototype, build, and scale data-aware applications and infrastructure that enable secure and efficient data movement and integration.

This position is contingent upon award.

Responsibilities

  • Leads and mentors all other data roles in the program.
  • Identifies and owns all technical solution requirements in developing enterprise-wide data architecture.
  • Creates project-specific technical design, product and vendor selection, application, and technical architectures.
  • Provides subject matter expertise on data and data pipeline architecture and leads the decision process to identify the best options.
  • Serves as the owner of complex data architectures, with an eye toward constant reengineering and refactoring to ensure the simplest, most elegant system that accomplishes the desired outcome.
  • Ensures strategic alignment of technical design and architecture with business growth and direction, and stays on top of emerging technologies.
  • Develops and manages product roadmaps, backlogs, and measurable success criteria and writes user stories.
  • Expands and optimizes our data and data pipeline architecture, and optimizes data flow and collection for cross-functional teams.
  • Supports software developers, database architects, data analysts, and data scientists on data initiatives and ensures that the optimal data delivery architecture is consistent across ongoing projects.
  • Develops new pipelines and maintains existing ones; updates Extract, Transform, Load (ETL) processes; creates new ETL features; builds PoCs with Redshift Spectrum, Databricks, etc.
  • Implements, with the support of project data specialists, large-dataset engineering: data augmentation, data quality analysis, data analytics (anomalies and trends), data profiling, data algorithms, and the measurement and development of data maturity models; develops data strategy recommendations.
  • Assembles large, complex data sets that meet functional and non-functional business requirements.
  • Identifies, designs, and implements internal process improvements, including re-designing data infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
  • Builds the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using AWS and SQL technologies.
  • Builds analytical tools that utilize the data pipeline and provide actionable insight into key business performance metrics, including operational efficiency and customer acquisition.
  • Works with stakeholders, including data, design, product, and government stakeholders, and assists them with data-related technical issues.
  • Writes unit and integration tests for all data processing code.
  • Works with DevOps engineers on CI, CD, and IaC.
  • Reads specs and translates them into code and design documents.
  • Performs code reviews and develops processes for improving code quality.

Requirements

  • All candidates must pass a public trust clearance through the U.S. Federal Government. This requires candidates to either be U.S. citizens or pass clearance through the Foreign National Government System, which requires having lived within the United States for at least 3 of the previous 5 years and holding a valid, non-expired passport from their country of birth along with appropriate visa/work permit documentation.
  • Minimum 10 years of relevant experience in software engineering.
  • Minimum 2 years working in a large-scale Databricks implementation.
  • Proficiency in at least one of the following languages: TypeScript, JavaScript, Python.
  • Proven experience working on large-scale system architectures and petabyte-scale data systems.
  • Proficient in automated testing frameworks (PyTest, Jest, Cypress, Playwright) and testing best practices.
  • Experience developing, testing, and securing RESTful and GraphQL APIs.
  • Proven track record with AWS cloud architecture, including networking, security, and service orchestration.
  • Experience with containerization and deployment using Docker, and infrastructure automation with Kubernetes and Terraform.
  • Familiarity with Redis for caching or message queuing.
  • Knowledge of performance monitoring tools like Grafana, Prometheus, and Sentry.
  • Familiarity with Git, Git-based workflows, and release pipelines using GitHub Actions and CI/CD platforms.
  • Comfortable working in a tightly integrated Agile team (15 or fewer people).
  • Strong written and verbal communication skills, including the ability to explain technical concepts to non-technical stakeholders.

Nice-to-haves

  • Strong experience with modern frameworks such as React.js, Next.js, Node.js, Flask.
  • Deep knowledge of working with relational and NoSQL databases (PostgreSQL, MySQL, MongoDB).
  • Experience working with authentication/authorization frameworks like OAuth, SAML, Okta, Active Directory, and AWS IAM (ABAC).
  • Familiarity with data mesh principles and domain-oriented architectures, and experience connecting data domains securely.
  • Knowledge of event-driven architectures and systems like Kafka, Kinesis, RabbitMQ, or NATS.
  • Experience exploring or building ETL pipelines and data ingestion workflows.
  • Strong grasp of access control, identity management, and federated data governance.
  • CMS and healthcare expertise: in-depth knowledge of CMS regulations and experience with complex healthcare projects, in particular data infrastructure-related projects or similar.
  • Demonstrated success providing support within the CMS OIT environment, ensuring alignment with organizational goals and technical standards.
  • Demonstrated experience and familiarity with CMS OIT data systems (e.g., IDR-C, CCW, EDM).

Benefits

  • We offer highly competitive salaries and full healthcare benefits.