Hardware Engineer

Egra
New York, NY · Remote

About The Position

Hi, I'm Brian, Co-Founder of Egra. We just raised $5.5M to build foundation models for brain signals, and we're looking for a hardware engineer to build the devices that make it all possible. You'll have complete ownership over your work from day one. No lengthy onboarding, no waiting for permission, no navigating layers of approval. A small founding team, hard physics problems, and the resources to solve them. You'll define the hardware direction, make design decisions, and build the physical foundation of what becomes our core technology. If you thrive with high agency and want your work to directly shape the company's trajectory, this is that opportunity.

What you'd be doing

EEG — electrical brain activity recorded from the scalp — is one of the hardest real-world signal modalities in ML: low signal-to-noise ratio, massive subject variability, and device inconsistencies. Our ML models are only as good as the data they train on, and the data is only as good as the device that captures it. We need to build our own.

As a hardware engineer, you'd work directly with us to design and build the hardware that collects the data our models learn from. The projects you'd own (EEG acquisition hardware, wearable form factors, firmware and streaming infrastructure, device benchmarking, and end-to-end integration of the data collection rig) are detailed under Responsibilities below. You'll be the bridge between the physical signal and the training data our models consume.

This isn't a role where you design something and throw it over the wall. You'll be in the room when we look at t-SNE plots and figure out whether the signal you're capturing is actually good enough for the models to learn from. Hardware and ML are tightly coupled here.

Where this is going

We're building toward a world where thought is an interface. You silently compose a message and it types itself. You navigate an AR display without lifting a finger. Software adapts to your cognitive state in real time. A universal interface between human thought and digital action.

The product we're building to get there has three layers:

  • A Neural Encoder: a foundation model that maps raw EEG into robust, reusable embeddings that work across devices, subjects, and contexts
  • A Neural API: a stable interface that any app can call: "What is the user's state?" "What intent is most likely?" "What changed?"
  • Reference applications: proving utility and driving our data collection flywheel

Near-term, the use cases are already real. A limited vocabulary of thought-to-action commands (volume, select, activate, navigate) would feel like magic to consumers. Sleep staging, stress detection, cognitive load monitoring, and engagement measurement are all feasible with today's signal quality. On the clinical side, we're pursuing avenues like epilepsy monitoring and migraine pre-emption as a wedge for high-quality data, credibility, and early revenue.

Hardware matters too. No comfortable, discreet consumer device today covers the brain regions needed for language decoding. We'll eventually design our own. Think a normal-looking baseball cap with dry electrodes hidden in the brim, or something that looks more like AirPods than a medical device. The model needs to be hardware-agnostic, because the form factors will keep evolving.

Research culture

We have a few strong opinions about how we work:

  • Speed over perfection. The first version of everything will be ugly. That's fine. We'd rather have a working prototype collecting real data this month than a beautiful design that ships in six months.
  • Hardware serves the data. Every design decision is evaluated by one question: does this produce better training data for our models? Signal quality, comfort, reliability, synchronization — all in service of the ML.
  • Internal criticism is encouraged. The fastest way to build real knowledge is to kill bad ideas early. We want people who are comfortable saying "this design won't work because..."
  • Failed experiments are documentation, not waste. We write up what doesn't work with the same care as what does.

Who we're looking for

Ideally, you have direct experience designing and building biosignal acquisition hardware — EEG, EMG, ECG, or other electrophysiology. You've already learned what works and what doesn't with wearable sensing, and you won't need to rediscover those lessons. That said, if you come from a closely related hardware domain (e.g., consumer wearables, medical devices, sensor systems) and have genuine curiosity about neurotech, we're open to that too. The specific skills we expect are listed under Requirements below.

You should NOT apply if:

  • You need a fully equipped hardware lab and a long timeline to ship anything
  • You're uncomfortable with ambiguity or making design decisions without complete information
  • You see hardware and software as separate worlds — here they're deeply intertwined
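To make the synchronization problem concrete (pairing device sample counters with host-side keystrokes and screen events at millisecond resolution), here is a minimal host-side sketch. The two-anchor linear clock map and every name in it are illustrative assumptions, not a description of our actual stack:

```python
def fit_clock_map(anchor_a, anchor_b):
    """Fit a linear map between device sample index and host time.

    Each anchor is a (sample_index, host_time_s) pair captured when a
    packet with a known sample counter arrives at the host.
    """
    (i0, t0), (i1, t1) = anchor_a, anchor_b
    rate = (i1 - i0) / (t1 - t0)  # effective samples per second
    def to_host_time(i):          # sample index -> host clock (s)
        return t0 + (i - i0) / rate
    def to_sample_index(t):       # host clock (s) -> fractional sample index
        return i0 + (t - t0) * rate
    return to_host_time, to_sample_index

# Device nominally at 250 Hz; anchors taken 10 s apart on the host clock.
to_host, to_idx = fit_clock_map((0, 10.0), (2500, 20.0))
# A keystroke logged at host time 12.3456 s lands near sample 586.4.
```

In practice you would refit the map continuously to track clock drift; two anchors are just the smallest case that shows the idea.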

Requirements

  • Experience designing analog front-end circuits for biosignal acquisition (EEG, EMG, ECG, or similar)
  • Proficiency with PCB design tools (KiCad, Altium, or Eagle) and rapid prototyping (3D printing, laser cutting)
  • Embedded firmware development (C/C++ on ARM Cortex or similar microcontrollers)
  • Understanding of electrode-skin interfaces, impedance, noise sources, and signal conditioning for wearable devices
  • Ability to work fast and scrappy — modifying off-the-shelf hardware, improvising with available components, iterating daily
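One concrete flavor of the firmware-adjacent work above: reliable streaming means noticing when the wireless link drops packets. A minimal sketch, assuming the firmware stamps each packet with a fixed-width sequence counter (the 16-bit width and all names are illustrative):

```python
def count_lost_samples(seq_numbers, counter_bits=16):
    """Count packets lost in a stream, given per-packet sequence counters.

    Handles wraparound of a fixed-width counter maintained by the
    firmware; the 16-bit width here is an illustrative assumption.
    """
    modulus = 1 << counter_bits
    lost = 0
    for prev, cur in zip(seq_numbers, seq_numbers[1:]):
        step = (cur - prev) % modulus  # wraparound-safe difference
        lost += step - 1               # a step of 1 means nothing was lost
    return lost

# Counter wraps 65535 -> 0; the packet with sequence number 1 went missing:
# count_lost_samples([65534, 65535, 0, 2]) == 1
```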

Nice To Haves

  • Direct experience designing and building biosignal acquisition hardware — EEG, EMG, ECG, or other electrophysiology
  • Background in a closely related hardware domain (e.g., consumer wearables, medical devices, sensor systems) with genuine curiosity about neurotech

Responsibilities

  • Designing EEG acquisition hardware — electrode arrays (dry and semi-dry), analog front-end circuits (ADS1299 or similar), signal conditioning, and noise management.
  • Building wearable form factors — designing devices people forget they're wearing. Think baseball caps with hidden dry electrodes, behind-the-ear rigs, or earbuds with neural sensing. Rapid iteration with 3D printing, flexible PCBs, and off-the-shelf components.
  • Writing firmware and streaming infrastructure — embedded code that captures synchronized, timestamped EEG data and streams it reliably to our software stack. Timing precision matters — we're pairing brain signals with screen content, keystrokes, and user actions at millisecond resolution.
  • Benchmarking existing devices — systematically evaluating commercial hardware (Muse, Emotiv, OpenBCI) against our signal quality and comfort requirements. Understanding exactly where they fall short and why.
  • Integrating the data collection rig end-to-end — device → firmware → streaming → synchronization with action/context capture → storage.
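For a taste of the signal-chain side of these responsibilities: raw ADS1299 samples arrive as 24-bit two's-complement codes that must be scaled to physical units before any signal-quality analysis. A minimal sketch, assuming the chip's internal 4.5 V reference and a PGA gain of 24 (both configurable in practice):

```python
def ads1299_counts_to_uv(raw, gain=24, vref=4.5):
    """Convert a 24-bit two's-complement ADS1299 code to microvolts.

    Assumes the chip's internal 4.5 V reference and a PGA gain of 24;
    the per-count scale follows the Vref / (gain * (2**23 - 1)) convention.
    """
    if raw & 0x800000:        # sign bit of the 24-bit word is set
        raw -= 1 << 24        # sign-extend into a negative Python int
    scale_uv = vref / (gain * ((1 << 23) - 1)) * 1e6
    return raw * scale_uv

# Full-scale positive code 0x7FFFFF corresponds to Vref/gain = 187500 uV.
```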

Benefits

  • Competitive salary and meaningful equity
  • Platinum-tier health insurance
  • Equipment and prototyping budget — get the tools and components you need
  • Full design autonomy: own the problem, not just a task list
  • No bureaucracy, no review committees
  • Conference budget
  • Relocation and visa support (flexible on remote)
© 2024 Teal Labs, Inc