Hi @gonnet,
I’m reaching out after reviewing your recent contributions to TensorFlow Lite Micro, specifically your work on DECODE operator enablement in the October 2025 CI activity (PR #3235). That work was a helpful signal of how TFLM is expanding operator coverage and improving parsing and graph‑construction reliability.
Who I am:
My name is DeeDee Breaux, and I’m a Talent Acquisition Business Partner at Ambiq, a semiconductor company focused on near‑zero‑power compute and deployment of ML workloads on microcontrollers. We’re currently hiring for an Edge AI ML Engineer role that is deeply aligned with the areas you’ve contributed to.
Role link: https://job-boards.greenhouse.io/ambiqmicroinc/jobs/4114732009
Why I’m contacting you:
Your work on operator enablement and integration suggests strong familiarity with TFLM’s kernel registration, op plumbing, and edge‑case handling—all highly relevant to the work Ambiq engineers do when optimizing models for ultra‑constrained hardware.
I’d value a brief conversation to explore:
- Whether the role might be of interest to you, or
- Whether you might be open to recommending others in the TFLM community with similar operator‑level, kernel‑level, or model‑loader expertise.
Some of the technical areas our team focuses on:
- Operator performance on Cortex‑M MCUs (INT8 fast‑paths, CMSIS‑NN integrations)
- Memory planner and interpreter behavior under extremely tight RAM/Flash budgets
- Expanding or optimizing operator support in TFLM pipelines (similar to your DECODE work)
I’m starting the conversation here because TFLM’s documentation indicates that GitHub Issues are the correct entry point for contacting maintainers and contributors.
If you’d be open to a brief discussion, I’d be glad to share more details. Referrals are also warmly appreciated.
Thank you again for your contributions to TFLM.
— DeeDee Breaux
Talent Acquisition Business Partner, Ambiq