Hi @ddavis-2015,
I’m reaching out after reviewing your recent CMSIS‑NN work (const‑tensor safety checks across FULLY_CONNECTED, SVDF, and UNIDIRECTIONAL_SEQUENCE_LSTM, closed via PR #3229). That thread was very helpful context for how the Prepare phase interacts with tensor mutability in optimized kernels.
Who we are:
I’m a Talent Acquisition Business Partner at Ambiq, supporting an Edge AI ML Engineer role focused on TinyML performance on Cortex‑M MCUs (INT8 kernels, CMSIS‑NN, and memory‑footprint tuning). We deploy models under tight RAM/Flash budgets and care a lot about per‑op latency and energy.
Why I’m contacting you:
Given your hands‑on contributions to CMSIS‑NN in TFLM, we’d value a short conversation to explore either:
- Your interest in a role centered on kernel acceleration and microcontroller inference, or
- Referrals to engineers in your network with similar experience.
A few technical touchpoints we’re working on (so you can gauge fit):
- INT8 Conv/Depthwise/FC hot paths in `tensorflow/lite/micro/kernels/cmsis_nn/*`
- Operator coverage parity + guardrails similar to the const‑tensor checks you landed
- Profiling + memory‑planning tradeoffs in TFLM for real boards (not just emulation)
If you’re open to a quick chat (15–20 minutes), I can share details on scope, hardware, and performance targets. Otherwise, referrals would be greatly appreciated.
Thanks for your time and for your continued work in TFLM. I’ll keep all follow‑ups here per project norms, but I’m happy to move to DM/email if you prefer. (The TFLM README indicates Issues are the primary contact path, so I’m starting here to stay consistent.)
— DeeDee Breaux
Talent Acquisition Business Partner at Ambiq
Here's a link to the role I'm recruiting for: