I study machine cognition through adaptive inference, treating neural reasoning as a governed flow through latent geometry that can be monitored and regulated on operational timescales. My research focuses on equipping foundation models with zero-overhead, inference-time control mechanisms that assess the local geometric stability of ongoing reasoning and adaptively modulate computational depth under severe distributional shift. Integrating principles from nonlinear dynamics, robust control, and systems neuroscience, I design architectures that intrinsically respect structural invariants and maintain systemic homeostasis. The goal of this work is to engineer metabolically regulated, event-critical cognitive systems that can be deployed reliably in high-stakes operations where state transitions are irreversible and standard search heuristics need intrinsic structural safeguards.
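To make the core idea concrete, below is a minimal sketch of an inference-time stability monitor that adaptively modulates computational depth. The module names, the contraction-ratio heuristic, and all thresholds are illustrative assumptions for this sketch, not a description of a specific system: it simply iterates a shared residual block and halts once a Lyapunov-style proxy (the ratio of successive residual-update norms) indicates the latent trajectory is contracting.

```python
# Sketch only: adaptive-depth inference gated by a simple stability proxy.
# Names, thresholds, and the contraction heuristic are illustrative assumptions.

import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Stand-in for one reasoning step (e.g., a transformer layer)."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.net(h)


def contraction_ratio(prev_update: torch.Tensor, curr_update: torch.Tensor) -> float:
    """Lyapunov-style proxy: ratio of successive residual-update norms.
    Values < 1 suggest the latent trajectory is locally contracting (stable);
    values >= 1 suggest divergence and a need for more computation."""
    eps = 1e-8
    return (curr_update.norm() / (prev_update.norm() + eps)).item()


@torch.no_grad()
def adaptive_depth_inference(
    h: torch.Tensor,
    block: ResidualBlock,
    max_steps: int = 32,
    stable_threshold: float = 0.5,
    patience: int = 3,
):
    """Iterate a shared block, halting once the trajectory has been
    contracting for `patience` consecutive steps, or at `max_steps`."""
    prev_update = None
    stable_streak = 0
    for step in range(1, max_steps + 1):
        h_next = block(h)
        update = h_next - h
        if prev_update is not None:
            rho = contraction_ratio(prev_update, update)
            stable_streak = stable_streak + 1 if rho < stable_threshold else 0
            if stable_streak >= patience:
                return h_next, step  # converged: stop spending compute
        prev_update = update
        h = h_next
    return h, max_steps  # budget exhausted: return the best available state


if __name__ == "__main__":
    torch.manual_seed(0)
    block = ResidualBlock(dim=64)
    h0 = torch.randn(1, 64)
    h_final, steps_used = adaptive_depth_inference(h0, block)
    print(f"halted after {steps_used} steps")
```

In a deployed controller the contraction ratio would be replaced by a richer geometric diagnostic over the latent-state trajectory, but the closed-loop structure, monitor, decide, then allocate or withhold further depth, is the same.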
Machine Learning: Adaptive and inference-time scaling, invariant representation learning, foundation model internals, structured multi-process coordination.
Mathematics: Nonlinear ODEs and discrete-time dynamical systems, Lyapunov stability theory, geometric analysis of latent-state trajectories, constraint projections.
Systems Engineering: Closed-loop inference controllers, trajectory-level logging and stability benchmarking, diagnostic instrumentation, automated simulation pipelines.
Software and Tooling: PyTorch, NumPy, SciPy, Hugging Face ecosystem, FastAPI, Jupyter, Linux, Bash.