Releases: GiorgosXou/NeuralNetworks
✨ NeuralNetworks
- ⚙️ Improved:
  - Performance of `USE_INT_QUANTIZATION`, removed unnecessary repetitions of `MULTIPLY_BY_INT_IF_QUANTIZATION` 965c08d
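Removing a repeated per-element multiply from a hot loop is a classic micro-optimization; a hypothetical illustration of the idea (generic C++, not the library's actual code or macros) shows why hoisting the quantization scale out of an integer dot-product loop gives the same result with less work:

```cpp
#include <cassert>
#include <cstdint>

// Naive version: applies the dequantization scale on every step.
int32_t dot_naive(const int8_t* w, const int8_t* x, int n, int32_t scale) {
    int32_t acc = 0;
    for (int i = 0; i < n; i++) {
        acc += (int32_t)w[i] * (int32_t)x[i] * scale;  // repeated multiply
    }
    return acc;
}

// Improved version: accumulates in plain integers, scales once at the end.
// Mathematically identical, since the scale distributes over the sum.
int32_t dot_hoisted(const int8_t* w, const int8_t* x, int n, int32_t scale) {
    int32_t acc = 0;
    for (int i = 0; i < n; i++) {
        acc += (int32_t)w[i] * (int32_t)x[i];
    }
    return acc * scale;  // single multiply
}
```

On small MCUs every multiply in the inner loop counts, which is presumably why eliminating the repeated macro expansion helps here.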
Important
In a few days, I will be leaving home for a year to complete my mandatory military service. I don't have a choice; expect less frequent updates and slower responses from me. Update 26/2/2026: They rejected me [...]
✨ NeuralNetworks
- ⚠️ Changed:
  - Terminology related to SIMD, corrected to DSP-acceleration, thanks to @oursland #16 (reply in thread) e3bdead
✨ NeuralNetworks
- 🛠️ Fixed:
  - (Undocumented) `ENABLE_SINGLE_TIMESTEP_THRESHOLD` when used with `GRU` or `LSTM` layers 30b83cf
  - (Undocumented) `USE_PROGMEM` macro-logic affecting non-AVR `_PROGMEM_LOGIC` for both `USE_INT_QUANTIZATION` & `USE_64_BIT_DOUBLE` 0f77ddd
  - Examples acad3d6
- ✨ Added:
  - `RAM_EFFICIENT_HILL_CLIMB_WITHOUT_NEW` support for `USE_EXTERNAL_FRAM` or `USE_INTERNAL_EEPROM` 3e9ca07
✨ NeuralNetworks
- 🛠️ Fixed:
  - Removed file closing from `save()` to avoid inconsistency with external file management 0cc7830
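The fix above reflects a general ownership rule: a save routine that receives an externally managed file handle should not close it, because the caller may still want to write to it. A minimal sketch of the pattern in generic C++ (illustrative only, not the library's API):

```cpp
#include <ostream>
#include <sstream>

// The caller opens the stream, so the caller closes it. save_weights()
// only writes; it never ends the stream's lifetime, so the caller can
// keep appending data (e.g. its own metadata) to the same file.
void save_weights(std::ostream& out, const float* w, int n) {
    for (int i = 0; i < n; i++) out << w[i] << '\n';
    // deliberately no close() here: the stream is externally managed
}
```

Closing inside the routine would have silently invalidated the caller's handle, which is exactly the inconsistency the changelog entry describes.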
- ✨ Added:
  - `REDUCE_RAM_RESET_STATES_BY_DELETION_4_OPTIMIZE` via `0B1` 38c780f
  - `DISABLE_NN_SERIAL_SUPPORT` macro, just in case 7ce4c22
- ⚙️ Improved:
  - Misleading macro-logic that could potentially mess up hill-climb alternatives in the future f377f87
  - Examples 123a187
- ⚠️ Changed:
  - Macro `As__No_Common_Serial_Support` renamed to `AS__NO_COMMON_SERIAL_SUPPORT` 64782b4
- Macro
✨ NeuralNetworks
- 🛠️ Fixed:
  - (v3.1.7) `REDUCE_RAM_DELETE_OUTPUTS` improper `NULL`-ification of last layer's outputs, resulting in potential undefined behavior e92d3dc
  - A closing parenthesis related to `Softmax` somewhere...
  - Few compiler warnings 72ad515
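The `NULL`-ification fix above guards against a classic C++ pitfall: freeing memory and then deleting the same stale pointer again in the destructor is undefined behavior, while `delete[]` on a null pointer is a safe no-op. A generic sketch of the pattern (not the library's actual code):

```cpp
#include <cassert>

// Minimal owning type: if outputs is freed early (to reduce RAM), the
// pointer must be reset to nullptr so the destructor's delete[] becomes
// a harmless no-op instead of a double free (undefined behavior).
struct Layer {
    float* outputs = nullptr;

    void free_outputs() {
        delete[] outputs;
        outputs = nullptr;  // the crucial step this kind of fix restores
    }

    ~Layer() { delete[] outputs; }  // safe: delete[] nullptr does nothing
};
```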
- ✨ Added:
  - DENSE+RNN support a5b5467
  - `REDUCED_SKETCH_SIZE_DOT_PROD` in favor of unimplemented optimization-bit `0B00001` 0bd8a1f
  - New examples 399d0b5
- ⚙️ Improved:
  - Replaced unimplemented `0B001` optimization-bit in favor of `DISABLE_SIMD_SUPPORT` 7ee208f
  - Safety, by adding a macro-condition that checks whether the MCU supports 64-bit/8-byte `double` precision 9e7514e
  - `AlphaSELU` & `LamdaSELU` precision 8dbf5ab
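The safety check mentioned above matters because on many 8-bit AVRs `double` is only 4 bytes, so code assuming a true 64-bit double would silently lose precision. A hedged sketch of the general compile-time-guard pattern (the macro name `REQUIRE_64_BIT_DOUBLE` here is hypothetical, not the library's):

```cpp
#include <cassert>
#include <cfloat>

// If a build option requires true IEEE-754 binary64 doubles, verify it
// at compile time instead of failing subtly at runtime. DBL_MANT_DIG is
// 53 for a genuine 64-bit double.
#if defined(REQUIRE_64_BIT_DOUBLE) && (DBL_MANT_DIG < 53)
#error "This MCU's double is not 64-bit/8-byte; 64-bit double support is unavailable"
#endif

// Runtime equivalent, usable in a host-side test:
bool has_64bit_double() { return sizeof(double) == 8; }
```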
- ⚠️ Changed:
  - Library is now under the GPLv3 LICENSE a5e9b36
  - FS-support now gets enabled via `_OPTIMIZE_3` `0B00000010` fd20e89
Note
Special thanks to Vibhutesh Kumar Singh for using my library in his latest paper "Memory-Efficient Neural Network Deployment Methodology for Low-RAM Microcontrollers Using Quantization and Layer-Wise Model Partitioning"; our excellent collaboration led to the actual support for int-quantization [...] Moreover, I'd like to thank kritonix.ai - a startup company - for giving me the motivation to continue developing this library, with the promise of bringing me on board once the company stabilizes. Last but not least, I'd like to thank Jiajun Guan for also using the library in his Master's Thesis "Neural Network for Monitoring Infant Feeding Process in the SmartBottle Device".
🐜 NeuralNetworks
- 🛠️ Fixed:
  - 1d140d2 `REDUCE_RAM_DELETE_OUTPUTS` improper `NULL`-ification of last layer's outputs, resulting in potential undefined behavior
Note
Announcement. tl;dr: A few weeks ago, a company specializing in embedded AI offered me an opportunity to join their team. In return, the development of this library would become private under their ownership. What's your opinion?
🐜 NeuralNetworks
- 🛠️ Fixed:
  - 0c295c8 (NOT) `REDUCE_RAM_WEIGHTS_COMMON` memory-leak in the destructor, affecting SD `load()` & not-pretrained NNs
  - a4a566f `MULTIPLE_BIASES_PER_LAYER` potential memory-leak in the destructor, affecting SD `load()` & not-pretrained NNs, due to undefined behavior
  - aa39e38 Potential undefined behavior in destructor or `load()` when `REDUCE_RAM_DELETE_OUTPUTS` is used with `SUPPORTS_SD_FUNCTIONALITY`
  - 86c9e59 Expected ';' error-typo, affecting `FeedForward_Individual` for EEPROM or FRAM when `NO_BIAS` && `ACTIVATION__PER_LAYER`
- ✨ Added:
  - 38908de Support backpropagation for NNs that don't utilize hidden layers (`SUPPORT_NO_HIDDEN_BACKPROP`)
  - 93e7baa `HILL_CLIMB_DYNAMIC_LEARNING_RATES` optimization to allow user changes in learning rate(s) during `HillClimb`
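With no hidden layers, backpropagation degenerates to the classic delta rule: the output error is applied directly to the input weights, with no deltas to propagate backwards. A hypothetical single-output illustration (sigmoid activation, squared error; generic C++, not the library's implementation):

```cpp
#include <cassert>
#include <cmath>

// One layer in, one neuron out: the no-hidden-layer case.
struct TinyNet {
    double w[2] = {0.0, 0.0};
    double b = 0.0;

    double forward(const double* x) const {
        double z = w[0]*x[0] + w[1]*x[1] + b;
        return 1.0 / (1.0 + std::exp(-z));  // sigmoid
    }

    // Delta rule: for E = 0.5*(y-t)^2 with sigmoid output,
    // dE/dz = (y - t) * y * (1 - y); weights step against the gradient.
    void train(const double* x, double target, double lr) {
        double y = forward(x);
        double delta = (y - target) * y * (1.0 - y);
        w[0] -= lr * delta * x[0];
        w[1] -= lr * delta * x[1];
        b    -= lr * delta;
    }
};
```

Such a network can only learn linearly separable functions (e.g. OR, but not XOR), which is presumably why this case needed its own dedicated support path.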
- ⚙️ Improved:
  - 738491f `preLgamma` logic
  - 01f5244 Constructor, via delegation
  - 00a6e8f `GELU` via `erf()` improvements
  - e162b40 `FeedForward_Individual` when `USE_INTERNAL_EEPROM` or `USE_EXTERNAL_FRAM`
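For reference, the exact GELU that an `erf()`-based implementation computes is GELU(x) = 0.5 · x · (1 + erf(x/√2)), i.e. x scaled by the standard-normal CDF. A sketch using the standard definition (assumed here; not necessarily the library's exact code):

```cpp
#include <cassert>
#include <cmath>

// Exact GELU via the Gauss error function: x * Phi(x), where Phi is the
// standard-normal CDF expressed through erf().
double gelu(double x) {
    return 0.5 * x * (1.0 + std::erf(x / std::sqrt(2.0)));
}
```

The `erf()` form is exact, unlike the common tanh-based approximation, at the cost of a somewhat heavier math call on small MCUs.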
- ⚠️ Changed:
  - 4e9d595 Added `#error` message when `ESP32` is used with `F_MACRO` optimization
  - 1080b87 Added `#error` message when `ESP32` is used with `USE_PROGMEM` optimization
  - 4f0cf1e Fixed embarrassing typo `LeakyELU` -> `LeakyReLU` (appropriate `#error` gets thrown, so don't worry)
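The `#error` guards above follow the standard preprocessor pattern of rejecting incompatible option combinations at compile time, so users get a clear message instead of obscure runtime faults. A generic sketch (the message text is illustrative, not the library's actual wording):

```cpp
// Reject an unsupported combination of build options at compile time.
#if defined(ESP32) && defined(USE_PROGMEM)
#error "USE_PROGMEM optimization is not supported on ESP32"
#endif

// Compiles normally whenever the bad combination is absent:
int guard_compiled() { return 1; }
```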
🐜 NeuralNetworks
- 🛠️ Fixed:
  - a2707b8 Crucial `Softmax` issue when used with `ACTIVATION__PER_LAYER` and not `ALL_ACTIVATION_FUNCTIONS`
  - 7b52a56 Potential issue with `CATEGORICAL_CROSS_ENTROPY` & `BINARY_CROSS_ENTROPY` when you `USE_64_BIT_DOUBLE` with `REDUCE_RAM_WEIGHTS_LVL2`
- ✨ Added:
  - 64dcb9d FRAM examples
  - c572333 Example for SD migration to v3.0.0
  - dab6f55 Support for (partial) NN execution via external FRAM
- ⚙️ Improved:
  - a03b160 Unnecessary `me->i_j++` logic
  - 26d2f20 Logic related to `int` and `unsigned int`
  - 2912a86 Unnecessary EEPROM-logic affecting sketch size
  - 6432955 Backpropagation algorithm, cutting flash memory usage by up to 200 bytes
  - 9ac2b51 Prioritized "reduced logic" over performance in `FeedForward_Individual()`
- ⚠️ Changed:
  - d4ce5e0 Optimized SD `load()` & `save()`
Warning
The previous `load()` & `save()` implementations (although perfectly working) had significant design flaws; the 3.0.0 release brings much-improved versions of them. Note the breaking change! I've included a clear migration guide to help you easily convert old NN-files to the new format via just a simple sketch. Alternatively, I'm providing limited backwards compatibility through `save_old()` and `load_old()`. However, please note that these legacy methods won't receive further updates or improvements over time.