
Releases: GiorgosXou/NeuralNetworks

✨ NeuralNetworks

24 Feb 19:18


  • ⚙️ Improved:
    • Performance of USE_INT_QUANTIZATION, removed unnecessary repetitions of MULTIPLY_BY_INT_IF_QUANTIZATION 965c08d
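For context on what the `USE_INT_QUANTIZATION` optimization trades away, here is a minimal conceptual sketch of int-quantization: weights and inputs are mapped to `int8_t`, the dot product runs entirely in integer arithmetic, and a single rescale happens at the end instead of one per multiplication. The names (`quantize`, `dequantize`, `SCALE`) are illustrative assumptions, not the library's actual code.

```cpp
#include <cstdint>
#include <cmath>

// Map [-1, 1] floats onto the int8 range. Illustrative scale choice.
constexpr float SCALE = 127.0f;

int8_t quantize(float x)    { return static_cast<int8_t>(std::lround(x * SCALE)); }
float dequantize(int32_t q) { return static_cast<float>(q) / (SCALE * SCALE); }

// Dot product done entirely in integers; the float rescale is applied
// once at the end rather than inside the loop.
float quantized_dot(const int8_t* w, const int8_t* x, int n) {
    int32_t acc = 0;
    for (int i = 0; i < n; ++i)
        acc += static_cast<int32_t>(w[i]) * static_cast<int32_t>(x[i]);
    return dequantize(acc);
}
```

Hoisting the per-element rescale out of the loop is the same kind of "remove repeated work" change the commit above describes for `MULTIPLY_BY_INT_IF_QUANTIZATION`.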

Important

In a few days, I will be leaving home for a year to complete my mandatory military service. I don't have a choice; expect less frequent updates and slower responses from me. Update 26/2/2026: They rejected me [...]


Donate Discord Server

✨ NeuralNetworks

24 Feb 08:41



✨ NeuralNetworks

22 Feb 18:10



✨ NeuralNetworks

18 Dec 20:15


  • 🛠️ Fixed:
    • Removed file closing from save() to avoid inconsistency with external file management 0cc7830
  • ✨ Added:
    • REDUCE_RAM_RESET_STATES_BY_DELETION _4_OPTIMIZE via 0B1 38c780f
    • DISABLE_NN_SERIAL_SUPPORT-macro just in case 7ce4c22
  • ⚙️ Improved:
    • Misleading macro-logic that could potentially mess up hill-climb-alternatives in the future f377f87
    • Enabling the F()-macro on ESP32 now simply gets ignored instead of raising an #error d075085
  • ⚠️ Changed:
    • Macro As__No_Common_Serial_Support renamed to AS__NO_COMMON_SERIAL_SUPPORT 64782b4


✨ NeuralNetworks

20 Nov 08:10


Note

Special thanks to Vibhutesh Kumar Singh for using my library in his latest paper, "Memory-Efficient Neural Network Deployment Methodology for Low-RAM Microcontrollers Using Quantization and Layer-Wise Model Partitioning"; our excellent collaboration led to the actual support for int-quantization [...] Moreover, I'd like to thank kritonix.ai, a startup company, for giving me the motivation to continue developing this library, with the promise of bringing me on board once the company stabilizes. Last but not least, I'd like to thank Jiajun Guan for also using the library in his Master's Thesis, "Neural Network for Monitoring Infant Feeding Process in the SmartBottle Device".


GPLv3


🐜 NeuralNetworks

11 Apr 19:28


  • 🛠️ Fixed:
    • 1d140d2 REDUCE_RAM_DELETE_OUTPUTS improper NULL-ification of the last layer's outputs, resulting in potential undefined behavior.
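The bug class behind this fix is common in manual memory management: if a buffer is freed early (to reduce RAM) but the pointer is not reset, a destructor that frees it again invokes undefined behavior. A generic sketch of the safe pattern; the struct and member names are illustrative, not the library's code:

```cpp
#include <cstddef>

// Illustrative sketch: free early to save RAM, but NULL-ify the pointer so
// a later delete[] (e.g. in the destructor) becomes a well-defined no-op.
struct Layer {
    float* outputs = nullptr;

    void free_outputs() {
        delete[] outputs;
        outputs = nullptr;  // the step whose absence causes a double-free
    }

    ~Layer() { delete[] outputs; }  // safe: delete[] on nullptr is defined
};
```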

Note

Announcement. tl;dr: A few weeks ago, a company specializing in embedded AI offered me an opportunity to join their team. In return, the development of this library would become private under their ownership. What's your opinion?



🐜 NeuralNetworks

16 Mar 17:40


  • 🛠️ Fixed:
    • 0c295c8 (NOT)REDUCE_RAM_WEIGHTS_COMMON memory leak in the destructor, affecting SD load() & non-pretrained NNs
    • a4a566f MULTIPLE_BIASES_PER_LAYER potential memory leak in the destructor, affecting SD load() & non-pretrained NNs, due to undefined behavior
    • aa39e38 Potential undefined behavior in destructor or load() when REDUCE_RAM_DELETE_OUTPUTS is used with SUPPORTS_SD_FUNCTIONALITY
    • 86c9e59 Expected ';' error-typo, affecting FeedForward_Individual for EEPROM or FRAM when NO_BIAS && ACTIVATION__PER_LAYER
  • ✨ Added:
    • 38908de Support backpropagation for NNs that don't utilize hidden-layers. (SUPPORT_NO_HIDDEN_BACKPROP)
    • 93e7baa HILL_CLIMB_DYNAMIC_LEARNING_RATES-optimization to allow user-changes in learning-rate(s) during HillClimb
  • ⚙️ Improved:
    • 01f5244 Constructor, via delegation
    • 00a6e8f GELU via erf() improvements
    • e162b40 FeedForward_Individual when USE_INTERNAL_EEPROM or USE_EXTERNAL_FRAM
    • a4b0367 bc5bc47 Softmax implementation & solved a (rare but) potential issue
  • ⚠️ Changed:
    • 4e9d595 Added #error message when ESP32 is used with F_MACRO-optimization
    • 1080b87 Added #error message when ESP32 is used with USE_PROGMEM-optimization
    • 4f0cf1e Fixed embarrassing typo LeakyELU -> LeakyReLU (an appropriate #error gets thrown, so don't worry)


🐜 NeuralNetworks

18 Jan 07:47


  • 🛠️ Fixed:
    • a2707b8 Crucial Softmax issue when used with ACTIVATION__PER_LAYER and not ALL_ACTIVATION_FUNCTIONS
    • 7b52a56 Potential issue with CATEGORICAL_CROSS_ENTROPY & BINARY_CROSS_ENTROPY when you USE_64_BIT_DOUBLE with REDUCE_RAM_WEIGHTS_LVL2
  • ✨ Added:
    • c572333 Example for SD migration to v3.0.0
    • dab6f55 Support for NN execution (partially) via external FRAM
  • ⚙️ Improved:
    • a03b160 Unnecessary me->i_j++ logic
    • 26d2f20 Logic related to int and unsigned int
    • 2912a86 Unnecessary EEPROM-logic affecting sketch size
    • 6432955 Backpropagation algorithm, cutting flash memory usage by up to 200 bytes.
    • 9ac2b51 Prioritized "reduced-logic" over performance at FeedForward_Individual()
  • ⚠️ Changed:
    • d4ce5e0 Optimized SD load() & save()
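Since two entries above touch Softmax, here is a sketch of the standard numerically stable formulation: subtracting the maximum before exponentiating avoids overflow in exp() without changing the result. This is the textbook technique, not necessarily the library's implementation:

```cpp
#include <cmath>
#include <vector>

// Numerically stable Softmax: softmax(z) is invariant under shifting every
// z[i] by a constant, so subtracting max(z) keeps exp() from overflowing.
std::vector<float> softmax(const std::vector<float>& z) {
    float m = z[0];
    for (float v : z)
        if (v > m) m = v;            // find the maximum logit

    std::vector<float> out(z.size());
    float sum = 0.0f;
    for (std::size_t i = 0; i < z.size(); ++i) {
        out[i] = std::exp(z[i] - m); // shifted exponent: always <= 1
        sum += out[i];
    }
    for (float& v : out)
        v /= sum;                    // normalize to a probability distribution
    return out;
}
```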

Warning

The previous load() & save() implementations (although perfectly functional) had significant design flaws; the 3.0.0 release brings much-improved versions of them. Note the breaking change! I’ve included a clear migration guide to help you easily convert old NN-files to the new format via a simple sketch. Alternatively, I'm providing limited backwards compatibility through save_old() and load_old(). However, please note that these legacy methods won't receive further updates or improvements over time.



🕸️ NeuralNetworks

18 Oct 10:43


🕸️ NeuralNetworks

25 Jul 09:12
