Releases: chimezie/mlx-tuning-fork

0.4.0

22 Feb 19:43

Major synchronization to changes in MLX

Added

  • New mlx_tuning_fork_generate CLI command
  • Improved documentation
  • Configuration for DoRA fine-tuning
  • Summary of how to perform mlx CLI fine-tuning using generated parameters
  • Composable configurations
  • axolotl-like configuration parameters for automatically determining values for the mlx_lm fine-tuning parameters
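The axolotl-like parameters described above let you state training intent in familiar terms (epochs, warmup, batch size) and have the corresponding mlx_lm fine-tuning arguments derived automatically. A purely illustrative sketch of what such a configuration might look like — the key names below are hypothetical and may not match mlx-tuning-fork's actual schema:

```yaml
# Hypothetical sketch of an axolotl-like configuration; key names are
# illustrative, not mlx-tuning-fork's actual schema.
model: mlx-community/Mistral-7B-v0.1-4bit
epochs: 2               # would be converted into an mlx_lm iteration count
learning_rate: 1e-5
warmup_proportion: 0.1  # fraction of training steps spent warming up
batch_size: 4
```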

Removed

  • Schedule configuration (use MLX's built-in schedulers instead)
  • Colorize option

0.3.5

18 Apr 15:08

  • More mlx synchronization: gradient checkpointing and updates to batching functionality
  • wandb_sweep.py -f/--prompt-format and --train-type arguments: support for completion-only sweeps
  • Fix reference to internal temperature argument
  • Add 'min_cos_lr' to the cosine-with-warmup schedule
  • Update the progress bar in the training callback
  • Revert to using iterate_batches from mlx_lm; fix use of the progress bar and the new dataset types with self-supervised training
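The 'min_cos_lr' option above adds a floor to the cosine decay, so the learning rate never anneals all the way to zero. A minimal sketch of such a schedule — the function name and signature here are illustrative, not mlx-tuning-fork's actual API:

```python
import math

def cosine_with_warmup(step, warmup_steps, total_steps, max_lr, min_lr=0.0):
    """Linear warmup to max_lr, then cosine decay down to min_lr.

    Illustrative sketch only; mlx-tuning-fork's real schedule may differ.
    """
    if step < warmup_steps:
        # Linear ramp from max_lr/warmup_steps up to max_lr.
        return max_lr * (step + 1) / warmup_steps
    # Fraction of the post-warmup phase completed, clamped to [0, 1].
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    progress = min(progress, 1.0)
    # Cosine interpolation from max_lr down to the min_lr floor.
    return min_lr + 0.5 * (max_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```

With min_lr=0.0 this reduces to the standard cosine-with-warmup schedule; a nonzero floor keeps late-training updates from vanishing.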

Full Changelog: https://github.com/chimezie/mlx-tuning-fork/commits/0.3.5