Releases: NPC-Worldwide/npcpy
v1.4.17
08 Apr 01:14
Default web search to Startpage, cascading to SearXNG and then ddgs on failure
Activity logging tables (activity_log, autocomplete_suggestions, autocomplete_training)
API endpoints for activity/autocomplete logging and training data export
Fix memory scope query to not require all filters
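The search cascade above can be sketched as a simple fallback chain; the `cascade_search` helper and the stub providers are illustrative stand-ins, not npcpy's actual API:

```python
def cascade_search(query, providers):
    """Try each provider in order; fall back on an error or empty results."""
    errors = []
    for name, fn in providers:
        try:
            results = fn(query)
            if results:  # empty results also trigger the next provider
                return name, results
        except Exception as exc:
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")

# Stub providers standing in for startpage / searxng / ddgs:
def _startpage(q):
    raise ConnectionError("rate limited")

def _searxng(q):
    return []  # reachable, but no hits

def _ddgs(q):
    return [{"title": "npcpy", "url": "https://example.com"}]

provider_used, hits = cascade_search(
    "npcpy",
    [("startpage", _startpage), ("searxng", _searxng), ("ddgs", _ddgs)],
)
```

With the first provider erroring and the second returning nothing, the chain lands on the last one.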
v1.4.16
06 Apr 21:55
fix: flush SSE events immediately for real-time streaming; prompt chat before stopping in the agentic loop
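Immediate flushing matters for Server-Sent Events because a frame is only dispatched once its terminating blank line is written. A minimal sketch of per-token SSE framing (the `sse_format` and `stream_chat` helpers are hypothetical, not npcpy's serve.py code):

```python
import json

def sse_format(event, data):
    """Serialize one SSE frame; the trailing blank line is what makes
    the client dispatch the event immediately."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

def stream_chat(tokens):
    """Yield one complete SSE frame per token instead of batching,
    so the client sees output in real time."""
    for tok in tokens:
        yield sse_format("message", {"token": tok})
    yield sse_format("done", {})

frames = list(stream_chat(["Hel", "lo"]))
```

In a real server each yielded frame would be written to the response and flushed before the next token is generated.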
v1.4.15
04 Apr 13:10
v1.4.14
04 Apr 12:52
Add Sememolution population-based KG evolution module (kg_population.py) with GeneticEvolver integration, Poisson-sampled search traversal, per-individual graph state, crossover with graph merging, and LLM-judged response ranking
Add memory extraction pipeline and sememolution documentation to knowledge-graphs guide
Add MLX Apple Silicon documentation to fine-tuning guide
Add KG, memory, sememolution, and fine-tuning examples to README
Remove dead shell=True subprocess fallback in _tool_web_search (closes #207)
Fix _tool_file_search to use list-form subprocess without shell=True
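The point of the list-form subprocess change is that argv elements are passed to the child literally, so user input cannot inject shell metacharacters. A minimal sketch (the `run_tool` helper is hypothetical, not the actual _tool_file_search):

```python
import subprocess
import sys

def run_tool(argv):
    """Run a tool with an argv list -- no shell=True, so arguments
    containing spaces, semicolons, or backticks are plain data."""
    proc = subprocess.run(argv, capture_output=True, text=True, timeout=30)
    return proc.returncode, proc.stdout

# A malicious-looking argument is never interpreted by a shell:
code, out = run_tool(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", "foo; rm -rf /"]
)
```

With `shell=True` and string interpolation, that last argument could have executed `rm -rf /`; here it is just printed back.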
v1.4.13
03 Apr 04:03
Pass user generation params (temperature, top_p, top_k, max_tokens) to LLM in both chat and tool_agent modes
Make serve.py robust for Windows — optional imports for redis/flask_sse/mcp, better error handling, NPCSH_BASE env var
Fix MCP server engine step rendering — action/args from _raw_steps now template-rendered with tool call arguments
Add Windows CI tests for serve.py imports, settings round-trip, and server startup
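Forwarding user generation parameters typically means merging user overrides over defaults while filtering out unset or unknown keys. A sketch under assumed defaults (the `build_gen_kwargs` helper and the default values are illustrative, not npcpy's actual code):

```python
# Hypothetical default sampling parameters:
DEFAULTS = {"temperature": 0.7, "top_p": 1.0, "top_k": 50, "max_tokens": 1024}

def build_gen_kwargs(user_params):
    """Merge user overrides over defaults, dropping unknown keys and
    None values so they don't clobber provider defaults."""
    kwargs = dict(DEFAULTS)
    for key, value in (user_params or {}).items():
        if key in DEFAULTS and value is not None:
            kwargs[key] = value
    return kwargs

params = build_gen_kwargs({"temperature": 0.2, "max_tokens": None, "seed": 1})
```

The same merged dict can then be passed to the LLM call in both chat and tool_agent modes, so the two paths cannot drift.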
v1.4.12
02 Apr 21:50
Device routing in the fine-tuning modules (sft, rl, usft): device='mlx'|'cpu'|'cuda'
MLX LoRA training via the mlx-lm Python API on Apple Silicon
HF model name → mlx-community resolution
Backwards compatible; default device='cpu'
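The routing above can be sketched as a small dispatch on the device string plus a name-mapping convention; both functions and the mlx-community naming rule are assumptions for illustration, not npcpy's actual resolution logic:

```python
def resolve_backend(device="cpu"):
    """Route fine-tuning to a backend by device string: 'mlx' selects
    the mlx-lm path on Apple Silicon, 'cpu'/'cuda' the PyTorch path."""
    if device == "mlx":
        return "mlx-lm"
    if device in ("cpu", "cuda"):
        return "pytorch"
    raise ValueError(f"unknown device: {device!r}")

def resolve_mlx_model(hf_name):
    """Map an HF repo name to an mlx-community mirror.

    Assumes the mirror keeps the base model name; real resolution
    may need a lookup table."""
    base = hf_name.split("/")[-1]
    return f"mlx-community/{base}"

backend = resolve_backend()  # default 'cpu' keeps old behavior
mlx_name = resolve_mlx_model("meta-llama/Llama-3.2-1B")
```

Defaulting to `'cpu'` is what keeps existing call sites working unchanged.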
v1.4.11
28 Mar 22:45
stream_events plumbing from jinx execution through generator chain
v1.4.10
28 Mar 19:47
Inline generator in check_llm_command, no separate function
create_jinx_stream takes (npc, command) directly, no StreamConfig
Sub-delegation events via shared_context['sub_events']
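A shared mutable context is one way a sub-agent can surface progress to the outer stream without extra plumbing. A minimal sketch, assuming a dict-shaped context and a `sub_events` list (the `delegate` helper is hypothetical):

```python
def delegate(shared_context, sub_task):
    """Append sub-delegation progress events into
    shared_context['sub_events'] so the outer generator can drain
    and forward them to the client."""
    events = shared_context.setdefault("sub_events", [])
    events.append({"type": "delegation_start", "task": sub_task})
    result = sub_task.upper()  # stand-in for the sub-agent's real work
    events.append({"type": "delegation_result", "result": result})
    return result

ctx = {}
out = delegate(ctx, "summarize")
```

Because the outer loop and the sub-agent share `ctx`, no callback registration or queue is needed to relay the events.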
v1.4.9
28 Mar 18:23
Generator-based streaming for check_llm_command (stream=True yields events)
No threads/queues — clean generator protocol
Chat streams token by token, tools emit tool_start/tool_result events
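The generator protocol described above can be sketched as a single generator that interleaves tool lifecycle events with chat tokens; the event shapes and the `run_command_stream` helper are illustrative assumptions, not npcpy's actual check_llm_command:

```python
def run_command_stream(tokens, tool_call=None):
    """One generator yields everything: tool_start / tool_result
    events for tool use, then per-token chat events. No threads,
    no queues -- the consumer just iterates."""
    if tool_call is not None:
        yield {"type": "tool_start", "tool": tool_call["name"]}
        result = tool_call["fn"](**tool_call["args"])
        yield {"type": "tool_result", "tool": tool_call["name"],
               "result": result}
    for tok in tokens:
        yield {"type": "token", "text": tok}

events = list(run_command_stream(
    ["4"],
    tool_call={"name": "add", "fn": lambda a, b: a + b,
               "args": {"a": 2, "b": 2}},
))
```

Because everything flows through one generator, backpressure is automatic: nothing runs until the consumer asks for the next event.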
v1.4.8
28 Mar 17:19
Threaded check_llm_command in create_jinx_stream with keepalive SSE events
Prevents SSE timeout during long delegation
Event queue for jinxes to push real-time progress
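The threaded design above (later replaced by the pure-generator approach in v1.4.9) pairs a worker thread with a queue, emitting a keepalive whenever the worker is silent for too long. A sketch with hypothetical names and an exaggeratedly short interval:

```python
import queue
import threading
import time

def stream_with_keepalive(work, interval=0.05):
    """Run `work(emit)` in a thread; yield its events, and emit a
    keepalive whenever no event arrives within `interval` seconds,
    so the SSE connection never idles long enough to time out."""
    q = queue.Queue()
    _DONE = object()

    def runner():
        try:
            work(q.put)
        finally:
            q.put(_DONE)

    threading.Thread(target=runner, daemon=True).start()
    while True:
        try:
            item = q.get(timeout=interval)
        except queue.Empty:
            yield {"type": "keepalive"}
            continue
        if item is _DONE:
            return
        yield item

def slow_work(emit):
    time.sleep(0.12)  # stands in for a long delegation step
    emit({"type": "tool_result", "result": "ok"})

events = list(stream_with_keepalive(slow_work))
```

The queue doubles as the channel jinxes push real-time progress into; the keepalives are filler frames the client can simply ignore.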