All notable changes to this project will be documented in this file.
This project adheres to Semantic Versioning.
> **Note:** We have updated our changelog format! Changes related to the Colang language and runtime have moved to the CHANGELOG-Colang file.
- (llm) Propagate model and base URL in LLMCallException; improve error handling (#1502)
- (content_safety) Add support to auto select multilingual refusal bot messages (#1530)
- (library) Adding GLiNER for PII detection (open alternative to PrivateAI) (#1545)
- (benchmark) Implement Mock LLM streaming (#1564)
- (library) Add reasoning guardrail connector (#1565)
- (models) Surface relevant exception when initializing langchain model (#1516)
- (llm) Filter temperature parameter for OpenAI reasoning models (#1526)
- (bot-thinking) Fix reasoning trace leak across LLM calls (#1582)
- (providers) Handle langchain 1.2.1 dict type for _SUPPORTED_PROVIDERS (#1589)
- (streaming) [breaking] Drop streaming field from config (#1538)
- (test) Reduce default pytest log level from DEBUG to WARNING (#1523)
- (docker) Upgrade to Python 3.12-slim base image (#1522)
- Run pre-commits to update license date for 2026 (#1562)
- Move Benchmark code to top-level (#1559)
- Update repo to https://github.com/NVIDIA-NeMo/Guardrails (#1594)
- Support langchain v1 (#1472)
- (llm) Add LangChain 1.x content blocks support for reasoning and tool calls (#1496)
- (benchmark) Add Procfile to run Guardrails and mock LLMs (#1490)
- (benchmark) Add AIPerf run script (#1501)
- (llm) Add async streaming support to ChatNVIDIA provider patch (#1504)
- Ensure `stream_async` background task completes before exit (#1508)
- (cli) Fix TypeError in v2.x chat due to incorrect State/dict conversion (#1509)
- (llmrails) Skip output rails when dialog is disabled and no bot_message is provided (#1518)
- (llm) Ensure that the stop token is not ignored if llm_params is None (#1529)
- (llm) Remove deprecated llm_params module (#1475)
- (llm) Remove custom HTTP headers patch now in langchain-nvidia-ai-endpoints v0.3.19 (#1503)
- (bot-thinking) Implement BotThinking events to process reasoning traces in Guardrails (#1431), (#1432), (#1434).
- (embeddings) Add Azure OpenAI embedding provider (#702).
- (embeddings) Add Cohere embedding integration (#1305).
- (embeddings) Add Google embedding integration (#1304).
- (library) Add Cisco AI Defense integration (#1433).
- (cache) Add in-memory LFU caches for content-safety, topic-control, and jailbreak detection models (#1436), (#1456), (#1457), (#1458).
- (llm) Add automatic provider inference for LangChain LLMs (#1460).
- (llm) Add custom HTTP headers support to ChatNVIDIA provider (#1461).
- (config) Validate content safety and topic control configs at creation time (#1450).
- (jailbreak) Capitalization of `Snowflake` in use of `snowflake-arctic-embed-m-long` name (#1464).
- (runtime) Ensure stop flag is set for policy violations in parallel rails (#1467).
- (llm) [breaking] Extract reasoning traces to separate field instead of prepending (#1468).
- (streaming) [breaking] Raise error when stream_async used with disabled output rails streaming (#1470).
- (llm) Add fallback extraction for reasoning traces from tags (#1474).
- (runtime) Set stop flag for exception-based rails in parallel mode (#1487).
- [breaking] Replace reasoning trace extraction with LangChain additional_kwargs (#1427)
- (examples) Add Nemoguard in-memory cache configuration example (#1459), (#1480).
- Add guide for bot reasoning guardrails (#1479).
- Update LLM reasoning traces configuration (#1483).
- Add mock embedding provider tests (#1446)
- (cli) Add comprehensive CLI test suite and reorganize files (#1339)
- Skip FastEmbed tests when not in live mode (#1462)
- Fix flaky stats logging interval timing test (#1463)
- Restore test that was skipped due to Colang 2.0 serialization issue (#1449)
- Resolve PyPI publish workflow trigger and reliability issues (#1443)
- Fix sparse checkout for publish pypi workflow (#1444)
- Drop Python 3.9 support ahead of October 2025 EOL (#1426)
- (types) Add type-annotations and pre-commit checks for tracing (#1388), logging (#1395), kb (#1385), cli (#1380), embeddings (#1383), server (#1397), and llm (#1394) code.
- Update insert-license pre-commit hooks to use current year (#1452).
- (library) Remove unused vllm requirements.txt files (#1466).
- (tool-calling) Add tool call passthrough support in LLMRails (#1364)
- (runnable-rails) Complete rewrite of RunnableRails with full LangChain Runnable protocol support (#1366, #1369, #1370, #1405)
- (tool-rails) Add support for tool output rails and validation (#1382)
- (tool-rails) Implement tool input rails for tool message validation and processing (#1386)
- (library) Add Trend Micro Vision One AI Application Security community integration (#1355)
- (llm) Pass llm params directly (#1387)
- (jailbreak) Handle URL joining with/without trailing slashes (#1346)
- (logging) Handle missing id and task in verbose logs (#1343)
- (library) Fix import package declaration to new cleanlab-tlm name (#1401)
- (logging) Add "Tool" type to message sender labeling (#1412)
- (logging) Correct message type formatting in logs (#1416)
- (llm) Remove LLMs isolation for actions (#1408)
- (examples) Add NeMoGuard safety rails config example for Colang 1.0 (#1365)
- Add hardware reqs (#1411)
- Add tools integration guide (#1414)
- (langgraph) Add integration guide for LangGraph (#1422)
- (langchain) Update with full support and add tool calling guide … (#1419)
- (langgraph) Clarify tool examples and replace calculate_math with multiply (#1439)
- (docs) Update v0.16.0 release date in changelog (#1377)
- (docs) Add link to demo.py script in Getting-Started section (#1399)
- (types) Type-clean rails (86 errors) (#1396)
- (jailbreak-detection) Update transformers and torch (#1417)
- (types) Type-clean /actions (189 errors) (#1361)
- (docs) Update repository owner (#1425)
- (llmrails) Support method chaining by returning self from LLMRails.register_* methods (#1296)
- Add Pangea AI Guard community integration (#1300)
- (llmrails) Isolate LLMs only for configured actions (#1342)
- Enhance tracing system with OpenTelemetry semantic conventions (#1331)
- Add GuardrailsAI community integration (#1298)
- (models) Suppress langchain_nvidia_ai_endpoints warnings (#1371)
- (tracing) Respect the user-provided log options regardless of tracing configuration
- (config) Ensure adding RailsConfig objects handles None values (#1328)
- (config) Add handling for config directory with `.yml`/`.yaml` extension (#1293)
- (colang) Apply guardrails transformations to LLM inputs and bot outputs (#1297)
- (topic_safety) Handle InternalEvent objects in topic safety actions for Colang 2.0 (#1335)
- (prompts) Prevent IndexError when LLM provided via constructor with empty models config (#1334)
- (llmrails) Handle LLM models without model_kwargs field in isolation (#1336)
- (llmrails) Move LLM isolation setup to after KB initialization (#1348)
- (llm) Move get_action_details_from_flow_id from llmrails.py to utils.py (#1341)
- Integrate with multilingual NIM (#1354)
- (tracing) Update tracing notebooks with VDR feedback (#1376)
- Add kv cache reuse documentation (#1330)
- (examples) Add Colang 2.0 example for sensitive data detection (#1301)
- Add extra slash to jailbreak detect `nim_base_url` (#1345)
- Add tracing notebook (#1337)
- Jaeger tracing notebook (#1353)
- (examples) Add NeMoGuard rails config for colang 2 (#1289)
- (tracing) Add OpenTelemetry span format guide (#1350)
- Add GuardrailsAI integration user guide and example (#1357)
- (jailbreak) Add missing pytest.mark.asyncio decorators (#1352)
- (docs) Rename test_csl.py to csl.py (#1347)
- (tracing) [breaking] Update tracing to use otel api (#1269)
- (streaming) Implement parallel streaming output rails execution (#1263, #1324)
- (streaming) Support external async token generators (#1286)
- Support parallel rails execution (#1234, #1323)
- (streaming) Resolve word concatenation in streaming output rails (#1259)
- (streaming) Enable token usage tracking for streaming LLM calls (#1264, #1285)
- (tracing) Prevent mutation of user options when tracing is enabled (#1273)
- (rails) Prevent LLM parameter contamination in rails (#1306)
- Release notes 0.14.1 (#1272)
- Update guardrails-library.md to include Clavata as a third party API (#1294)
- (streaming) Add section on token usage tracking (#1282)
- Add parallel rail section and split config page (#1295)
- Show complete prompts.yml content in getting started tutorial (#1311)
- (tracing) Update and streamline tracing guide (#1307)
- (jailbreak) Add direct API key configuration support (#1260)
- (jailbreak) Lazy load jailbreak detection dependencies (#1223)
- (llmrails) Constructor LLM should not skip loading other config models (#1221, #1247, #1250, #1258)
- (content_safety) Replace try-except with iterable unpacking for policy violations (#1207)
- (jailbreak) Pin numpy==1.23.5 for scikit-learn compatibility (#1249)
- (output_parsers) Iterable unpacking compatibility in content safety parsers (#1242)
- More heading levels so RNs resolve links (#1228)
- Update docs version (#1219)
- Fix jailbreak detection build instructions (#1248)
- Change ABC bot link at docs (#1261)
- Fix async test failures in cache embeddings and buffer strategy tests (#1237)
- (content_safety) Add tests for content safety actions (#1240)
- Update pre-commit-hooks to v5.0.0 (#1238)
- Change topic following prompt to allow chitchat (#1097)
- Validate model name configuration (#1084)
- Add support for langchain partner and community chat models (#1085)
- Add fuzzy find provider capability to cli (#1088)
- Add code injection detection to guardrails library (#1091)
- Add clavata community integration (#1027)
- Implement validation to forbid dialog rails with reasoning traces (#1137)
- Load yara lazily to avoid action dispatcher error (#1162)
- Add support for system messages to RunnableRails (#1106)
- Add api_key_env_var to Model, pass in kwargs to langchain initializer (#1142)
- Add inline YARA rules support (#1164)
- [breaking] Add support for preserving and optionally applying guardrails to reasoning traces (#1145)
- Prevent reasoning traces from contaminating LLM prompt history (#1169)
- Add RailException support to injection detection and improve error handling (#1178)
- Add Nemotron model support with message-based prompts (#1199)
- Correct task name for self_check_facts (#1040)
- Error in LLMRails with tracing enabled (#1103)
- Self check output colang 1 flow (#1126)
- Use ValueError in TaskPrompt to resolve TypeError raised by Pydantic (#1132)
- Correct dialog rails activation logic (#1161)
- Allow reasoning traces when embeddings_only is True (#1170)
- Prevent explain_info overwrite during stream_async (#1194)
- Colang 2 issues in community integrations (#1140)
- Ensure proper asyncio task cleanup in test_streaming_handler.py (#1182)
- Reorganize HuggingFace provider structure (#1083)
- Remove support for deprecated nemollm engine (#1076)
- [breaking] Remove deprecated return_context argument (#1147)
- Rename `remove_thinking_traces` field to `remove_reasoning_traces` (#1176)
- Update deprecated field handling for remove_thinking_traces (#1196)
- Introduce END_OF_STREAM sentinel and update handling (#1185)
- Remove markup from code block (#1081)
- Replace img tag with Markdown images (#1087)
- Remove NeMo Service (nemollm) documentation (#1077)
- Update cleanlab integration description (#1080)
- Add providers fuzzy search cli command (#1089)
- Clarify purpose of model parameters field in configuration guide (#1181)
- Output rails are supported with streaming (#1007)
- Add mention of Nemotron (#1200)
- Fix output rail doc (#1159)
- Revise GS example in getting started doc (#1146)
- Possible update to injection detection (#1144)
- Dynamically set version using importlib.metadata (#1072)
- Add link to topic control config and prompts (#1098)
- Reorganize GitHub workflows for better test coverage (#1079)
- Add summary jobs for workflow branch protection (#1120)
- Add Adobe Analytics configuration (#1138)
- Fix and revert poetry lock to its stable state (#1133)
- Add Codecov integration to workflows (#1143)
- Add Python 3.12 and 3.13 test jobs to gitlab workflow (#1171)
- Identify OS packages to install in contribution guide (#1136)
- Remove Got It AI from ToC in 3rd party docs (#1213)
- Support models with reasoning traces (#996)
- Add SHA-256 hashing option (#988)
- Add Fiddler Guardrails integration (#964, #1043)
- Add generation metadata to streaming chunks (#1011)
- Improve alpha to beta bot migration (#878)
- Support multimodal input and output rails (#1033)
- Add support for NemoGuard JailbreakDetect NIM. (#1038)
- Set default start and end reasoning tokens (#1050)
- Improve output rails error handling for SSE format (#1058)
- Ensure parse_task_output is called after all llm_call invocations (#1047)
- Handle exceptions in generate_events to propagate errors in streaming (#1012)
- Ensure output rails streaming is enabled explicitly (#1045)
- Improve multimodal prompt length calculation for base64 images (#1053)
- Move startup and shutdown logic to lifespan in server (#999)
- Add multimodal rails documentation (#1061)
- Add content safety tutorial (#1042)
- Revise reasoning model info (#1062)
- Consider new GS experience (#1005)
- Restore deleted configuration files (#963)
- Add Python 3.12 support (#984)
- Support Output Rails Streaming (#966, #1003)
- Add unified output mapping for actions (#965)
- Add output rails support to activefence integration (#940)
- Add Prompt Security integration (#920)
- Add pii masking capability to PrivateAI integration (#901)
- Add embedding_params to BasicEmbeddingsIndex (#898)
- Add score threshold to AnalyzerEngine (#845)
- Fix dependency resolution issues in AlignScore Dockerfile (#1002, #982)
- Fix JailbreakDetect docker files (#981, #1001)
- Fix TypeError from attempting to unpack already-unpacked dictionary. (#959)
- Fix token stats usage in LLM call info. (#953)
- Handle unescaped quotes in generate_value using safe_eval (#946)
- Handle non-relative file paths (#897)
- Set workdir to models and specify entrypoint explicitly (#1001).
- Output streaming (#976)
- Fix typos with oauthtoken (#957)
- Fix broken link in prompt security (#978)
- Update advanced user guides per v0.11.1 doc release (#937)
- ContentSafety: Add ContentSafety NIM connector (#930) by @prasoonvarshney
- TopicControl: Add TopicControl NIM connector (#930) by @makeshn
- JailbreakDetect: Add jailbreak detection NIM connector (#930) by @erickgalinkin
- AutoAlign Integration: Add further enhancements and refactoring to AutoAlign integration (#867) by @KimiJL
- PrivateAI Integration: Fix Incomplete URL substring sanitization Error (#883) by @NJ-186
- NVIDIA Blueprint: Add Safeguarding AI Virtual Assistant NIM Blueprint NemoGuard NIMs (#932) by @abodhankar
- ActiveFence Integration: Fix flow definition in community docs (#890) by @noamlevy81
- Observability: Add observability support with support for different backends (#844) by @Pouyanpi
- Private AI Integration: Add Private AI Integration (#815) by @letmerecall
- Patronus Evaluate API Integration: Patronus Evaluate API Integration (#834) by @varjoshi
- railsignore: Add support for .railsignore file (#790) by @ajanitshimanga
- Sandboxed Environment in Jinja2: Add sandboxed environment in Jinja2 (#799) by @Pouyanpi
- Langchain 3 support: Upgrade LangChain to Version 0.3 (#784) by @Pouyanpi
- Python 3.8: Drop support for Python 3.8 (#803) by @Pouyanpi
- vllm: Bump vllm from 0.2.7 to 0.5.5 for llama_guard and patronusai (#836)
- Guardrails Library documentation: Fix a typo in guardrails library documentation (#793) by @vedantnaik19
- Contributing Guide: Fix incorrect folder name & pre-commit setup in CONTRIBUTING.md (#800)
- Contributing Guide: Add correct Python command version in documentation (#801) by @ravinder-tw
- retrieve chunk action: Fix presence of new line in retrieve chunk action (#809) by @Pouyanpi
- Standard Library import: Fix guardrails standard library import path in Colang 2.0 (#835) by @Pouyanpi
- AlignScore Dockerfile: Add nltk's punkt_tab in align_score Dockerfile (#841) by @yonromai
- Eval dependencies: Make pandas version constraint explicit for eval optional dependency (#847) by @Pouyanpi
- tests: Mock PromptSession to prevent console error (#851) by @Pouyanpi
- Streaming: Handle multiple output parsers in generation (#854) by @Pouyanpi
- User Guide: Update role from bot to assistant (#852) by @Pouyanpi
- Installation Guide: Update optional dependencies install (#853) by @Pouyanpi
- Documentation Restructuring: Restructure the docs and several style enhancements (#855) by @Pouyanpi
- Got It AI deprecation: Add deprecation notice for Got It AI integration (#857) by @mlmonk
- Colang 2.0-beta.4 patch
- content safety: Implement content safety module (#674) by @Pouyanpi
- migration tool: Enhance migration tool capabilities (#624) by @Pouyanpi
- Cleanlab Integration: Add Cleanlab's Trustworthiness Score (#572) by @AshishSardana
- Colang 2: LLM chat interface development (#709) by @schuellc-nvidia
- embeddings: Add relevant chunk support to Colang 2 (#708) by @Pouyanpi
- library: Migrate Cleanlab to Colang 2 and add exception handling (#714) by @Pouyanpi
- Colang debug library: Develop debugging tools for Colang (#560) by @schuellc-nvidia
- debug CLI: Extend debugging command-line interface (#717) by @schuellc-nvidia
- embeddings: Add support for embeddings only with search threshold (#733) by @Pouyanpi
- embeddings: Add embedding-only support to Colang 2 (#737) by @Pouyanpi
- embeddings: Add relevant chunks prompts (#745) by @Pouyanpi
- gcp moderation: Implement GCP-based moderation tools (#727) by @kauabh
- migration tool: Sample conversation syntax conversion (#764) by @Pouyanpi
- llmrails: Add serialization support for LLMRails (#627) by @Pouyanpi
- exceptions: Initial support for exception handling (#384) by @drazvan
- evaluation tooling: Develop new evaluation tools (#677) by @drazvan
- Eval UI: Add support for tags in the Evaluation UI (#731) by @drazvan
- guardrails library: Launch Colang 2.0 Guardrails Library (#689) by @drazvan
- configuration: Revert abc bot to Colang v1 and separate v2 configuration (#698) by @drazvan
- api: Update Pydantic validators (#688) by @Pouyanpi
- standard library: Refactor and migrate standard library components (#625) by @Pouyanpi
- Upgrade langchain-core and jinja2 dependencies (#766) by @Pouyanpi
- documentation: Fix broken links (#670) by @buvnswrn
- hallucination-check: Correct hallucination-check functionality (#679) by @Pouyanpi
- streaming: Fix NVIDIA AI endpoints streaming issues (#654) by @Pouyanpi
- hallucination-check: Resolve non-OpenAI hallucination check issue (#681) by @Pouyanpi
- import error: Fix Streamlit import error (#686) by @Pouyanpi
- prompt override: Fix override prompt self-check facts (#621) by @Pouyanpi
- output parser: Resolve deprecation warning in output parser (#691) by @Pouyanpi
- patch: Fix langchain_nvidia_ai_endpoints patch (#697) by @Pouyanpi
- runtime issues: Address Colang 2 runtime issues (#699) by @schuellc-nvidia
- send event: Change 'send event' to 'send' (#701) by @Pouyanpi
- output parser: Fix output parser validation (#704) by @Pouyanpi
- passthrough_fn: Pass config and kwargs to passthrough_fn runnable (#695) by @vpr1995
- rails exception: Fix rails exception migration (#705) by @Pouyanpi
- migration: Replace hyphens and apostrophes in migration (#725) by @Pouyanpi
- flow generation: Fix LLM flow continuation generation (#724) by @schuellc-nvidia
- server command: Fix CLI server command (#723) by @Pouyanpi
- embeddings filesystem: Fix cache embeddings filesystem (#722) by @Pouyanpi
- outgoing events: Process all outgoing events (#732) by @sklinglernv
- generate_flow: Fix a small bug in the generate_flow action for Colang 2 (#710) by @drazvan
- triggering flow id: Fix the detection of the triggering flow id (#728) by @drazvan
- LLM output: Fix multiline LLM output syntax error for dynamic flow generation (#748) by @radinshayanfar
- scene form: Fix the scene form and choice flows in the Colang 2 standard library (#741) by @sklinglernv
- Cleanlab: Update community documentation for Cleanlab integration (#713) by @Pouyanpi
- rails exception handling: Add notes for Rails exception handling in Colang 2.x (#744) by @Pouyanpi
- LLM per task: Document LLM per task functionality (#676) by @Pouyanpi
- relevant_chunks: Add the `relevant_chunks` to the GPT-3.5 general prompt template (#678) by @drazvan
- flow names: Ensure flow names don't start with keywords (#637) by @schuellc-nvidia
- #650 Fix gpt-3.5-turbo-instruct prompts #651.
- Colang version 2.0-beta.2
- #370 Add Got It AI's Truthchecking service for RAG applications by @mlmonk.
- #543 Integrating AutoAlign's guardrail library with NeMo Guardrails by @abhijitpal1247.
- #566 Autoalign factcheck examples by @abhijitpal1247.
- #518 Docs: add example config for using models with ollama by @vedantnaik19.
- #538 Support for `--default-config-id` in the server.
- #539 Support for `LLMCallException`.
- #548 Support for custom embedding models.
- #617 NVIDIA AI Endpoints embeddings.
- #462 Support for calling embedding models from langchain-nvidia-ai-endpoints.
- #622 Patronus Lynx Integration.
- #597 Make UUID generation predictable in debug-mode.
- #603 Improve chat cli logging.
- #551 Upgrade to Langchain 0.2.x by @nicoloboschi.
- #611 Change default templates.
- #545 NVIDIA API Catalog and NIM documentation update.
- #463 Do not store pip cache during docker build by @don-attilio.
- #629 Move community docs to separate folder.
- #647 Documentation updates.
- #648 Prompt improvements for Llama-3 models.
- #482 Update README.md by @curefatih.
- #530 Improve the test serialization test to make it more robust.
- #570 Add support for FacialGestureBotAction by @elisam0.
- #550 Fix issue #335 - make import errors visible.
- #547 Fix LLMParams bug and add unit tests (fixes #158).
- #537 Fix directory traversal bug.
- #536 Fix issue #304 NeMo Guardrails packaging.
- #539 Fix bug related to the flow abort logic in Colang 1.0 runtime.
- #612 Follow-up fixes for the default prompt change.
- #585 Fix Colang 2.0 state serialization issue.
- #486 Fix select model type and custom prompts task.py by @cyun9601.
- #487 Fix custom prompts configuration manual.md.
- #479 Fix static method and classmethod action decorators by @piotrm0.
- #544 Fix issue #216 bot utterance.
- #616 Various fixes.
- #623 Fix path traversal check.
- #461 Feature/ccl cleanup.
- #483 Fix dictionary expression evaluation bug.
- #467 Feature/colang doc related cleanups.
- #484 Enable parsing of `..."<NLD>"` expressions.
- #478 Fix #420 - evaluate not working with chat models.
- #453 Update documentation for NVIDIA API Catalog example.
- #382 Fix issue with `lowest_temperature` in self-check and hallucination rails.
- #454 Redo fix for #385.
- #442 Fix README typo by @dileepbapat.
- #402 Integrate Vertex AI Models into Guardrails by @aishwaryap.
- #403 Add support for NVIDIA AI Endpoints by @patriciapampanelli
- #396 Docs/examples nv ai foundation models.
- #438 Add research roadmap documentation.
- #389 Expose the `verbose` parameter through `RunnableRails` by @d-mariano.
- #415 Enable `print(...)` and `log(...)`.
- #389 Expose verbose arg in RunnableRails by @d-mariano.
- #414 Feature/colang march release.
- #416 Refactor and improve the verbose/debug mode.
- #418 Feature/colang flow context sharing.
- #425 Feature/colang meta decorator.
- #427 Feature/colang single flow activation.
- #426 Feature/colang 2.0 tutorial.
- #428 Feature/Standard library and examples.
- #431 Feature/colang various improvements.
- #433 Feature/Colang 2.0 improvements: generate_async support, stateful API.
- #412 Fix #411 - explain rails not working for chat models.
- #413 Typo fix: Comment in llm_flows.co by @habanoz.
- #420 Fix typo for hallucination message.
- #377 Add example for streaming from custom action.
- #380 Update installation guide for OpenAI usage.
- #401 Replace YAML import with new import statement in multi-modal example.
- #398 Colang parser fixes and improvements.
- #394 Fixes and improvements for Colang 2.0 runtime.
- #381 Fix typo by @serhatgktp.
- #379 Fix missing prompt in verbose mode for chat models.
- #400 Fix Authorization header showing up in logs for NeMo LLM.
- #292 Jailbreak heuristics by @erickgalinkin.
- #256 Support generation options.
- #307 Added support for multi-config api calls by @makeshn.
- #293 Adds configurable stop tokens by @zmackie.
- #334 Colang 2.0 - Preview by @schuellc.
- #208 Implement cache embeddings (resolves #200) by @Pouyanpi.
- #331 Huggingface pipeline streaming by @trebedea.
Documentation:
- #311 Update documentation to demonstrate the use of output rails when using a custom RAG by @niels-garve.
- #347 Add detailed logging docs by @erickgalinkin.
- #354 Input and output rails only guide by @trebedea.
- #359 Added user guide for jailbreak detection heuristics by @makeshn.
- #363 Add multi-config API call user guide.
- #297 Example configurations for using only the guardrails, without LLM generation.
- #309 Change the paper citation from arXiv to EMNLP 2023 by @manuelciosici.
- #319 Enable embeddings model caching.
- #267 Make embeddings computing async and add support for batching.
- #281 Follow symlinks when building knowledge base by @piotrm0.
- #280 Add more information to results of `retrieve_relevant_chunks` by @piotrm0.
- #332 Update docs for batch embedding computations.
- #244 Docs/edit getting started by @DougAtNvidia.
- #333 Follow-up to PR 244.
- #341 Updated 'fastembed' version to 0.2.2 by @NirantK.
- #286 Fixed #285 - using the same evaluation set given a random seed for topical rails by @trebedea.
- #336 Fix #320. Reuse the asyncio loop between sync calls.
- #337 Fix stats gathering in a parallel async setup.
- #342 Fixes OpenAI embeddings support.
- #346 Fix issues with KB embeddings cache, bot intent detection and config ids validator logic.
- #349 Fix multi-config bug, asyncio loop issue and cache folder for embeddings.
- #350 Fix the incorrect logging of an extra dialog rail.
- #358 Fix Openai embeddings async support.
- #362 Fix the issue with the server being pointed to a folder with a single config.
- #352 Fix a few issues related to jailbreak detection heuristics.
- #356 Redo followlinks PR in new code by @piotrm0.
- #288 Replace SentenceTransformers with FastEmbed.
- #254 Support for Llama Guard input and output content moderation.
- #253 Support for server-side threads.
- #235 Improved LangChain integration through `RunnableRails`.
- #190 Add example for using `generate_events_async` with streaming.
- Support for Python 3.11.
- #286 Fixed not having the same evaluation set given a random seed for topical rails.
- #239 Fixed logging issue where `verbose=true` flag did not trigger expected log output.
- #228 Fix docstrings for various functions.
- #242 Fix Azure LLM support.
- #225 Fix annoy import, to allow using without.
- #209 Fix user messages missing from prompt.
- #261 Fix small bug in `print_llm_calls_summary`.
- #252 Fixed duplicate loading for the default config.
- Fixed the dependencies pinning, allowing a wider range of dependencies versions.
- Fixed several security issues related to uncontrolled data used in path expression and information exposure through an exception.
- Support for `--version` flag in the CLI.
- Upgraded `langchain` to `0.0.352`.
- Upgraded `httpx` to `0.24.1`.
- Replaced deprecated `text-davinci-003` model with `gpt-3.5-turbo-instruct`.
- #191: Fix chat generation chunk issue.
- Support for explicit definition of input/output/retrieval rails.
- Support for custom tasks and their prompts.
- Support for fact-checking using AlignScore.
- Support for NeMo LLM Service as an LLM provider.
- Support for making a single LLM call for both the guardrails process and generating the response (by setting `rails.dialog.single_call.enabled` to `True`).
- Support for sensitive data detection guardrails using Presidio.
- Example using NeMo Guardrails with the LLaMa2-13B model.
- Dockerfile for building a Docker image.
- Support for prompting modes using `prompting_mode`.
- Support for TRT-LLM as an LLM provider.
- Support for streaming the LLM responses when no output rails are used.
- Integration of ActiveFence ActiveScore API as an input rail.
- Support for `--prefix` and `--auto-reload` in the guardrails server.
- Example authentication dialog flow.
- Example RAG using Pinecone.
- Support for loading a configuration from dictionary, i.e. `RailsConfig.from_content(config=...)`.
- Guidance on LLM support.
- Support for `LLMRails.explain()` (see the Getting Started guide for sample usage).
- Allow context data directly in the `/v1/chat/completion` using messages with the type `"role"`.
- Allow calling a subflow whose name is in a variable, e.g. `do $some_name`.
- Allow using actions which are not `async` functions.
- Disabled pretty exceptions in CLI.
- Upgraded dependencies.
- Updated the Getting Started Guide.
- Main README now provides more details.
- Merged original examples into a single ABC Bot and removed the original ones.
- Documentation improvements.
- Fix going over the maximum prompt length using the `max_length` attribute in Prompt Templates.
- Fixed problem with `nest_asyncio` initialization.
- #144 Fixed TypeError in logging call.
- #121 Detect chat model using openai engine.
- #109 Fixed minor logging issue.
- Parallel flow support.
- Fix `HuggingFacePipeline` bug related to LangChain version upgrade.
- Support for custom configuration data.
- Example for using custom LLM and multiple KBs
- Support for `PROMPTS_DIR`.
- #101 Support for using OpenAI embeddings models in addition to SentenceTransformers.
- First set of end-to-end QA tests for the example configurations.
- Support for configurable embedding search providers
- Moved to using `nest_asyncio` for implementing the blocking API. Fixes #3 and #32.
- Improved event property validation in `new_event_dict`.
new_event_dict. - Refactored imports to allow installing from source without Annoy/SentenceTransformers (would need a custom embedding search provider to work).
- Fixed when the `init` function from `config.py` is called to allow custom LLM providers to be registered inside.
- #93: Removed redundant `hasattr` check in `nemoguardrails/llm/params.py`.
- #91: Fixed how default context variables are initialized.
- Event-based API for guardrails.
- Support for message with type "event" in `LLMRails.generate_async`.
- Support for bot message instructions.
- Support for using variables inside bot message definitions.
- Support for `vicuna-7b-v1.3` and `mpt-7b-instruct`.
- Topical evaluation results for `vicuna-7b-v1.3` and `mpt-7b-instruct`.
- Support to use different models for different LLM tasks.
- Support for red-teaming using challenges.
- Support to disable the Chat UI when running the server using `--disable-chat-ui`.
- Support for accessing the API request headers in server mode.
- Support to enable CORS settings for the guardrails server.
- Changed the naming of the internal events to align to the upcoming UMIM spec (Unified Multimodal Interaction Management).
- If there are no user message examples, the bot messages examples lookup is disabled as well.
- #58: Fix install on Mac OS 13.
- #55: Fix bug in example causing config.py to crash on computers with no CUDA-enabled GPUs.
- Fixed the model name initialization for LLMs that use the `model` kwarg.
- Fixed the Cohere prompt templates.
- #55: Fix bug related to LangChain callbacks initialization.
- Fixed generation of "..." on value generation.
- Fixed the parameters type conversion when invoking actions from Colang (previously everything was string).
- Fixed `model_kwargs` property for the `WrapperLLM`.
- Fixed bug when `stop` was used inside flows.
- Fixed Chat UI bug when an invalid guardrails configuration was used.
- Support for defining subflows.
- Improved support for customizing LLM prompts
- Support for using filters to change how variables are included in a prompt template.
- Output parsers for prompt templates.
- The `verbose_v1` formatter and output parser to be used for smaller models that don't understand Colang very well in a few-shot manner.
- Support for including context variables in prompt templates.
- Support for chat models i.e. prompting with a sequence of messages.
- Experimental support for allowing the LLM to generate multi-step flows.
- Example of using Llama Index from a guardrails configuration (#40).
- Example for using HuggingFace Endpoint LLMs with a guardrails configuration.
- Example for using HuggingFace Pipeline LLMs with a guardrails configuration.
- Support to alter LLM parameters passed as `model_kwargs` in LangChain.
- CLI tool for running evaluations on the different steps (e.g., canonical form generation, next steps, bot message) and on existing rails implementation (e.g., moderation, jailbreak, fact-checking, and hallucination).
- Initial evaluation results for `text-davinci-003` and `gpt-3.5-turbo`.
- The `lowest_temperature` can be set through the guardrails config (to be used for deterministic tasks).
- The core templates now use Jinja2 as the rendering engines.
- Improved the internal prompting architecture, now using an LLM Task Manager.
- Fixed bug related to invoking a chain with multiple output keys.
- Fixed bug related to tracking the output stats.
- #51: Bug fix - avoid str concat with None when logging user_intent.
- #54: Fix UTF-8 encoding issue and add embedding model configuration.
- Support to connect any LLM that implements the BaseLanguageModel interface from LangChain.
- Support for customizing the prompts for specific LLM models.
- Support for custom initialization when loading a configuration through `config.py`.
- Support to extract user-provided values from utterances.
- Improved the logging output for Chat CLI (clear events stream, prompts, completion, timing information).
- Updated system actions to use temperature 0 where it makes sense, e.g., canonical form generation, next step generation, fact checking, etc.
- Excluded the default system flows from the "next step generation" prompt.
- Updated langchain to 0.0.167.
- Fixed initialization of LangChain tools.
- Fixed the overriding of general instructions #7.
- Fixed action parameters inspection bug #2.
- Fixed bug related to multi-turn flows #13.
- Fixed Wolfram Alpha error reporting in the sample execution rail.
- First alpha release.