Releases: guidance-ai/guidance
0.0.63
0.0.62
What's Changed
- fix: Changed the example output to match the behaviour of the code. by @FunnyPhantom in #176
- Allow variables to start with "or" or "else" by @Mihaiii in #179
- Fix examples for issues #172 by @Keiku in #182
- New variable_stack for the parser by @slundberg in #193
- fixed openai streaming with REST by @marcotcr in #208
- Now using pyparsing and supporting infix notation! by @slundberg in #230
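The headline change here is infix notation in template expressions (e.g. writing `a + b * 2` rather than prefix-style calls). As a stdlib-only illustration of what infix evaluation involves — this is not guidance's pyparsing-based implementation — a small evaluator can walk Python's own `ast`:

```python
import ast
import operator

# Minimal infix-expression evaluator, sketching the kind of notation the
# new parser supports. Supported operators are deliberately limited.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_infix(expr, variables=None):
    variables = variables or {}

    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.Name):
            return variables[node.id]
        raise ValueError("unsupported syntax")

    return walk(ast.parse(expr, mode="eval").body)

print(eval_infix("2 + 3 * 4"))               # 14 (precedence respected)
print(eval_infix("x * (x + 1)", {"x": 5}))   # 30
```

Note how operator precedence and parentheses come for free once the expression is parsed into a tree, which is the practical benefit of moving to a real parser.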
New Contributors
- @FunnyPhantom made their first contribution in #176
- @Keiku made their first contribution in #182
Full Changelog: 0.0.61...0.0.62
0.0.61
- Adds working support for Azure OpenAI
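Configuration would look roughly like the sketch below. This is a hedged illustration only: the keyword names (`api_type`, `api_base`, `api_version`, `deployment_id`) follow the Azure OpenAI conventions of that era and are assumptions, not taken from these release notes.

```python
import guidance

# Sketch only: parameter names below are assumptions based on Azure
# OpenAI conventions, not confirmed by this release's notes.
guidance.llm = guidance.llms.OpenAI(
    "text-davinci-003",
    api_type="azure",
    api_base="https://YOUR-RESOURCE.openai.azure.com",
    api_version="2023-05-15",
    api_key="...",
    deployment_id="YOUR-DEPLOYMENT",
)
```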
Full Changelog: 0.0.60...0.0.61
0.0.60
- This is another quick patch for a cache issue when loading transformer models manually.
Full Changelog: 0.0.59...0.0.60
0.0.59
- Just a patch to address a cache bug.
Full Changelog: 0.0.58...0.0.59
0.0.58
What's Changed
- Support for multi-token token healing (healing that requires backing up more than one token).
- Support falcon models.
- Fix GPT proverb example with multiple gens by @bcstyle in #137
- Fix geneach demo code by @Shaunwei in #149
- Enhance the cache ability by @SimFG in #123
- Fix partials and empty blocks by @justinbowes in #155
- Adding Microsoft SECURITY.MD by @microsoft-github-policy-service in #168
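The multi-token token healing noted above can be sketched in toy form: back up trailing prompt tokens while their combined text is still a strict prefix of some vocabulary entry, then constrain the next generation step to tokens that start with the removed text. The toy vocabulary and the `heal` helper below are illustrative, not the library's code.

```python
# Toy vocabulary; real tokenizers have tens of thousands of entries.
VOCAB = ["I ", "like ", "app", "le", "lemon", "applesauce"]

def heal(token_texts):
    # Back up while the growing suffix is a strict prefix of some
    # vocabulary entry; multi-token healing means this loop can pop
    # more than one token.
    removed = ""
    while token_texts:
        candidate = token_texts[-1] + removed
        if any(v != candidate and v.startswith(candidate) for v in VOCAB):
            removed = candidate
            token_texts = token_texts[:-1]
        else:
            break
    allowed = [v for v in VOCAB if v.startswith(removed)] if removed else VOCAB
    return token_texts, removed, allowed

# "I like apple" tokenizes greedily to ["I ", "like ", "app", "le"];
# healing backs up TWO tokens ("le", then "app") so that generation can
# emit "applesauce" as a single token instead of being boxed in.
prefix, removed, allowed = heal(["I ", "like ", "app", "le"])
print(prefix, removed, allowed)  # ['I ', 'like '] apple ['applesauce']
```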
New Contributors
- @bcstyle made their first contribution in #137
- @Shaunwei made their first contribution in #149
- @SimFG made their first contribution in #123
- @justinbowes made their first contribution in #155
- @microsoft-github-policy-service made their first contribution in #168
Full Changelog: 0.0.57...0.0.58
0.0.57
What's Changed
- Now silent=True is the default when streaming the program output.
- Fixed some bugs with caching, and also fixed select with special EOS tokens in LLaMA.
- Fix typos by @amqdn in #135
Full Changelog: 0.0.56...0.0.57
0.0.56
What's Changed
- New support for stop_regex with OpenAI models.
- New support for streaming the results of program execution programmatically, both sync and async.
- Update test_unless.py by @Jvr2022 in #96
- Add gpt-4-32k and gpt-4-32k-0314 to OpenAI Models by @flavius-popan in #98
- Support models that don't output past_key_values by @younesbelkada in #91
- Add quantization parameters for transformers models by @jquesnelle in #8
- Fix unnecessary log.txt file creation by @gerelltroche in #106
- Fix TypeError when serializing out.variables() to JSON by @h-k-nyosu in #120
- fix: Add missing msal dependency. by @shawnohare in #68
- Export LLMSession and SyncSession by @abetlen in #66
- add timeout parameter to POST request by @satarupaguha11 in #125
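Two of the items above — stop_regex and async streaming — can be combined in one stdlib sketch: accumulate streamed chunks and cut the output at the first regex match. The `stream_chunks` source and `collect_until` helper are hypothetical stand-ins for streamed model output, not guidance's API.

```python
import asyncio
import re

async def stream_chunks(chunks):
    # Toy async source standing in for streamed model output.
    for c in chunks:
        yield c

async def collect_until(chunks, stop_regex):
    # Accumulate streamed text; stop and truncate at the first match of
    # stop_regex, mimicking a stop_regex-style stopping condition.
    buf = ""
    async for c in stream_chunks(chunks):
        buf += c
        m = re.search(stop_regex, buf)
        if m:
            return buf[:m.start()]
    return buf

result = asyncio.run(collect_until(["Answer: 4", "2\nQ:", " next"], r"\nQ:"))
print(result)  # Answer: 42
```

Note the match is searched against the whole buffer, not each chunk, since a stop pattern can straddle a chunk boundary.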
New Contributors
- @Jvr2022 made their first contribution in #96
- @flavius-popan made their first contribution in #98
- @younesbelkada made their first contribution in #91
- @jquesnelle made their first contribution in #8
- @gerelltroche made their first contribution in #106
- @h-k-nyosu made their first contribution in #120
- @shawnohare made their first contribution in #68
- @abetlen made their first contribution in #66
- @satarupaguha11 made their first contribution in #125
Full Changelog: 0.0.54...0.0.56
0.0.54
- Switch the select command over to a token-id-based process to account for possible non-greedy tokenizer exceptions.
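The gain from working in token-id space can be sketched as follows (assumed logic, not the library's code): options are compared as token-id sequences against the ids generated so far, so a tokenizer that splits an option non-greedily cannot cause a string-level mismatch mid-generation.

```python
def viable_options(option_ids, generated_ids):
    # Keep every option whose token-id sequence extends what has been
    # generated so far; select resolves once one option remains.
    n = len(generated_ids)
    return [ids for ids in option_ids if ids[:n] == generated_ids]

# Hypothetical token ids for two options sharing a first token.
options = [[17, 4],       # e.g. "cats"
           [17, 9, 2]]    # e.g. "cars"
print(viable_options(options, [17]))     # [[17, 4], [17, 9, 2]]
print(viable_options(options, [17, 9]))  # [[17, 9, 2]]
```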
Full Changelog: 0.0.53...0.0.54
0.0.53
What's Changed
- Allow for select statements to have options that are prefixes of other options (and fix one select bug with long options).
- fix repo typo by @marcosmagallanes in #86
- fix typo by @kajarenc in #85
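The prefix-option case above is the tricky one for select: with options like "cat" and "category", a matcher must not commit to "cat" while "category" can still match. A small sketch of the idea (assumed logic, with a hypothetical `match_select` helper):

```python
def match_select(generated, options):
    # An option equal to the generated text is "complete", but we only
    # commit to it once no strictly longer option can still match.
    still_open = [o for o in options
                  if o.startswith(generated) and o != generated]
    complete = generated if generated in options else None
    if complete and not still_open:
        return complete, []       # unambiguous: commit
    return complete, still_open   # keep generating to disambiguate

print(match_select("cat", ["cat", "category"]))       # ('cat', ['category'])
print(match_select("category", ["cat", "category"]))  # ('category', [])
```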
New Contributors
- @marcosmagallanes made their first contribution in #86
- @kajarenc made their first contribution in #85
Full Changelog: 0.0.52...0.0.53