Releases · brainsqueeze/text2vec
v2.0.3
v2.0.2
Minor fixes including docstring updates and removal of layer name overrides.
v2.0.1
v1.2.0
- Breaking changes on the `ServingModel` wrapper class, now with dictionary outputs
- New `strings` module with a `SubTokenFinderMask` class, which performs ragged substring searches and masking
- More flexible encoder/decoder network flow
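The ragged substring search and masking described above can be illustrated with a small pure-Python sketch. This is a hypothetical stand-in, not the actual `SubTokenFinderMask` API: the function name, arguments, and mask token are all illustrative.

```python
# Hypothetical sketch of ragged substring search + masking.
# Names and behavior are illustrative, not the text2vec API.

def mask_substrings(ragged_tokens, pattern, mask_token="<mask>"):
    """For each (ragged) row of tokens, replace any token containing
    `pattern` with `mask_token` and record a boolean hit mask."""
    masked, hits = [], []
    for row in ragged_tokens:
        row_hits = [pattern in tok for tok in row]
        masked.append([mask_token if h else tok for tok, h in zip(row, row_hits)])
        hits.append(row_hits)
    return masked, hits

rows = [["the", "subtoken", "finder"], ["token"]]
masked, hits = mask_substrings(rows, "token")
```

Each input row keeps its own length, so the output stays ragged, mirroring how ragged tensors avoid padding rows to a common length.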
v1.0.0
- Breaking changes for some layers, some layers deprecated (in README)
- Training loop via /bin/main.py is now deprecated and will be removed in future versions
- Included data is removed in favor of the much richer HuggingFace datasets library
- More flexible API for training auto-encoders
- Leverage HuggingFace tokenizers
v0.4.3
- Improved training performance
- Slimmed down `tf.saved_model` output for inference
- Better documentation
v0.2.2
Updates include:
- YAML config for training to avoid long CLI inputs
- More flexible method for handling external training data
- Convenience CLI function
- PyYAML dependency added
- Some improvements to TensorBoard logging for scalar quantities
v0.1.1-beta
Removed the encoding and decoding masks from the Bahdanau attention in the transformer decode pipeline.
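For context, masking in additive (Bahdanau) attention typically means setting the scores of disallowed positions to negative infinity before the softmax, so they receive zero weight. The sketch below shows that mechanism generically in pure Python with scalar weights standing in for the usual projection matrices; it is not text2vec's implementation.

```python
import math

def bahdanau_attention(query, keys, w_q=0.5, w_k=0.5, v=1.0, mask=None):
    """Additive attention weights: score_i = v * sum_j tanh(w_q*q_j + w_k*k_ij).
    Scalar weights keep the sketch small; real layers use learned matrices.
    Masked-out positions are pushed to -inf, so softmax assigns them 0."""
    scores = [
        v * sum(math.tanh(w_q * q + w_k * k) for q, k in zip(query, key))
        for key in keys
    ]
    if mask is not None:
        scores = [s if keep else float("-inf") for s, keep in zip(scores, mask)]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

weights = bahdanau_attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]],
                             mask=[True, False])
```

Dropping the mask argument (as this release did for the decode pipeline) simply lets every key position compete in the softmax.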
v0.1
Initial release of text2vec. This includes tools for creating attention-based and LSTM-based transformer models for turning sentences into vectors that encode contextual meaning.
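The simplest way to picture "sentences into vectors" is pooling per-token vectors into one fixed-size sentence vector. The mean-pooling sketch below is a minimal, generic stand-in for the learned context vector these models produce, not text2vec code.

```python
def sentence_vector(token_vectors):
    """Mean-pool a variable-length list of token embeddings into a single
    fixed-size sentence vector (a crude stand-in for a learned encoder)."""
    n = len(token_vectors)
    dim = len(token_vectors[0])
    return [sum(vec[d] for vec in token_vectors) / n for d in range(dim)]

vec = sentence_vector([[1.0, 2.0], [3.0, 4.0]])  # -> [2.0, 3.0]
```

The output dimension depends only on the embedding size, not the sentence length, which is what makes downstream similarity comparisons possible.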