diff --git a/.gitignore b/.gitignore
index a498a45..8456922 100644
--- a/.gitignore
+++ b/.gitignore
@@ -7,6 +7,7 @@
**/*.dev.yml
wiki_t2v/
+multi_news_t2v*/
# JavaScript configs and dependencies
**/.eslintrc.json
diff --git a/README.md b/README.md
index 0fed1ea..1b808ae 100644
--- a/README.md
+++ b/README.md
@@ -5,20 +5,16 @@ Models for contextual embedding of arbitrary texts.
## Setup
---
-To get started, one should have a flavor of TensorFlow installed, with
-version `>=2.4.1`. One can run
+To get started, one should have a flavor of TensorFlow installed, with version `>=2.4.1`. One can run
```bash
pip install tensorflow>=2.4.1
```
-If one wishes to run the examples, some additional dependencies
-from HuggingFace will need to be installed. The full installation
-looks like
+If one wishes to run the examples, some additional dependencies from HuggingFace will need to be installed. The full installation looks like
```bash
pip install tensorflow>=2.4.1 tokenizers datasets
```
-To install the core components as an import-able Python library
-simply run
+To install the core components as an importable Python library, simply run
```bash
pip install git+https://github.com/brainsqueeze/text2vec.git
@@ -27,119 +23,46 @@ pip install git+https://github.com/brainsqueeze/text2vec.git
## Motivation
---
-Word embedding models have been very beneficial to natural
-language processing. The technique is able to distill semantic
-meaning from words by treating them as vectors in a
-high-dimensional vector space.
-
-This package attempts to accomplish the same semantic embedding,
-but do this at the sentence and paragraph level. Within a
-sentence the order of words and the use of punctuation and
-conjugations are very important for extracting the meaning
-of blocks of text.
-
-Inspiration is taken from recent advances in text summary
-models (pointer-generator), where an attention mechanism
-[[1](https://arxiv.org/abs/1409.0473)] is
-used to extrude the overall meaning of the input text. In the
-case of text2vec, we use the attention vectors found from the
-input text as the embedded vector representing the input.
-Furthermore, recent attention-only approaches to sequence-to-sequence
-modeling are adapted.
-
-**note**: this is not a canonical implementation of the attention
-mechanism, but this method was chosen intentionally to be able to
-leverage the attention vector as the embedding output.
+Word embedding models have been very beneficial to natural language processing. The technique distills semantic meaning from words by treating them as vectors in a high-dimensional vector space.
+
+This package attempts to accomplish the same semantic embedding, but at the sentence and paragraph level. Within a sentence, the order of words and the use of punctuation and conjugation are very important for extracting the meaning of blocks of text.
+
+Inspiration is taken from recent advances in text summary models (pointer-generator), where an attention mechanism [[1](https://arxiv.org/abs/1409.0473)] is used to extract the overall meaning of the input text. In the case of text2vec, we use the attention vectors computed from the input text as the embedded vector representing the input. Furthermore, recent attention-only approaches to sequence-to-sequence modeling are also adapted.
+
### Transformer model
---
-This is a tensor-to-tensor model adapted from the work in
-[Attention Is All You Need](https://arxiv.org/abs/1706.03762).
-The embedding and encoding steps follow directly from
-[[2](https://arxiv.org/abs/1706.03762)], however a self-
-attention is applied at the end of the encoding steps and a
-context-vector is learned, which in turn is used to project
-the decoding tensors onto.
-
-The decoding steps begin as usual with the word-embedded input
-sequences shifted right, then multi-head attention, skip connection
-and layer-normalization is applied. Before continuing, we project
-the resulting decoded sequences onto the context-vector from the
-encoding steps. The projected tensors are then passed through
-the position-wise feed-forward (conv1D) + skip connection and layer-
-normalization again, before once more being projected onto the
-context-vectors.
+This is a tensor-to-tensor model adapted from the work in [Attention Is All You Need](https://arxiv.org/abs/1706.03762). The embedding and encoding steps follow directly from [[2](https://arxiv.org/abs/1706.03762)]; however, a self-attention layer is applied at the end of the encoding steps and a context vector is learned, onto which the decoding tensors are projected.
+
+The decoding steps begin as usual with the word-embedded input sequences shifted right; then multi-head attention, a skip connection, and layer normalization are applied. Before continuing, we project the resulting decoded sequences onto the context vector from the encoding steps. The projected tensors are then passed through the position-wise feed-forward (conv1D) layer, skip connection, and layer normalization again, before once more being projected onto the context vector.
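+
+The "projection onto the context vector" above is a standard vector projection of each decoded time step onto the per-example context vector. A minimal sketch of the idea in TensorFlow (illustrative only; the library's own version lives in `text2vec.models.components.utils.TensorProjection`):
+```python
+import tensorflow as tf
+
+def project_onto_context(x_dec: tf.Tensor, context: tf.Tensor) -> tf.Tensor:
+    """Project decoded sequences (batch, time, dim) onto context vectors (batch, dim)."""
+    # scalar product of every time step with its context vector
+    inner = tf.einsum("btd,bd->bt", x_dec, context)
+    # squared norm of the context vector, kept away from zero for numerical stability
+    norm_sq = tf.maximum(tf.reduce_sum(context * context, axis=-1, keepdims=True), 1e-8)
+    # rescale the context vector by the normalized inner product at each time step
+    return tf.einsum("bt,bd->btd", inner / norm_sq, context)
+```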
### LSTM seq2seq
-This is an adapted bi-directional LSTM encoder-decoder model with
-a self-attention mechanism learned from the encoding steps. The
-context-vectors are used to project the resulting decoded sequences
-before computing logits.
+This is an adapted bi-directional LSTM encoder-decoder model with a self-attention mechanism learned from the encoding steps. The context vectors are used to project the resulting decoded sequences before the logits are computed.
## Training
---
-Both models are trained using Adam SGD with the learning-rate decay
-program in [[2](https://arxiv.org/abs/1706.03762)].
+Both models are trained using Adam SGD with the learning-rate decay program in [[2](https://arxiv.org/abs/1706.03762)].
-The pre-built auto-encoder models inherit from [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model), and as such they can be trained using the `fit` method.
-An example of training on Wikitext data is available in the [examples folder](./examples/trainers/wiki_transformer.py). This uses HuggingFace [tokenizers](https://huggingface.co/docs/tokenizers/python/latest/) and [datasets](https://huggingface.co/docs/datasets/master/).
+The pre-built auto-encoder models inherit from [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model), and as such they can be trained using the [fit method](https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit). Training examples are available in the [examples folder](./examples/trainers). These examples use HuggingFace [tokenizers](https://huggingface.co/docs/tokenizers/python/latest/) and [datasets](https://huggingface.co/docs/datasets/master/).
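+
+The decay program from [[2](https://arxiv.org/abs/1706.03762)] scales the learning rate by the inverse square root of the embedding dimension, ramping up linearly over a number of warmup steps and decaying with the inverse square root of the step count thereafter. Schematically (a sketch of the schedule, not the exact `RampUpDecaySchedule` implementation):
+```python
+def ramp_up_decay(step: int, embedding_size: int = 128, warmup_steps: int = 4000) -> float:
+    # linear warmup followed by inverse-square-root decay
+    step = max(step, 1)
+    return embedding_size ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
+```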
-Training the LSTM model can be initiated with
+If you wish to run the example training scripts, you will first need to clone the repository
```bash
-text2vec_main --run=train --yaml_config=/path/to/config.yml
-```
-The training configuration YAML for attention models must look like
-```yaml
-training:
- tokens: 10000
- max_sequence_length: 512
- epochs: 100
- batch_size: 64
-
-model:
- name: transformer_test
- parameters:
- embedding: 128
- layers: 8
- storage_dir: /path/to/save/model
-```
-The `parameters` for recurrent models must include at least
-`embedding` and `hidden`, which referes to the dimensionality of the hidden LSTM layer. The `training` section of the YAML file can also include user-defined sentences to use as a context-angle evaluation set. This can look like
-```yaml
-eval_sentences:
- - The movie was great!
- - The movie was terrible.
-```
-It can also include a `data` tag which is a list of absolute file paths for custom training data sets. This can look like
-```yaml
-data_files:
- - ~/path/to/data/set1.txt
- - ~/path/to/data/set2.txt
- ...
+git clone https://github.com/brainsqueeze/text2vec.git
```
-
-Likewise, the transformer model can be trained with
+and then run either
```bash
-text2vec_main --run=train --attention --yaml_config=/path/to/config.yml
+python -m examples.trainers.news_transformer
```
-
-To view the output of training you can then run
+for the attention-based transformer, or
```bash
-tensorboard --logdir text_embedding
+python -m examples.trainers.news_lstm
```
+for the LSTM-based encoder. These examples use the [Multi-News](https://github.com/Alex-Fabbri/Multi-News) dataset via [HuggingFace](https://huggingface.co/datasets/multi_news).
-If you have CUDA and cuDNN installed you can run
-`pip install -r requirements-gpu.txt`.
-The GPU will automatically be detected and used if present, otherwise
-it will fall back to the CPU for training and inferencing.
-
-### Mutual contextual orthogonality
-
-To impose quasi-mutual orthogonality on the learned context vectors simply add the `--orthogonal` flag to the training command. This will add a loss term that can be thought of as a Lagrange multiplier where the constraint is self-alignment of the context vectors, and orthogonality between non-self vectors. The aim is not to impose orthogonality between all text inputs that are not the same, but rather to coerce the model to learn significantly different encodings for different contextual inputs.
## Python API
@@ -149,33 +72,32 @@ Text2vec includes a Python API with convenient classes for handling attention an
#### Auto-encoders
- - [text2vec.autoencoders.TransformerAutoEncoder](/text2vec/autoencoders.py#L13)
- - [text2vec.autoencoders.LstmAutoEncoder](/text2vec/models/transformer.py#L134)
+ - [text2vec.autoencoders.TransformerAutoEncoder](/text2vec/autoencoders.py#L12)
+ - [text2vec.autoencoders.LstmAutoEncoder](/text2vec/models/transformer.py#L190)
#### Layers
- - [text2vec.models.TransformerEncoder](/text2vec/models/transformer.py#L11)
- - [text2vec.models.TransformerDecoder](/text2vec/models/transformer.py#L81)
- - [text2vec.models.RecurrentEncoder](/text2vec/models/sequential.py#L8)
- - [text2vec.models.RecurrentDecoder](/text2vec/models/sequential.py#L61)
+ - [text2vec.models.TransformerEncoder](/text2vec/models/transformer.py#L8)
+ - [text2vec.models.TransformerDecoder](/text2vec/models/transformer.py#L78)
+ - [text2vec.models.RecurrentEncoder](/text2vec/models/sequential.py#L9)
+ - [text2vec.models.RecurrentDecoder](/text2vec/models/sequential.py#L65)
#### Input and Word-Embeddings Components
- - [text2vec.models.Tokenizer](/text2vec/models/components/feeder.py#L4)
- - [text2vec.models.Embed](/text2vec/models/components/text_inputs.py#L4)
- - [text2vec.models.TokenEmbed](/text2vec/models/components/text_inputs.py#L82)
- - [text2vec.models.TextInput](/text2vec/models/components/feeder.py#L35) (DEPRECATED)
+ - [text2vec.models.Tokenizer](/text2vec/models/components/text_inputs.py#L5)
+ - [text2vec.models.Embed](/text2vec/models/components/text_inputs.py#L36)
+ - [text2vec.models.TokenEmbed](/text2vec/models/components/text_inputs.py#L116)
#### Attention Components
- - [text2vec.models.components.attention.ScaledDotAttention](/text2vec/models/components/attention.py#L4)
- - [text2vec.models.components.attention.SingleHeadAttention](/text2vec/models/components/attention.py#L111)
- - [text2vec.models.MultiHeadAttention](/text2vec/models/components/attention.py#L175)
- - [text2vec.models.BahdanauAttention](/text2vec/models/components/attention.py#L53)
+ - [text2vec.models.components.attention.ScaledDotAttention](/text2vec/models/components/attention.py#L7)
+ - [text2vec.models.components.attention.SingleHeadAttention](/text2vec/models/components/attention.py#L115)
+ - [text2vec.models.MultiHeadAttention](/text2vec/models/components/attention.py#L179)
+ - [text2vec.models.BahdanauAttention](/text2vec/models/components/attention.py#L57)
#### LSTM Components
- - [text2vec.models.BidirectionalLSTM](/text2vec/models/components/recurrent.py#L4)
+ - [text2vec.models.BidirectionalLSTM](/text2vec/models/components/recurrent.py#L5)
#### Pointwise Feedforward Components
@@ -183,9 +105,10 @@ Text2vec includes a Python API with convenient classes for handling attention an
#### General Layer Components
- - [text2vec.models.components.utils.LayerNorm](/text2vec/models/components/utils.py#L5)
+ - [text2vec.models.components.utils.LayerNorm](/text2vec/models/components/utils.py#L6)
- [text2vec.models.components.utils.TensorProjection](/text2vec/models/components/utils.py#L43)
- - [text2vec.models.components.utils.PositionalEncder](/text2vec/models/components/utils.py#L76)
+ - [text2vec.models.components.utils.PositionalEncder](/text2vec/models/components/utils.py#L77)
+ - [text2vec.models.components.utils.VariationPositionalEncoder](/text2vec/models/components/utils.py#L122)
#### Dataset Pre-processing
@@ -207,17 +130,17 @@ Text2vec includes a Python API with convenient classes for handling attention an
## Inference Demo
---
-Once a model is fully trained then a demo API can be run, along with a small
-UI to interact with the REST API. This demo attempts to use the trained model
-to condense long bodies of text into the most important sentences, using the
-inferred embedded context vectors.
-
+Trained text2vec models can be demonstrated with a lightweight app included in this repository. The demo performs extractive summarization on long bodies of text using the attention vectors of the encoding latent space. To get started, you will need to clone the repository and then install the additional dependencies:
+```bash
+git clone https://github.com/brainsqueeze/text2vec.git
+cd text2vec
+pip install flask tornado
+```
To start the model server simply run
```bash
-text2vec_main --run=infer --yaml_config=/path/to/config.yml
+python demo/api.py --model_dir /absolute/saved_model/parent/dir
```
-A demonstration webpage is included in [demo](demo) at
-[context.html](demo/context.html).
+The `model_dir` CLI parameter must be an absolute path to the directory containing the `/saved_model` folder and the `tokenizer.json` file from a text2vec model with an `embed` signature. A demonstration app is served on port 9090.
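+
+For example, once the server is running you can request a summary over HTTP (the `/summarize` endpoint and `text` field follow `demo/api.py`; paragraphs are separated by newlines and at least two are required):
+```bash
+curl http://localhost:9090/summarize \
+  --data-urlencode $'text=The first paragraph of the article goes here.\n\nThe second paragraph goes here.'
+```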
## References
---
diff --git a/demo/api.py b/demo/api.py
new file mode 100644
index 0000000..e238f6b
--- /dev/null
+++ b/demo/api.py
@@ -0,0 +1,124 @@
+from typing import List, Union
+from math import pi
+import argparse
+import json
+import re
+
+from flask import Flask, request, Response, send_from_directory
+from tornado.log import enable_pretty_logging
+from tornado.httpserver import HTTPServer
+from tornado.wsgi import WSGIContainer
+from tornado.ioloop import IOLoop
+import tornado.autoreload
+import tornado
+
+import tensorflow as tf
+from tensorflow.keras import models, Model
+from tokenizers import Tokenizer
+
+app = Flask(__name__, static_url_path="", static_folder="./")
+parser = argparse.ArgumentParser()
+parser.add_argument("--model_dir", type=str, help="Directory containing serialized model and tokenizer", required=True)
+args = parser.parse_args()
+
+model: Model = models.load_model(f"{args.model_dir}/saved_model")
+tokenizer: Tokenizer = Tokenizer.from_file(f"{args.model_dir}/tokenizer.json")
+
+
+def responder(results, error, message):
+ """Boilerplate Flask response item.
+
+ Parameters
+ ----------
+ results : dict
+ API response
+ error : int
+ Error code
+ message : str
+ Message to send to the client
+
+ Returns
+ -------
+    flask.Response
+ """
+
+ assert isinstance(results, dict)
+ results["message"] = message
+ results = json.dumps(results, indent=2)
+
+ return Response(
+ response=results,
+ status=error,
+ mimetype="application/json"
+ )
+
+
+def tokenize(text: Union[str, List[str]]) -> List[str]:
+ if isinstance(text, str):
+ return [' '.join(tokenizer.encode(text).tokens)]
+ return [' '.join(batch.tokens) for batch in tokenizer.encode_batch(text)]
+
+
+def get_summaries(paragraphs: List[str]):
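+    # Embed each paragraph in batches of 32 and keep its attention (context) vector,
+    # then score every paragraph by how closely its vector aligns with the vector of
+    # the document as a whole (softmax over the complementary angles, in degrees).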
+ context = tf.concat([
+ model.embed(batch)["attention"]
+ for batch in tf.data.Dataset.from_tensor_slices(paragraphs).batch(32)
+ ], axis=0)
+ doc_vector = model.embed(tf.strings.reduce_join(paragraphs, separator=' ', keepdims=True))["attention"]
+ cosine = tf.tensordot(tf.math.l2_normalize(context, axis=1), tf.math.l2_normalize(doc_vector, axis=1), axes=[-1, 1])
+ cosine = tf.clip_by_value(cosine, -1, 1)
+ likelihoods = tf.nn.softmax(180 - tf.math.acos(cosine) * (180 / pi), axis=0)
+ return likelihoods
+
+
+@app.route("/")
+def root():
+ return send_from_directory(directory="./html/", path="index.html")
+
+
+@app.route("/summarize", methods=["GET", "POST"])
+def summarize():
+ if request.is_json:
+ payload = request.json
+ else:
+ payload = request.values
+
+ text = payload.get("text", "")
+ if not text:
+ return responder(results={}, error=400, message="No text provided")
+
+ paragraphs = [p for p in re.split(r"\n{1,}", text) if p.strip()]
+ if len(paragraphs) < 2:
+ return responder(results={"text": paragraphs}, error=400, message="Insufficient amount of text provided")
+
+ tokenized = tokenize(paragraphs)
+ likelihoods = get_summaries(tokenized)
+ likelihoods = tf.squeeze(likelihoods)
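+    # keep only paragraphs scoring more than one standard deviation above the mean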
+ cond = tf.where(likelihoods > tf.math.reduce_mean(likelihoods) + tf.math.reduce_std(likelihoods)).numpy().flatten()
+ output = [{
+ "text": paragraphs[idx],
+ "score": float(likelihoods[idx])
+ } for idx in cond]
+
+ results = {"data": output}
+ return responder(results=results, error=200, message="Success")
+
+
+def serve(port: int = 9090, debug: bool = False):
+ http_server = HTTPServer(WSGIContainer(app))
+ http_server.listen(port)
+ enable_pretty_logging()
+
+ io_loop = IOLoop.current()
+ if debug:
+ tornado.autoreload.start(check_time=500)
+ print("Listening to port", port, flush=True)
+
+ try:
+ io_loop.start()
+ except KeyboardInterrupt:
+ pass
+
+
+if __name__ == '__main__':
+ serve()
diff --git a/demo/context.html b/demo/html/index.html
similarity index 78%
rename from demo/context.html
rename to demo/html/index.html
index 66b26f7..0c9353f 100644
--- a/demo/context.html
+++ b/demo/html/index.html
@@ -1,14 +1,14 @@
-
+
-Contextual condenser demo
+Summary extraction demo
@@ -17,7 +17,7 @@
Contextual condenser demo
-
+
diff --git a/demo/js/condense.js b/demo/js/condense.js
index 90f56b1..9f39b6d 100644
--- a/demo/js/condense.js
+++ b/demo/js/condense.js
@@ -10,16 +10,19 @@ function formatOutput (ajaxData, container) {
for (var i = 0; i < data.length; i++) {
let text = data[i].text
- let score = data[i].relevanceScore
- let lightness = (1 - score) * 100.0
+ let score = data[i].score
+ let lightness = score * 100.
-    outputHtml += `${text}`
+    if (score > 0.5)
+      outputHtml += `${text}`
+    else
+      outputHtml += `${text}`
}
container.append(outputHtml)
}
}
-$(document).ready(function () {
+$(function () {
$('#clear').on('click', function () {
$('.text-input.main').val('')
$('.response-container').empty()
@@ -27,12 +30,12 @@ $(document).ready(function () {
$('#go').on('click', function () {
let text = $('.text-input.main').val()
- let data = { body: text }
+ let data = { text: text }
$('.response-container').empty()
$.ajax({
- url: 'http://localhost:8008/condense',
+ url: '/summarize',
data: data,
type: 'POST',
dataType: 'json',
diff --git a/demo/package.json b/demo/package.json
deleted file mode 100644
index 077b0ac..0000000
--- a/demo/package.json
+++ /dev/null
@@ -1,19 +0,0 @@
-{
- "name": "text2vec-demo",
- "version": "1.0.0",
- "description": "Demonstrates the ability of text2vec to condense large bodies of text using the contextual embeddings.",
- "main": "index.js",
- "scripts": {
- "test": "echo \"Error: no test specified\" && exit 1"
- },
- "author": "Dave Hollander",
- "license": "BSD-2-Clause",
- "devDependencies": {
- "eslint": "^5.7.0",
- "eslint-config-standard": "^12.0.0",
- "eslint-plugin-import": "^2.14.0",
- "eslint-plugin-node": "^7.0.1",
- "eslint-plugin-promise": "^4.0.1",
- "eslint-plugin-standard": "^4.0.0"
- }
-}
diff --git a/examples/trainers/wiki_lstm.py b/examples/trainers/news_lstm.py
similarity index 54%
rename from examples/trainers/wiki_lstm.py
rename to examples/trainers/news_lstm.py
index 60f9a30..5b136ba 100644
--- a/examples/trainers/wiki_lstm.py
+++ b/examples/trainers/news_lstm.py
@@ -1,4 +1,4 @@
-from typing import Generator, List, Tuple, Union
+from typing import Tuple
import os
import datasets
@@ -7,22 +7,23 @@
from tokenizers import decoders
from tokenizers import normalizers
from tokenizers import pre_tokenizers
-from tokenizers import processors
from tokenizers import trainers
-from nltk.tokenize import PunktSentenceTokenizer
-import numpy as np
import tensorflow as tf
+from tensorflow.keras import optimizers, callbacks
+from tensorflow.keras import backend as K
from tensorboard.plugins import projector
from text2vec.autoencoders import LstmAutoEncoder
from text2vec.optimizer_tools import RampUpDecaySchedule
os.environ["TOKENIZERS_PARALLELISM"] = "true"
-sent_tokenizer = PunktSentenceTokenizer().tokenize
+root = os.path.dirname(os.path.abspath(__file__))
+EMBEDDING_SIZE = 128
+MAX_SEQUENCE_LENGTH = 512
-def train_tokenizer() -> Tuple[tokenizers.Tokenizer, Generator, int]:
+def train_tokenizer() -> Tuple[tokenizers.Tokenizer, tf.data.Dataset]:
tokenizer = tokenizers.Tokenizer(models.WordPiece(unk_token=""))
tokenizer.decoder = decoders.WordPiece()
@@ -35,17 +36,13 @@ def train_tokenizer() -> Tuple[tokenizers.Tokenizer, Generator, int]:
pre_tokenizers.Whitespace(),
pre_tokenizers.Digits(individual_digits=False)
])
- tokenizer.post_processor = processors.TemplateProcessing(
- single="$A ",
- pair="$A [SEP] $B:1",
- special_tokens=[("[SEP]", 1), ("", 2), ("", 3)]
- )
- dataset = datasets.load_dataset("wikitext", "wikitext-103-raw-v1", split="test")
+ dataset = datasets.load_dataset("multi_news", split="train")
def batch_iterator(batch_size=1000):
for i in range(0, len(dataset), batch_size):
- yield dataset[i: i + batch_size]["text"]
+ for key in dataset.features:
+ yield dataset[i: i + batch_size][key]
tokenizer.train_from_iterator(
batch_iterator(),
@@ -55,14 +52,32 @@ def batch_iterator(batch_size=1000):
)
)
+    tokenizer.enable_truncation(2 * MAX_SEQUENCE_LENGTH + 3)  # +3 for the [SEP] and the two other special tokens
+ tokenizer.post_processor = tokenizers.processors.TemplateProcessing(
+ single="$A",
+ pair="$A:0 [SEP] $B:1 ",
+ special_tokens=[
+ ("[SEP]", 1),
+ ("", 2),
+ ("", 3)
+ ]
+ )
+
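+    # Each (document, summary) pair is encoded together and split on the [SEP] token,
+    # so the document tokens feed the encoder and the summary tokens feed the decoder.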
def generator():
for record in dataset:
- if record['text'].strip() != '':
- for sentence in sent_tokenizer(record['text']):
- yield sentence
-
- data = tf.data.Dataset.from_generator(generator, output_signature=(tf.TensorSpec(shape=(None), dtype=tf.string)))
- data = data.map(tf.strings.strip, num_parallel_calls=tf.data.experimental.AUTOTUNE)
+ if record["document"] and record["summary"]:
+ enc, dec = ' '.join(tokenizer.encode(
+ record["document"],
+ pair=record["summary"]
+ ).tokens).split(' [SEP] ', maxsplit=2)
+
+ if enc.strip() != "" and dec != "":
+ yield enc, dec
+
+ data = tf.data.Dataset.from_generator(
+ generator,
+ output_signature=(tf.TensorSpec(shape=(None), dtype=tf.string), tf.TensorSpec(shape=(None), dtype=tf.string))
+ )
return tokenizer, data
@@ -71,38 +86,20 @@ def main(save_path: str):
os.mkdir(save_path)
tokenizer, data = train_tokenizer()
- tokenizer.enable_truncation(2 * 512 + 1) # encoding + decoding + [SEP] token
-
with open(f"{save_path}/metadata.tsv", "w") as tsv:
for token, _ in sorted(tokenizer.get_vocab().items(), key=lambda s: s[-1]):
tsv.write(f"{token}\n")
- def encode(x):
- def token_mapper(text: Union[str, List[str]]):
- text = text.numpy()
-
- if isinstance(text, np.ndarray):
- enc, dec = [], []
- for batch in tokenizer.encode_batch([(t.decode('utf8'), t.decode('utf8')) for t in text]):
- enc_, dec_ = ' '.join(batch.tokens).split(' [SEP] ')
- enc.append(enc_)
- dec.append(dec_)
- return (enc, dec)
-
- text = text.decode('utf8')
- enc, dec = ' '.join(tokenizer.encode(text, pair=text).tokens).split(' [SEP] ')
- return (enc, dec)
-
- return tf.py_function(token_mapper, inp=[x], Tout=[tf.string, tf.string])
-
model = LstmAutoEncoder(
- max_sequence_len=512,
- embedding_size=128,
+ max_sequence_len=MAX_SEQUENCE_LENGTH,
+ embedding_size=EMBEDDING_SIZE,
token_hash=tokenizer.get_vocab(),
- input_keep_prob=0.7,
- hidden_keep_prob=0.5
+ input_drop_rate=0.3,
+ hidden_drop_rate=0.5
)
- model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=RampUpDecaySchedule(embedding_size=128)))
+
+ scheduler = RampUpDecaySchedule(EMBEDDING_SIZE, warmup_steps=4000)
+ model.compile(optimizer=optimizers.Adam(scheduler(0).numpy()))
checkpoint = tf.train.Checkpoint(Classifier=model, optimizer=model.optimizer)
checkpoint_manager = tf.train.CheckpointManager(checkpoint, save_path, max_to_keep=3)
@@ -117,16 +114,17 @@ def token_mapper(text: Union[str, List[str]]):
embeddings_config.metadata_path = f"{save_path}/metadata.tsv"
projector.visualize_embeddings(logdir=save_path, config=config)
- data = data.map(encode, num_parallel_calls=tf.data.experimental.AUTOTUNE)
model.fit(
x=data.prefetch(8).batch(64),
callbacks=[
- tf.keras.callbacks.TensorBoard(
- log_dir=save_path,
- write_graph=True,
- update_freq=100
- ),
- tf.keras.callbacks.LambdaCallback(on_epoch_end=lambda epoch, logs: checkpoint_manager.save())
+ callbacks.TensorBoard(log_dir=save_path, write_graph=True, update_freq=100),
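+            # checkpoint at the end of each epoch, and step the warmup/decay
+            # learning-rate schedule manually at the end of every batch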
+ callbacks.LambdaCallback(
+ on_epoch_end=lambda epoch, logs: checkpoint_manager.save(),
+ on_batch_end=lambda batch, logs: K.set_value(
+ model.optimizer.lr,
+ K.get_value(scheduler(model.optimizer.iterations))
+ )
+ )
],
epochs=1
)
@@ -142,4 +140,4 @@ def token_mapper(text: Union[str, List[str]]):
if __name__ == '__main__':
- main(save_path='./wiki_t2v')
+ main(save_path=f'{root}/../../multi_news_t2v_sequential')
diff --git a/examples/trainers/wiki_transformer.py b/examples/trainers/news_transformer.py
similarity index 53%
rename from examples/trainers/wiki_transformer.py
rename to examples/trainers/news_transformer.py
index f0dde3e..95941f2 100644
--- a/examples/trainers/wiki_transformer.py
+++ b/examples/trainers/news_transformer.py
@@ -1,4 +1,4 @@
-from typing import Generator, List, Tuple, Union
+from typing import Tuple
import os
import datasets
@@ -7,22 +7,23 @@
from tokenizers import decoders
from tokenizers import normalizers
from tokenizers import pre_tokenizers
-from tokenizers import processors
from tokenizers import trainers
-from nltk.tokenize import PunktSentenceTokenizer
-import numpy as np
import tensorflow as tf
+from tensorflow.keras import optimizers, callbacks
+from tensorflow.keras import backend as K
from tensorboard.plugins import projector
from text2vec.autoencoders import TransformerAutoEncoder
from text2vec.optimizer_tools import RampUpDecaySchedule
os.environ["TOKENIZERS_PARALLELISM"] = "true"
-sent_tokenizer = PunktSentenceTokenizer().tokenize
+root = os.path.dirname(os.path.abspath(__file__))
+EMBEDDING_SIZE = 128
+MAX_SEQUENCE_LENGTH = 512
-def train_tokenizer() -> Tuple[tokenizers.Tokenizer, Generator, int]:
+def train_tokenizer() -> Tuple[tokenizers.Tokenizer, tf.data.Dataset]:
tokenizer = tokenizers.Tokenizer(models.WordPiece(unk_token=""))
tokenizer.decoder = decoders.WordPiece()
@@ -35,17 +36,13 @@ def train_tokenizer() -> Tuple[tokenizers.Tokenizer, Generator, int]:
pre_tokenizers.Whitespace(),
pre_tokenizers.Digits(individual_digits=False)
])
- tokenizer.post_processor = processors.TemplateProcessing(
- single="$A ",
- pair="$A [SEP] $B:1",
- special_tokens=[("[SEP]", 1), ("", 2), ("", 3)]
- )
- dataset = datasets.load_dataset("wikitext", "wikitext-103-raw-v1", split="test")
+ dataset = datasets.load_dataset("multi_news", split="test")
def batch_iterator(batch_size=1000):
for i in range(0, len(dataset), batch_size):
- yield dataset[i: i + batch_size]["text"]
+ for key in dataset.features:
+ yield dataset[i: i + batch_size][key]
tokenizer.train_from_iterator(
batch_iterator(),
@@ -55,14 +52,32 @@ def batch_iterator(batch_size=1000):
)
)
+    tokenizer.enable_truncation(2 * MAX_SEQUENCE_LENGTH + 3)  # +3 for the [SEP] and the two other special tokens
+ tokenizer.post_processor = tokenizers.processors.TemplateProcessing(
+ single="$A",
+ pair="$A:0 [SEP] $B:1 ",
+ special_tokens=[
+ ("[SEP]", 1),
+ ("", 2),
+ ("", 3)
+ ]
+ )
+
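+    # Each (document, summary) pair is encoded together and split on the [SEP] token,
+    # so the document tokens feed the encoder and the summary tokens feed the decoder.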
def generator():
for record in dataset:
- if record['text'].strip() != '':
- for sentence in sent_tokenizer(record['text']):
- yield sentence
-
- data = tf.data.Dataset.from_generator(generator, output_signature=(tf.TensorSpec(shape=(None), dtype=tf.string)))
- data = data.map(tf.strings.strip, num_parallel_calls=tf.data.experimental.AUTOTUNE)
+ if record["document"] and record["summary"]:
+ enc, dec = ' '.join(tokenizer.encode(
+ record["document"],
+ pair=record["summary"]
+ ).tokens).split(' [SEP] ', maxsplit=2)
+
+ if enc.strip() != "" and dec != "":
+ yield enc, dec
+
+ data = tf.data.Dataset.from_generator(
+ generator,
+ output_signature=(tf.TensorSpec(shape=(None), dtype=tf.string), tf.TensorSpec(shape=(None), dtype=tf.string))
+ )
return tokenizer, data
@@ -71,38 +86,20 @@ def main(save_path: str):
os.mkdir(save_path)
tokenizer, data = train_tokenizer()
- tokenizer.enable_truncation(2 * 512 + 1) # encoding + decoding + [SEP] token
-
with open(f"{save_path}/metadata.tsv", "w") as tsv:
for token, _ in sorted(tokenizer.get_vocab().items(), key=lambda s: s[-1]):
tsv.write(f"{token}\n")
- def encode(x):
- def token_mapper(text: Union[str, List[str]]):
- text = text.numpy()
-
- if isinstance(text, np.ndarray):
- enc, dec = [], []
- for batch in tokenizer.encode_batch([(t.decode('utf8'), t.decode('utf8')) for t in text]):
- enc_, dec_ = ' '.join(batch.tokens).split(' [SEP] ')
- enc.append(enc_)
- dec.append(dec_)
- return (enc, dec)
-
- text = text.decode('utf8')
- enc, dec = ' '.join(tokenizer.encode(text, pair=text).tokens).split(' [SEP] ')
- return (enc, dec)
-
- return tf.py_function(token_mapper, inp=[x], Tout=[tf.string, tf.string])
-
model = TransformerAutoEncoder(
- max_sequence_len=512,
- embedding_size=128,
+ max_sequence_len=MAX_SEQUENCE_LENGTH,
+ embedding_size=EMBEDDING_SIZE,
token_hash=tokenizer.get_vocab(),
- input_keep_prob=0.7,
- hidden_keep_prob=0.5
+ input_drop_rate=0.2,
+ hidden_drop_rate=0.3
)
- model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=RampUpDecaySchedule(embedding_size=128)))
+
+ scheduler = RampUpDecaySchedule(EMBEDDING_SIZE, warmup_steps=4000)
+ model.compile(optimizer=optimizers.Adam(scheduler(0).numpy()))
checkpoint = tf.train.Checkpoint(Classifier=model, optimizer=model.optimizer)
checkpoint_manager = tf.train.CheckpointManager(checkpoint, save_path, max_to_keep=3)
@@ -117,18 +114,19 @@ def token_mapper(text: Union[str, List[str]]):
embeddings_config.metadata_path = f"{save_path}/metadata.tsv"
projector.visualize_embeddings(logdir=save_path, config=config)
- data = data.map(encode, num_parallel_calls=tf.data.experimental.AUTOTUNE)
model.fit(
x=data.prefetch(8).batch(64),
callbacks=[
- tf.keras.callbacks.TensorBoard(
- log_dir=save_path,
- write_graph=True,
- update_freq=100
- ),
- tf.keras.callbacks.LambdaCallback(on_epoch_end=lambda epoch, logs: checkpoint_manager.save())
+ callbacks.TensorBoard(log_dir=save_path, write_graph=True, update_freq=100),
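+            # checkpoint at the end of each epoch, and step the warmup/decay
+            # learning-rate schedule manually at the end of every batch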
+ callbacks.LambdaCallback(
+ on_epoch_end=lambda epoch, logs: checkpoint_manager.save(),
+ on_batch_end=lambda batch, logs: K.set_value(
+ model.optimizer.lr,
+ K.get_value(scheduler(model.optimizer.iterations))
+ )
+ )
],
- epochs=1
+ epochs=2
)
model.save(
@@ -142,4 +140,4 @@ def token_mapper(text: Union[str, List[str]]):
if __name__ == '__main__':
- main(save_path='./wiki_t2v')
+ main(save_path=f'{root}/../../multi_news_t2v')
diff --git a/setup.py b/setup.py
index 37d2ef0..7d85d2a 100644
--- a/setup.py
+++ b/setup.py
@@ -3,7 +3,7 @@
setup(
name="text2vec",
- version="1.4.0",
+ version="2.0.0",
description="Building blocks for text vectorization and embedding",
author="Dave Hollander",
author_url="https://github.com/brainsqueeze",
@@ -18,15 +18,8 @@
extras_require=dict(
serving=[
"flask",
- "flask-cors",
- "nltk",
"tornado"
]
),
packages=find_packages(exclude=["bin"]),
- entry_points={
- "console_scripts": [
- "text2vec_main=text2vec.bin.main:main",
- ],
- }
)
diff --git a/tests/lstm_auto_enc_test_fit.py b/tests/lstm_auto_enc_test_fit.py
index c56da83..4950472 100644
--- a/tests/lstm_auto_enc_test_fit.py
+++ b/tests/lstm_auto_enc_test_fit.py
@@ -92,8 +92,8 @@ def token_mapper(text: Union[str, List[str]]):
max_sequence_len=256,
embedding_size=256,
token_hash=tokenizer.get_vocab(),
- input_keep_prob=0.7,
- hidden_keep_prob=0.5
+ input_drop_rate=0.3,
+ hidden_drop_rate=0.5
)
model.compile(optimizer=tf.keras.optimizers.Adam(0.01), run_eagerly=True)
model.fit(x=data.prefetch(10).batch(16), epochs=1)
diff --git a/tests/transformer_auto_enc_test_fit.py b/tests/transformer_auto_enc_test_fit.py
index a4c8265..ed2c796 100644
--- a/tests/transformer_auto_enc_test_fit.py
+++ b/tests/transformer_auto_enc_test_fit.py
@@ -1,5 +1,5 @@
-import os
from typing import List, Union
+import os
import datasets
import tokenizers
@@ -12,6 +12,7 @@
import numpy as np
import tensorflow as tf
+from tensorflow.keras import optimizers
from text2vec.autoencoders import TransformerAutoEncoder
root = os.path.dirname(os.path.abspath(__file__))
@@ -85,21 +86,22 @@ def token_mapper(text: Union[str, List[str]]):
return tf.py_function(token_mapper, inp=[x], Tout=[tf.string, tf.string])
data = tf.data.Dataset.from_generator(data_gen, output_signature=(tf.TensorSpec(shape=(None), dtype=tf.string)))
- data = data.map(tf.strings.strip, num_parallel_calls=tf.data.experimental.AUTOTUNE)
- data = data.map(encode, num_parallel_calls=tf.data.experimental.AUTOTUNE)
+ data = data.map(tf.strings.strip, num_parallel_calls=tf.data.AUTOTUNE)
+ data = data.map(encode, num_parallel_calls=tf.data.AUTOTUNE)
model = TransformerAutoEncoder(
max_sequence_len=512,
embedding_size=128,
token_hash=tokenizer.get_vocab(),
- input_keep_prob=0.7,
- hidden_keep_prob=0.5
+ input_drop_rate=0.3,
+ hidden_drop_rate=0.5
)
- model.compile(optimizer=tf.keras.optimizers.Adam(), run_eagerly=True)
- model.fit(x=data.prefetch(10).batch(16), epochs=1)
+ model.compile(optimizer=optimizers.Adam(1e-4), run_eagerly=False)
+ model.fit(x=data.prefetch(10).batch(16), epochs=10)
- model(['here is a sentence', 'try another one'])
- model.predict(['here is a sentence', 'try another one'])
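+    # sanity check: cosine-similarity matrix of the context vectors for two test sentences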
+ x = model.embed(['this is about physics', 'this is not about physics'])["attention"]
+ x = tf.linalg.l2_normalize(x, axis=-1)
+ print(x.numpy() @ x.numpy().T)
return model
diff --git a/text2vec/__init__.py b/text2vec/__init__.py
index cf966ff..8a065e2 100644
--- a/text2vec/__init__.py
+++ b/text2vec/__init__.py
@@ -2,12 +2,10 @@
from . import autoencoders
from . import preprocessing
from . import optimizer_tools
-from . import training_tools
__all__ = [
'models',
'autoencoders',
'preprocessing',
- 'optimizer_tools',
- 'training_tools'
+ 'optimizer_tools'
]
diff --git a/text2vec/autoencoders.py b/text2vec/autoencoders.py
index 8997ecc..3c46748 100644
--- a/text2vec/autoencoders.py
+++ b/text2vec/autoencoders.py
@@ -1,18 +1,15 @@
# pylint: disable=too-many-ancestors
-from typing import Dict
+from typing import Dict, Optional
import tensorflow as tf
+from tensorflow.keras import layers, Model
-from text2vec.models.components.feeder import Tokenizer
-from text2vec.models.components.text_inputs import TokenEmbed
-from text2vec.models.components.text_inputs import Embed
-from text2vec.models.transformer import TransformerEncoder
-from text2vec.models.transformer import TransformerDecoder
-from text2vec.models.sequential import RecurrentEncoder
-from text2vec.models.sequential import RecurrentDecoder
+from text2vec.models.components.text_inputs import TokenEmbed, Embed, Tokenizer
+from text2vec.models.transformer import TransformerEncoder, TransformerDecoder
+from text2vec.models.sequential import RecurrentEncoder, RecurrentDecoder
-class TransformerAutoEncoder(tf.keras.Model):
+class TransformerAutoEncoder(Model):
"""Wrapper model class to combine the transformer based encoder-decoder training pipeline.
Parameters
@@ -27,12 +24,12 @@ class TransformerAutoEncoder(tf.keras.Model):
Size of the vocabulary. Set this if pre-computing token IDs to pass to the model, by default None
unknown_token : str, optional
The placeholder value for OOV terms, by default ''
- sep : int, optional
+ sep : str, optional
Token separator by default ' '
- input_keep_prob : float, optional
- Value between 0 and 1.0 which determines `1 - dropout_rate`, by default 1.0
- hidden_keep_prob : float, optional
- Hidden states dropout. Value between 0 and 1.0 which determines `1 - dropout_rate`, by default 1.0
+ input_drop_rate : float, optional
+ Value between 0 and 1.0, by default 0.
+ hidden_drop_rate : float, optional
+ Value between 0 and 1.0, by default 0.
Raises
------
@@ -54,8 +51,9 @@ class TransformerAutoEncoder(tf.keras.Model):
"""
def __init__(self, max_sequence_len: int, embedding_size: int,
- token_hash: dict = None, vocab_size: int = None, unknown_token: str = '', sep: int = ' ',
- input_keep_prob: float = 1.0, hidden_keep_prob: float = 1.0):
+ token_hash: Optional[dict] = None, vocab_size: Optional[int] = None,
+ unknown_token: str = '', sep: str = ' ',
+ input_drop_rate: float = 0, hidden_drop_rate: float = 0):
super().__init__()
if token_hash is None and vocab_size is None:
@@ -64,8 +62,8 @@ def __init__(self, max_sequence_len: int, embedding_size: int,
params = dict(
max_sequence_len=max_sequence_len,
embedding_size=embedding_size,
- input_keep_prob=input_keep_prob,
- hidden_keep_prob=hidden_keep_prob
+ input_drop_rate=input_drop_rate,
+ hidden_drop_rate=hidden_drop_rate
)
if token_hash is not None:
@@ -77,54 +75,51 @@ def __init__(self, max_sequence_len: int, embedding_size: int,
unknown_token=unknown_token
)
else:
- self.tokenizer = tf.keras.layers.Lambda(lambda x: x) # this is only for consistency, identity map
+ self.tokenizer = layers.Lambda(lambda x: x) # this is only for consistency, identity map
self.embed_layer = Embed(
vocab_size=vocab_size,
embedding_size=embedding_size,
max_sequence_len=max_sequence_len
)
- self.encode_layer = TransformerEncoder(n_stacks=1, layers=8, **params)
- self.decode_layer = TransformerDecoder(n_stacks=1, layers=8, **params)
-
- def call(self, tokens, **kwargs):
- tokens = self.tokenizer(tokens)
- x_enc, enc_mask, _ = self.embed_layer(tokens, **kwargs)
- x_enc, context = self.encode_layer(x_enc, mask=enc_mask, training=kwargs.get("training", False))
- return x_enc, context, enc_mask
+ self.encode_layer = TransformerEncoder(n_stacks=1, num_layers=8, **params)
+ self.decode_layer = TransformerDecoder(n_stacks=1, num_layers=8, **params)
+
+ def call(self, inputs, training: bool = False): # pylint: disable=missing-function-docstring
+ encoding_text = inputs[0]
+ decoding_text = inputs[1] if len(inputs) > 1 else encoding_text
+
+ encode_tokens = self.tokenizer(encoding_text)
+ x_embed, mask_encode, _ = self.embed_layer(encode_tokens, training=training)
+ x_encode, context = self.encode_layer(x_embed, mask=mask_encode, training=training)
+
+ decode_tokens = self.tokenizer(decoding_text)
+        x_decode, mask_decode, _ = self.embed_layer(decode_tokens[:, :-1])  # drop the final token from the decoder inputs
+ x_decode = self.decode_layer(
+ x_enc=x_encode,
+ x_dec=x_decode,
+ dec_mask=mask_decode,
+ context=context,
+ attention=self.encode_layer.attention,
+ training=training
+ )
- def train_step(self, data):
- encoding_tok, decoding_tok = data
- decoding_tok = self.tokenizer(decoding_tok)
+ return x_embed, x_decode, mask_decode, decode_tokens
+ def train_step(self, data): # pylint: disable=missing-function-docstring
with tf.GradientTape() as tape:
- with tf.name_scope('Encoding'):
- x_enc, context, enc_mask = self(encoding_tok, training=True)
-
- with tf.name_scope('Decoding'):
- targets = decoding_tok[:, 1:] # skip the token with the slice on axis=1
- if isinstance(self.embed_layer, TokenEmbed):
- targets = tf.ragged.map_flat_values(self.embed_layer.table.lookup, targets)
- targets = self.embed_layer.slicer(targets)
-
- decoding_tok, dec_mask, _ = self.embed_layer(decoding_tok[:, :-1]) # skip
- decoding_tok = self.decode_layer(
- x_enc=x_enc,
- enc_mask=enc_mask,
- x_dec=decoding_tok,
- dec_mask=dec_mask,
- context=context,
- attention=self.encode_layer.attention,
- training=True
- )
- decoding_tok = tf.tensordot(decoding_tok, self.embed_layer.embeddings, axes=[2, 1])
-
- loss = loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
- logits=decoding_tok,
+ _, x_decode, mask_decode, decode_tokens = self(data, training=True)
+
+            targets = decode_tokens[:, 1:]  # shift targets one step ahead of the decoder inputs along axis=1
+ if isinstance(self.embed_layer, TokenEmbed):
+ targets = tf.ragged.map_flat_values(self.embed_layer.table.lookup, targets)
+ targets = self.embed_layer.slicer(targets)
+
+ loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
+ logits=tf.tensordot(x_decode, self.embed_layer.embeddings, axes=[2, 1]),
labels=targets.to_tensor(default_value=0)
)
- loss = loss * dec_mask
- loss = tf.math.reduce_sum(loss, axis=1)
+ loss = loss * mask_decode
loss = tf.reduce_mean(loss)
gradients = tape.gradient(loss, self.trainable_variables)
@@ -134,36 +129,21 @@ def train_step(self, data):
return {"loss": loss, 'learning_rate': self.optimizer.learning_rate(self.optimizer.iterations)}
return {"loss": loss, 'learning_rate': self.optimizer.learning_rate}
- def test_step(self, data):
- encoding_tok, decoding_tok = data
- decoding_tok = self.tokenizer(decoding_tok)
+ def test_step(self, data): # pylint: disable=missing-function-docstring
+ _, x_decode, mask_decode, decode_tokens = self(data, training=False)
- with tf.name_scope('Encoding'):
- x_enc, context, enc_mask = self(encoding_tok, training=False)
-
- with tf.name_scope('Decoding'):
- targets = tf.ragged.map_flat_values(self.embed_layer.table.lookup, decoding_tok[:, 1:]) # skip
- targets = self.embed_layer.slicer(targets)
-
- decoding_tok, dec_mask, _ = self.embed_layer(decoding_tok[:, :-1]) # skip
- decoding_tok = self.decode_layer(
- x_enc=x_enc,
- enc_mask=enc_mask,
- x_dec=decoding_tok,
- dec_mask=dec_mask,
- context=context,
- attention=self.encode_layer.attention,
- training=False
- )
- decoding_tok = tf.tensordot(decoding_tok, self.embed_layer.embeddings, axes=[2, 1])
+        targets = decode_tokens[:, 1:]  # shift targets one step ahead of the decoder inputs along axis=1
+ if isinstance(self.embed_layer, TokenEmbed):
+ targets = tf.ragged.map_flat_values(self.embed_layer.table.lookup, targets)
+ targets = self.embed_layer.slicer(targets)
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
- logits=decoding_tok,
+ logits=tf.tensordot(x_decode, self.embed_layer.embeddings, axes=[2, 1]),
labels=targets.to_tensor(default_value=0)
)
- loss = loss * dec_mask
- loss = tf.math.reduce_sum(loss, axis=1)
+ loss = loss * mask_decode
loss = tf.reduce_mean(loss)
+
return {"loss": loss, **{m.name: m.result() for m in self.metrics}}
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
@@ -182,11 +162,13 @@ def embed(self, sentences) -> Dict[str, tf.Tensor]:
and (batch_size, max_sequence_len, embedding_size) respectively.
"""
- sequences, attention, _ = self(sentences, training=False)
- return {"sequences": sequences, "attention": attention}
+ tokens = self.tokenizer(sentences)
+ x, mask, _ = self.embed_layer(tokens, training=False)
+ x, context = self.encode_layer(x, mask=mask, training=False)
+ return {"sequences": x, "attention": context}
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
- def token_embed(self, sentences) -> Dict[str, tf.Tensor]:
+ def token_embed(self, sentences) -> Dict[str, tf.RaggedTensor]:
"""Takes batches of free text and returns word embeddings along with the associate token.
Parameters
@@ -196,19 +178,16 @@ def token_embed(self, sentences) -> Dict[str, tf.Tensor]:
Returns
-------
- Dict[str, tf.Tensor]
- Padded tokens and embedding vectors with shapes (batch_size, max_sequence_len)
- and (batch_size, max_sequence_len, embedding_size) respectively.
+ Dict[str, tf.RaggedTensor]
+ Ragged tokens and embedding tensors with shapes (batch_size, None)
+ and (batch_size, None, embedding_size) respectively.
"""
tokens = self.tokenizer(sentences)
- return {
- "tokens": tokens.to_tensor('>'),
- "embeddings": self.embed_layer.get_embedding(tokens).to_tensor(0)
- }
+ return {"tokens": tokens, "embeddings": self.embed_layer.get_embedding(tokens)}
-class LstmAutoEncoder(tf.keras.Model):
+class LstmAutoEncoder(Model):
"""Wrapper model class to combine the LSTM based encoder-decoder training pipeline.
Parameters
@@ -225,12 +204,12 @@ class LstmAutoEncoder(tf.keras.Model):
Size of the vocabulary. Set this if pre-computing token IDs to pass to the model, by default None
unknown_token : str, optional
The placeholder value for OOV terms, by default ''
- sep : int, optional
+ sep : str, optional
Token separator by default ' '
- input_keep_prob : float, optional
- Value between 0 and 1.0 which determines `1 - dropout_rate`, by default 1.0
- hidden_keep_prob : float, optional
- Hidden states dropout. Value between 0 and 1.0 which determines `1 - dropout_rate`, by default 1.0
+ input_drop_rate : float, optional
+ Value between 0 and 1.0, by default 0.
+ hidden_drop_rate : float, optional
+ Value between 0 and 1.0, by default 0.
Raises
------
@@ -252,8 +231,9 @@ class LstmAutoEncoder(tf.keras.Model):
"""
def __init__(self, max_sequence_len: int, embedding_size: int, num_hidden: int = 64,
- token_hash: dict = None, vocab_size: int = None, unknown_token: str = '', sep: int = ' ',
- input_keep_prob: float = 1.0, hidden_keep_prob: float = 1.0):
+ token_hash: Optional[dict] = None, vocab_size: Optional[int] = None,
+ unknown_token: str = '', sep: str = ' ',
+ input_drop_rate: float = 0., hidden_drop_rate: float = 0.):
super().__init__()
if token_hash is None and vocab_size is None:
@@ -262,8 +242,8 @@ def __init__(self, max_sequence_len: int, embedding_size: int, num_hidden: int =
params = dict(
max_sequence_len=max_sequence_len,
embedding_size=embedding_size,
- input_keep_prob=input_keep_prob,
- hidden_keep_prob=hidden_keep_prob
+ input_drop_rate=input_drop_rate,
+ hidden_drop_rate=hidden_drop_rate
)
if token_hash is not None:
@@ -275,7 +255,7 @@ def __init__(self, max_sequence_len: int, embedding_size: int, num_hidden: int =
unknown_token=unknown_token
)
else:
- self.tokenizer = tf.keras.layers.Lambda(lambda x: x) # this is only for consistency, identity map
+ self.tokenizer = layers.Lambda(lambda x: x) # this is only for consistency, identity map
self.embed_layer = Embed(
vocab_size=vocab_size,
embedding_size=embedding_size,
@@ -285,44 +265,42 @@ def __init__(self, max_sequence_len: int, embedding_size: int, num_hidden: int =
self.encode_layer = RecurrentEncoder(num_hidden=num_hidden, **params)
self.decode_layer = RecurrentDecoder(num_hidden=num_hidden, **params)
- def call(self, tokens, **kwargs):
- tokens = self.tokenizer(tokens)
- x_enc, enc_mask, _ = self.embed_layer(tokens, **kwargs)
- x_enc, context, *states = self.encode_layer(x_enc, mask=enc_mask, training=kwargs.get("training", False))
- return x_enc, context, enc_mask, states
+ def call(self, inputs, training: bool = False): # pylint: disable=missing-function-docstring
+ encoding_text = inputs[0]
+ decoding_text = inputs[1] if len(inputs) > 1 else encoding_text
+
+ encode_tokens = self.tokenizer(encoding_text)
+ x_embed, mask_encode, _ = self.embed_layer(encode_tokens, training=training)
+ x_encode, context, *states = self.encode_layer(x_embed, mask=mask_encode, training=training)
+
+ decode_tokens = self.tokenizer(decoding_text)
+        x_decode, mask_decode, _ = self.embed_layer(decode_tokens[:, :-1])  # drop the final token from the decoder inputs
+ x_decode = self.decode_layer(
+ x_enc=x_encode,
+ x_dec=x_decode,
+ dec_mask=mask_decode,
+ context=context,
+ # attention=self.encode_layer.attention,
+ initial_state=states,
+ training=training
+ )
- def train_step(self, data):
- encoding_tok, decoding_tok = data
- decoding_tok = self.tokenizer(decoding_tok)
+ return x_embed, x_decode, mask_decode, decode_tokens
+ def train_step(self, data): # pylint: disable=missing-function-docstring
with tf.GradientTape() as tape:
- with tf.name_scope('Encoding'):
- x_enc, context, enc_mask, states = self(encoding_tok, training=True)
-
- with tf.name_scope('Decoding'):
- targets = decoding_tok[:, 1:] # skip the token with the slice on axis=1
- if isinstance(self.embed_layer, TokenEmbed):
- targets = tf.ragged.map_flat_values(self.embed_layer.table.lookup, targets)
- targets = self.embed_layer.slicer(targets)
-
- decoding_tok, dec_mask, _ = self.embed_layer(decoding_tok[:, :-1])
- decoding_tok = self.decode_layer(
- x_enc=x_enc,
- enc_mask=enc_mask,
- x_dec=decoding_tok,
- dec_mask=dec_mask,
- context=context,
- initial_state=states,
- training=True
- )
- decoding_tok = tf.tensordot(decoding_tok, self.embed_layer.embeddings, axes=[2, 1])
-
- loss = loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
- logits=decoding_tok,
+ _, x_decode, mask_decode, decode_tokens = self(data, training=True)
+
+            targets = decode_tokens[:, 1:]  # shift targets one step ahead of the decoder inputs along axis=1
+ if isinstance(self.embed_layer, TokenEmbed):
+ targets = tf.ragged.map_flat_values(self.embed_layer.table.lookup, targets)
+ targets = self.embed_layer.slicer(targets)
+
+ loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
+ logits=tf.tensordot(x_decode, self.embed_layer.embeddings, axes=[2, 1]),
labels=targets.to_tensor(default_value=0)
)
- loss = loss * dec_mask
- loss = tf.math.reduce_sum(loss, axis=1)
+ loss = loss * mask_decode
loss = tf.reduce_mean(loss)
gradients = tape.gradient(loss, self.trainable_variables)
@@ -332,37 +310,19 @@ def train_step(self, data):
return {"loss": loss, 'learning_rate': self.optimizer.learning_rate(self.optimizer.iterations)}
return {"loss": loss, 'learning_rate': self.optimizer.learning_rate}
- def test_step(self, data):
- encoding_tok, decoding_tok = data
- decoding_tok = self.tokenizer(decoding_tok)
-
- with tf.name_scope('Encoding'):
- x_enc, context, enc_mask, states = self(encoding_tok, training=False)
-
- with tf.name_scope('Decoding'):
- targets = decoding_tok[:, 1:] # skip the token with the slice on axis=1
- if isinstance(self.embed_layer, TokenEmbed):
- targets = tf.ragged.map_flat_values(self.embed_layer.table.lookup, targets)
- targets = self.embed_layer.slicer(targets)
+ def test_step(self, data): # pylint: disable=missing-function-docstring
+ _, x_decode, mask_decode, decode_tokens = self(data, training=False)
- decoding_tok, dec_mask, _ = self.embed_layer(decoding_tok[:, :-1])
- decoding_tok = self.decode_layer(
- x_enc=x_enc,
- enc_mask=enc_mask,
- x_dec=decoding_tok,
- dec_mask=dec_mask,
- context=context,
- initial_state=states,
- training=False
- )
- decoding_tok = tf.tensordot(decoding_tok, self.embed_layer.embeddings, axes=[2, 1])
+        targets = decode_tokens[:, 1:]  # shift targets one step ahead of the decoder inputs along axis=1
+ if isinstance(self.embed_layer, TokenEmbed):
+ targets = tf.ragged.map_flat_values(self.embed_layer.table.lookup, targets)
+ targets = self.embed_layer.slicer(targets)
- loss = loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
- logits=decoding_tok,
+ loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
+ logits=tf.tensordot(x_decode, self.embed_layer.embeddings, axes=[2, 1]),
labels=targets.to_tensor(default_value=0)
)
- loss = loss * dec_mask
- loss = tf.math.reduce_sum(loss, axis=1)
+ loss = loss * mask_decode
loss = tf.reduce_mean(loss)
return {"loss": loss, **{m.name: m.result() for m in self.metrics}}
@@ -383,11 +343,13 @@ def embed(self, sentences) -> Dict[str, tf.Tensor]:
and (batch_size, max_sequence_len, embedding_size) respectively.
"""
- sequences, attention, *args = self(sentences, training=False)
- return {"sequences": sequences, "attention": attention}
+ tokens = self.tokenizer(sentences)
+ x, mask, _ = self.embed_layer(tokens, training=False)
+ x, context, *_ = self.encode_layer(x, mask=mask, training=False)
+ return {"sequences": x, "attention": context}
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
- def token_embed(self, sentences) -> Dict[str, tf.Tensor]:
+ def token_embed(self, sentences) -> Dict[str, tf.RaggedTensor]:
"""Takes batches of free text and returns word embeddings along with the associate token.
Parameters
@@ -397,13 +359,13 @@ def token_embed(self, sentences) -> Dict[str, tf.Tensor]:
Returns
-------
- Dict[str, tf.Tensor]
- Padded tokens and embedding vectors with shapes (batch_size, max_sequence_len)
- and (batch_size, max_sequence_len, embedding_size) respectively.
+ Dict[str, tf.RaggedTensor]
+ Ragged tokens and embedding tensors with shapes (batch_size, None)
+ and (batch_size, None, embedding_size) respectively.
"""
tokens = self.tokenizer(sentences)
return {
- "tokens": tokens.to_tensor('>'),
- "embeddings": self.embed_layer.get_embedding(tokens).to_tensor(0)
+ "tokens": tokens,
+ "embeddings": self.embed_layer.get_embedding(tokens)
}
diff --git a/text2vec/bin/__init__.py b/text2vec/bin/__init__.py
deleted file mode 100644
index e69de29..0000000
diff --git a/text2vec/bin/main.py b/text2vec/bin/main.py
deleted file mode 100644
index 1086eca..0000000
--- a/text2vec/bin/main.py
+++ /dev/null
@@ -1,266 +0,0 @@
-import itertools
-import argparse
-import os
-
-import yaml
-
-import numpy as np
-import tensorflow as tf
-from tensorboard.plugins import projector
-
-from text2vec.training_tools import EncodingModel
-from text2vec.training_tools import ServingModel
-from text2vec.training_tools import sequence_cost
-from text2vec.training_tools import vector_cost
-from text2vec.optimizer_tools import RampUpDecaySchedule
-from text2vec.preprocessing.text import clean_and_split
-from text2vec.preprocessing import utils as data_tools
-from . import utils
-
-root = os.path.dirname(os.path.abspath(__file__))
-os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'
-
-
-def train(model_folder, num_tokens=10000, embedding_size=256, num_hidden=128, max_allowed_seq=-1,
- layers=8, batch_size=32, num_epochs=10, data_files=None, model_path=".", use_attention=False,
- eval_sentences=None, orthogonal_cost=False):
- """Core training algorithm.
-
- Parameters
- ----------
- model_folder : str
- Name of the folder to create for the trained model
- num_tokens : int, optional
- Number of vocab tokens to keep from the training corpus, by default 10000
- embedding_size : int, optional
- Size of the word-embedding dimensions, by default 256
- num_hidden : int, optional
- Number of hidden model dimensions, by default 128
- max_allowed_seq : int, optional
- The maximum sequence length allowed, model will truncate if longer, by default -1
- layers : int, optional
- Number of multi-head attention mechanisms for transformer model, by default 8
- batch_size : int, optional
- Size of each mini-batch, by default 32
- num_epochs : int, optional
- Number of training epochs, by default 10
- data_files : list, optional
- List of absolute paths to training data sets, by default None
- model_path : str, optional
- Valid path to where the model will be saved, by default "."
- use_attention : bool, optional
- Set to True to use the self-attention only model, by default False
- eval_sentences : List, optional
- List of sentences to check the context angles, by default None
- orthogonal_cost : bool, optional
- Set to True to add a cost to mutually parallel context vector, by default False
-
- Returns
- -------
- str
- Model checkpoint file path.
- """
-
- # GPU config
- for gpu in tf.config.experimental.list_physical_devices('GPU'):
- # tf.config.experimental.set_memory_growth(gpu, True)
- tf.config.experimental.set_memory_growth(gpu, False)
- tf.config.set_soft_device_placement(True)
-
- log_dir = f"{model_path}/{model_folder}" if model_path else f"{root}/../../text2vec/{model_folder}"
- if not os.path.exists(log_dir):
- os.mkdir(log_dir)
-
- utils.log("Fetching corpus and creating data pipeline")
- corpus = data_tools.load_text_files(data_files=data_files, max_length=max_allowed_seq)
-
- utils.log("Fitting embedding lookup", end="...")
- hash_map, max_seq_len, train_set_size = data_tools.get_top_tokens(corpus, n_top=num_tokens)
- print(f"{train_set_size} sentences. max sequence length: {max_seq_len}")
-
- with open(log_dir + "/metadata.tsv", "w") as tsv:
- for token, _ in sorted(hash_map.items(), key=lambda s: s[-1]):
- # since tensorflow converts strings to byets we will decode from UTF-8 here for display purposes
- tsv.write(f"{token.decode('utf8', 'replace')}\n")
- tsv.write("\n")
-
- utils.log("Building computation graph")
- log_step = (train_set_size // batch_size) // 25
- dims = embedding_size
-
- params = dict(
- max_sequence_len=max_seq_len,
- embedding_size=dims,
- input_keep_prob=0.9,
- hidden_keep_prob=0.75
- )
- if use_attention:
- model = EncodingModel(token_hash=hash_map, layers=layers, **params)
- else:
- model = EncodingModel(token_hash=hash_map, num_hidden=num_hidden, recurrent=True, **params)
-
- warmup_steps = max(train_set_size // batch_size, 4000)
- learning_rate = RampUpDecaySchedule(embedding_size=dims, warmup_steps=warmup_steps)
- optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
- train_loss = tf.keras.metrics.Mean('train-loss', dtype=tf.float32)
-
- def compute_loss(sentences):
- y_hat, time_steps, targets, vectors = model(sentences, training=True, return_vectors=True)
- loss_val = sequence_cost(
- target_sequences=targets,
- sequence_logits=y_hat[:, :time_steps],
- num_labels=model.embed_layer.num_labels,
- smoothing=False
- )
-
- if orthogonal_cost:
- return loss_val + vector_cost(context_vectors=vectors)
- return loss_val
-
- @tf.function(input_signature=[tf.TensorSpec(shape=(None,), dtype=tf.string)])
- def train_step(sentences):
- loss_val = compute_loss(sentences)
- gradients = tf.gradients(loss_val, model.trainable_variables)
- optimizer.apply_gradients(zip(gradients, model.trainable_variables))
- train_loss(loss_val) # log the loss value to TensorBoard
-
- model_file_name = None
- if isinstance(eval_sentences, list) and len(eval_sentences) > 1:
- test_sentences = eval_sentences
- else:
- test_sentences = ["The movie was great!", "The movie was terrible."]
- test_tokens = [' '.join(clean_and_split(text)) for text in test_sentences]
-
- summary_writer_train = tf.summary.create_file_writer(log_dir + "/training")
- summary_writer_dev = tf.summary.create_file_writer(log_dir + "/validation")
- checkpoint = tf.train.Checkpoint(EmbeddingModel=model, optimizer=optimizer)
- checkpoint_manager = tf.train.CheckpointManager(checkpoint, log_dir, max_to_keep=5)
-
- # add word labels to the projector
- config = projector.ProjectorConfig()
- # pylint: disable=no-member
- embeddings_config = config.embeddings.add()
-
- checkpoint_manager.save()
- reader = tf.train.load_checkpoint(log_dir)
- embeddings_config.tensor_name = [key for key in reader.get_variable_to_shape_map() if "embedding" in key][0]
- embeddings_config.metadata_path = log_dir + "/metadata.tsv"
- projector.visualize_embeddings(logdir=log_dir + "/training", config=config)
-
- step = 1
- for epoch in range(num_epochs):
- try:
- corpus = corpus.unbatch()
- except ValueError:
- print("Corpus not batched")
- corpus = corpus.shuffle(train_set_size)
- corpus = corpus.batch(batch_size).prefetch(10) # pre-fetch 10 batches for queuing
-
- print(f"\t Epoch: {epoch + 1}")
- i = 1
- train_loss.reset_states()
-
- for x in corpus:
- if step == 1:
- tf.summary.trace_on(graph=True, profiler=False)
-
- train_step(x)
- with summary_writer_train.as_default():
- if step == 1:
- tf.summary.trace_export(name='graph', step=1, profiler_outdir=log_dir)
- tf.summary.trace_off()
- summary_writer_train.flush()
-
- if i % log_step == 0:
- print(f"\t\t iteration {i} - loss: {train_loss.result()}")
- tf.summary.scalar(name='loss', data=train_loss.result(), step=step)
- tf.summary.scalar(name='learning-rate', data=learning_rate.callback(step=step), step=step)
- summary_writer_train.flush()
- train_loss.reset_states()
- i += 1
- step += 1
-
- vectors = model.embed(test_tokens)
- angles = utils.compute_angles(vectors.numpy())
-
- with summary_writer_dev.as_default():
- for idx, (i, j) in enumerate(itertools.combinations(range(len(test_sentences)), r=2), start=1):
- angle = angles[i, j]
- print(f"The angle between '{test_sentences[i]}' and '{test_sentences[j]}' is {angle} degrees")
-
- # log the angle to tensorboard
- desc = f"'{test_sentences[i]}' : '{test_sentences[j]}'"
- tf.summary.scalar(f'similarity-angle/{idx}', angle, step=step, description=desc)
- summary_writer_dev.flush()
- model_file_name = checkpoint_manager.save()
-
- utils.log("Saving a frozen model")
- serve_model_ = ServingModel(embed_layer=model.embed_layer, encode_layer=model.encode_layer, sep=' ')
- tf.saved_model.save(
- obj=serve_model_,
- export_dir=f"{log_dir}/frozen/1",
- signatures={"serving_default": serve_model_.embed, "token_embed": serve_model_.token_embed}
- )
-
- utils.log("Reloading frozen model and comparing output to in-memory model")
- test = tf.saved_model.load(f"{log_dir}/frozen/1")
- test_model = test.signatures["serving_default"]
- test_output = test_model(tf.constant(test_tokens))["output_0"]
- utils.log(f"Outputs on CV set are approximately the same?: {np.allclose(test_output, model.embed(test_tokens))}")
- return model_file_name
-
-
-def main():
- """Training and inferencing entrypoint for CLI.
-
- Raises
- ------
- NotImplementedError
- Raised if a `run` mode other than `train` or `infer` are passed.
- """
-
- parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
- parser.add_argument("--run", choices=["train", "infer"], help="Run type.", required=True)
- parser.add_argument("--attention", action='store_true', help="Set to use attention transformer model.")
- parser.add_argument("--orthogonal", action='store_true', help="Set to add a cost to mutually parallel contexts.")
- parser.add_argument("--yaml_config", type=str, help="Path to a valid training config YAML file.", required=True)
- args = parser.parse_args()
-
- config_path = args.yaml_config
- if config_path.startswith("${HOME}"):
- config_path = config_path.replace('${HOME}', os.getenv('HOME'))
- elif config_path.startswith("$HOME"):
- config_path = config_path.replace('$HOME', os.getenv('HOME'))
-
- config = yaml.safe_load(open(config_path, 'r'))
- training_config = config.get("training", {})
- model_config = config.get("model", {})
- model_params = model_config.get("parameters", {})
-
- if args.run == "train":
- train(
- model_folder=model_config["name"],
- use_attention=args.attention,
- num_tokens=training_config.get("tokens", 10000),
- max_allowed_seq=training_config.get("max_sequence_length", 512),
- embedding_size=model_params.get("embedding", 128),
- num_hidden=model_params.get("hidden", 128),
- layers=model_params.get("layers", 8),
- batch_size=training_config.get("batch_size", 32),
- num_epochs=training_config.get("epochs", 20),
- data_files=training_config.get("data_files"),
- model_path=model_config.get("storage_dir", "."),
- eval_sentences=training_config.get("eval_sentences"),
- orthogonal_cost=args.orthogonal
- )
- elif args.run == "infer":
- os.environ["MODEL_PATH"] = f'{model_config.get("storage_dir", ".")}/{model_config["name"]}'
- from .text_summarize import run_server
- run_server(port=8008)
- else:
- raise NotImplementedError("Only training and inferencing is enabled right now.")
- return
-
-
-if __name__ == '__main__':
- main()
diff --git a/text2vec/bin/serving_tools.py b/text2vec/bin/serving_tools.py
deleted file mode 100644
index 8305c13..0000000
--- a/text2vec/bin/serving_tools.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import os
-
-from nltk.tokenize import sent_tokenize
-
-import tensorflow as tf
-from text2vec.preprocessing.text import normalize_text, clean_and_split
-
-
-class Embedder():
- """Wrapper class which handles contextual embedding of documents.
- """
-
- def __init__(self):
- log_dir = f"{os.environ['MODEL_PATH']}/frozen/1"
- self.__model = tf.saved_model.load(log_dir)
-
- @staticmethod
- def __get_sentences(text):
- data = [(sent, ' '.join(clean_and_split(normalize_text(sent)))) for sent in sent_tokenize(text)]
- data = [(orig, clean) for orig, clean in data if len(clean.split()) >= 5]
- original, clean = map(list, zip(*data))
- return original, clean
-
- def __normalize(self, vectors: tf.Tensor):
- return tf.math.l2_normalize(vectors, axis=-1).numpy()
-
- def __doc_vector(self, doc: tf.Tensor):
- net_vector = tf.reduce_sum(doc, axis=0)
- return self.__normalize(net_vector)
-
- def __embed(self, corpus: list):
- return self.__model.embed(corpus)
-
- def embed(self, text: str):
- """String preparation and embedding. Returns the context vector representing the input document.
-
- Parameters
- ----------
- text : str
-
- Returns
- -------
- (list, tf.Tensor, tf.Tensor)
- (
- Segmented sentences,
- L2-normalized context vectors (num_sentences, embedding_size),
- Single unit vector representing the entire document (embedding_size,)
- )
- """
-
- sentences, clean_sentences = self.__get_sentences(text)
- vectors = self.__embed(clean_sentences)
- return sentences, self.__normalize(vectors), self.__doc_vector(vectors)
diff --git a/text2vec/bin/text_summarize.py b/text2vec/bin/text_summarize.py
deleted file mode 100644
index b9a6428..0000000
--- a/text2vec/bin/text_summarize.py
+++ /dev/null
@@ -1,232 +0,0 @@
-import time
-import json
-
-from flask import Flask, request, Response
-from flask_cors import cross_origin
-
-from tornado.httpserver import HTTPServer
-from tornado.wsgi import WSGIContainer
-from tornado.ioloop import IOLoop
-import tornado.autoreload
-import tornado
-
-import numpy as np
-from .serving_tools import Embedder
-
-app = Flask(__name__)
-model = Embedder()
-
-
-def responder(results, error, message):
- """Boilerplate Flask response item.
-
- Parameters
- ----------
- results : dict
- API response
- error : int
- Error code
- message : str
- Message to send to the client
-
- Returns
- -------
- flask.Reponse
- """
-
- assert isinstance(results, dict)
- results["message"] = message
- results = json.dumps(results, indent=2)
-
- return Response(
- response=results,
- status=error,
- mimetype="application/json"
- )
-
-
-def cosine_similarity_sort(net_vector, embedding_matrix):
- """
- Computes the cosine similarity scores and then returns
- the sorted results
-
- Parameters
- ----------
- net_vector : np.ndarray
- The context vector for the entire document
- embedding_matrix : np.ndarray
- The context vectors (row vectors) for each constituent body of text
-
- Returns
- -------
- (ndarray, ndarray)
- (sorted order of documents, cosine similarity scores)
- """
-
- similarity = np.dot(embedding_matrix, net_vector)
- similarity = np.clip(similarity, -1, 1)
- # sort = np.argsort(1 - similarity)
- sort = np.argsort(similarity - 1)
-
- return sort, similarity.flatten()[sort]
-
-
-def angle_from_cosine(cosine_similarity):
- """
- Computes the angles in degrees from cosine similarity scores
-
- Parameters
- ----------
- cosine_similarity : np.ndarray
-
- Returns
- -------
- ndarray
- Cosine angles (num_sentences,)
- """
-
- return np.arccos(cosine_similarity) * (180 / np.pi)
-
-
-def choose(sentences, scores, embeddings):
- """
- Selects the best constituent texts from the similarity scores
-
- Parameters
- ----------
- sentences : np.ndarray
- Array of the input texts, sorted by scores.
- scores : np.ndarray
- Cosine similarity scores, sorted
- embeddings : np.ndarray
- Embedding matrix for input texts, sorted by scores
-
- Returns
- -------
- (np.ndarray, np.ndarray, np.ndarray)
- (best sentences sorted, best scores sorted, best embeddings sorted)
- """
-
- if scores.shape[0] == 1:
- return sentences, scores, embeddings
-
- angles = angle_from_cosine(scores)
- cut = angles < angles.mean() - angles.std()
- return sentences[cut], scores[cut], embeddings[cut]
-
-
-def text_pass_filter(texts, texts_embeddings, net_vector):
- """
- Runs the scoring + filtering process on input texts
-
- Parameters
- ----------
- texts : np.ndarray
- Input texts.
- texts_embeddings : np.ndarray
- Context embedding matrix for input texts.
- net_vector : np.ndarray
- The context vector for the entire document
-
- Returns
- -------
- (np.ndarray, np.ndarray, np.ndarray)
- (best sentences sorted, best scores sorted, best embeddings sorted)
- """
-
- sorted_order, scores = cosine_similarity_sort(net_vector=net_vector, embedding_matrix=texts_embeddings)
- texts = np.array(texts)[sorted_order]
- filtered_texts, filtered_scores, filtered_embeddings = choose(
- sentences=texts,
- scores=scores,
- embeddings=texts_embeddings[sorted_order]
- )
-
- return filtered_texts, filtered_scores, filtered_embeddings
-
-
-def softmax(logits):
- """
- Computes the softmax of the input logits.
-
- Parameters
- ----------
- logits : np.ndarray
-
- Returns
- -------
- np.ndarray
- Softmax output array with the same shape as the input.
- """
-
- soft = np.exp(logits)
- soft[np.isinf(soft)] = 1e10
- soft /= np.sum(soft, axis=0)
- soft = np.clip(soft, 0.0, 1.0)
- return soft
-
-
-@app.route('/condense', methods=['POST', 'GET'])
-@cross_origin(origins=['*'], allow_headers=['Content-Type', 'Authorization'])
-def compute():
- """
- Main Flask handler function
-
- Returns
- -------
- flask.Response
- """
-
- j = request.get_json()
- if j is None:
- j = request.args
- if not j:
- j = request.form
-
- st = time.time()
- body = j.get("body", "")
- if not body:
- results = {
- "elapsed_time": time.time() - st,
- "data": None
- }
- return responder(results=results, error=400, message="No text provided")
-
- # get the embedding vectors for each sentence in the document
- sentences, vectors, doc_vector = model.embed(body)
- top_sentences, top_scores, _ = text_pass_filter(texts=sentences, texts_embeddings=vectors, net_vector=doc_vector)
-
- results = {
- "elapsed_time": time.time() - st,
- "data": [{
- "text": text,
- "relevanceScore": score
- } for text, score in zip(top_sentences, top_scores.astype(float))]
- }
- return responder(results=results, error=200, message="Success")
-
-
-def run_server(port=8008):
- """This initializes the Tornad WSGI server to allow robust request handling.
-
- Parameters
- ----------
- port : int, optional
- Port number to serve the app on, by default 8008
- """
-
- http_server = HTTPServer(WSGIContainer(app))
- http_server.listen(port)
-
- io_loop = IOLoop.instance()
- tornado.autoreload.start(check_time=500)
- print("Listening to port", port)
-
- try:
- io_loop.start()
- except KeyboardInterrupt:
- pass
-
-
-if __name__ == '__main__':
- run_server(port=8008)
diff --git a/text2vec/bin/utils.py b/text2vec/bin/utils.py
deleted file mode 100644
index 57ac1ea..0000000
--- a/text2vec/bin/utils.py
+++ /dev/null
@@ -1,26 +0,0 @@
-import numpy as np
-
-
-def log(message, **kwargs):
- print(f"[INFO] {message}", flush=True, end=kwargs.get("end", "\n"))
-
-
-def compute_angles(vectors):
- """Computes the angles between vectors
-
- Parameters
- ----------
- vectors : np.ndarray
- (batch_size, embedding_size)
-
- Returns
- -------
- np.ndarray
- Cosine angles in degrees (batch_size, batch_size)
- """
-
- vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)
- cosine = np.dot(vectors, vectors.T)
- cosine = np.clip(cosine, -1, 1)
- degrees = np.arccos(cosine) * (180 / np.pi)
- return degrees
diff --git a/text2vec/models/__init__.py b/text2vec/models/__init__.py
index c2c562f..74ed7f8 100644
--- a/text2vec/models/__init__.py
+++ b/text2vec/models/__init__.py
@@ -1,8 +1,7 @@
from text2vec.models.transformer import TransformerEncoder, TransformerDecoder
from text2vec.models.sequential import RecurrentEncoder, RecurrentDecoder
-from text2vec.models.components.feeder import TextInput, Tokenizer
-from text2vec.models.components.text_inputs import Embed, TokenEmbed
+from text2vec.models.components.text_inputs import Embed, TokenEmbed, Tokenizer
from text2vec.models.components.attention import BahdanauAttention, MultiHeadAttention
from text2vec.models.components.feed_forward import PositionWiseFFN
from text2vec.models.components.recurrent import BidirectionalLSTM
diff --git a/text2vec/models/components/attention.py b/text2vec/models/components/attention.py
index 07cbfa3..50400ec 100644
--- a/text2vec/models/components/attention.py
+++ b/text2vec/models/components/attention.py
@@ -1,7 +1,10 @@
+from typing import Optional
+
import tensorflow as tf
+from tensorflow.keras import layers, initializers
-class ScaledDotAttention(tf.keras.layers.Layer):
+class ScaledDotAttention(layers.Layer):
"""Scaled dot attention layer which computes
```
softmax(Query * permutedim(Key, (3, 1, 2)) / sqrt(dk)) * permutedim(Value, (2, 1, 3))
@@ -35,22 +38,23 @@ class ScaledDotAttention(tf.keras.layers.Layer):
def __init__(self):
super().__init__(name="ScaledDotAttention")
+ self.neg_inf = tf.constant(-1e9, dtype=tf.float32)
- def call(self, query, key, value, mask_future=False):
- with tf.name_scope("ScaledDotAttention"):
- numerator = tf.einsum('ijk,ilk->ijl', query, key)
- denominator = tf.sqrt(tf.cast(tf.shape(key)[-1], tf.float32))
+ # pylint: disable=missing-function-docstring
+ def call(self, query, key, value, mask_future: bool = False):
+ numerator = tf.matmul(query, key, transpose_b=True)
+ denominator = tf.sqrt(tf.cast(tf.shape(key)[-1], tf.float32))
- if mask_future:
- upper = (1 + 1e9) * tf.linalg.band_part(tf.ones_like(numerator), num_lower=0, num_upper=-1)
- mask = 1 - upper
- numerator *= mask
+ if mask_future:
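+ # additively mask future positions (the strictly upper triangle) with -1e9 so they contribute ~0 after the softmax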
+ upper = tf.linalg.band_part(tf.ones(tf.shape(numerator)[1:], dtype=tf.float32), num_lower=0, num_upper=-1)
+ diag = tf.linalg.band_part(upper, num_lower=0, num_upper=0)
+ numerator += (self.neg_inf * (upper - diag))
- x = tf.nn.softmax(numerator / denominator)
- return tf.einsum('ijk,ikl->ijl', x, value)
+ x = tf.nn.softmax(numerator / denominator)
+ return tf.matmul(x, value)
-class BahdanauAttention(tf.keras.layers.Layer):
+class BahdanauAttention(layers.Layer):
"""Layer which computes the Bahdanau attention mechanism either as a self-attention or as
a encoder-decoder attention.
@@ -62,6 +66,8 @@ class BahdanauAttention(tf.keras.layers.Layer):
----------
size : int
The dimensionality of the hidden attention weights. This is the same as the word-embedding dimensionality.
+ drop_rate : float, optional
+ Value between 0 and 1.0, performs dropout on the attention weights, by default 0.
Examples
--------
@@ -82,38 +88,31 @@ class BahdanauAttention(tf.keras.layers.Layer):
```
"""
- def __init__(self, size):
+ def __init__(self, size: int, drop_rate: float = 0.):
super().__init__(name="BahdanauAttention")
- initializer = tf.keras.initializers.GlorotUniform()
- self.W = tf.Variable(
- initializer(shape=(size, size)),
- name='weight',
- dtype=tf.float32,
- trainable=True
- )
- self.B = tf.Variable(tf.zeros(shape=[size]), name="B", dtype=tf.float32, trainable=True)
- self.U = tf.Variable(initializer(shape=[size]), name="U", dtype=tf.float32, trainable=True)
-
- def call(self, encoded, decoded=None):
- with tf.name_scope("BahdanauAttention"):
- if decoded is None:
- score = tf.math.tanh(tf.tensordot(encoded, self.W, axes=[-1, 0]) + self.B)
- score = tf.reduce_sum(self.U * score, axis=-1)
- alphas = tf.nn.softmax(score, name="attention-weights")
- # encoded = encoded * tf.expand_dims(alphas, axis=-1)
- # return encoded, tf.reduce_sum(encoded, axis=1, name="context-vector")
- return tf.einsum('ilk,il->ik', encoded, alphas)
-
- score = tf.einsum("ijm,mn,ikn->ijk", encoded, self.W, decoded)
- alphas = tf.reduce_mean(score, axis=1)
- alphas = tf.nn.softmax(alphas)
- # decoded = decoded * tf.expand_dims(alphas, axis=-1)
- # return decoded, tf.reduce_sum(decoded, axis=1)
- return tf.einsum('ilk,il->ik', decoded, alphas)
-
-
-class SingleHeadAttention(tf.keras.layers.Layer):
+ self.hidden = layers.Dense(units=size, activation="tanh")
+ self.U = tf.Variable(initializers.GlorotUniform()(shape=[size]), name="U", dtype=tf.float32, trainable=True)
+ self.dropout = layers.Dropout(drop_rate)
+
+ # pylint: disable=missing-function-docstring
+ def call(self, encoded: tf.Tensor, decoded: Optional[tf.Tensor] = None, training: bool = False) -> tf.Tensor:
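+ # self-attention when no decoder states are passed; otherwise encoder-decoder cross-attention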
+ if decoded is None:
+ score = tf.math.reduce_sum(self.U * self.hidden(encoded), axis=-1)
+ alphas = tf.nn.softmax(score)
+ alphas = self.dropout(alphas, training=training)
+ x = tf.expand_dims(alphas, axis=-1) * encoded
+ return x, tf.math.reduce_sum(x, axis=1)
+
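+ # cross-attention: score decoder states against encoder states through the shared dense kernel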
+ score = tf.einsum("ijm,mn,ikn->ijk", encoded, self.hidden.kernel, decoded)
+ alphas = tf.nn.softmax(score, axis=1)
+ alphas = self.dropout(alphas, training=training)
+ alphas = tf.math.reduce_sum(tf.matmul(alphas, encoded, transpose_a=True), axis=-1)
+ x = tf.expand_dims(alphas, axis=-1) * decoded
+ return x, tf.math.reduce_sum(x, axis=1)
+
+
+class SingleHeadAttention(layers.Layer):
"""Layer which computes the single-head-attention mechanism as described in
https://arxiv.org/abs/1706.03762.
@@ -126,10 +125,10 @@ class SingleHeadAttention(tf.keras.layers.Layer):
----------
emb_dims : int
The word-embedding dimensionality. This value determines the dimensionalities of the hidden weights.
- layers : int, optional
+ num_layers : int, optional
The number of parallel single-head-attention mechanisms, by default 8.
- keep_prob : float, optional
- Value between 0 and 1.0 which determines `1 - dropout_rate`, by default 1.0.
+ drop_rate : float, optional
+ Value between 0 and 1.0, by default 0.
Examples
--------
@@ -149,35 +148,35 @@ class SingleHeadAttention(tf.keras.layers.Layer):
```
"""
- def __init__(self, emb_dims, layers=8, keep_prob=1.0):
+ def __init__(self, emb_dims, num_layers: int = 8, drop_rate: float = 0.):
super().__init__(name="SingleHeadAttention")
- assert isinstance(layers, int) and layers > 0
+ assert isinstance(num_layers, int) and num_layers > 0
dims = emb_dims
- key_dims = emb_dims // layers
+ key_dims = emb_dims // num_layers
initializer = tf.keras.initializers.GlorotUniform()
self.WQ = tf.Variable(initializer(shape=(dims, key_dims)), name="WQ", dtype=tf.float32, trainable=True)
self.WK = tf.Variable(initializer(shape=(dims, key_dims)), name="WK", dtype=tf.float32, trainable=True)
self.WV = tf.Variable(initializer(shape=(dims, key_dims)), name="WV", dtype=tf.float32, trainable=True)
- self.dropout = tf.keras.layers.Dropout(1 - keep_prob)
+ self.dropout = layers.Dropout(drop_rate)
self.dot_attention = ScaledDotAttention()
- def call(self, inputs, mask_future=False, training=False):
- with tf.name_scope("SingleHeadAttention"):
- queries, keys, values = inputs
+ # pylint: disable=missing-function-docstring
+ def call(self, inputs, mask_future: bool = False, training: bool = False):
+ queries, keys, values = inputs
- queries = self.dropout(queries, training=training)
- keys = self.dropout(keys, training=training)
- values = self.dropout(values, training=training)
+ queries = self.dropout(queries, training=training)
+ keys = self.dropout(keys, training=training)
+ values = self.dropout(values, training=training)
- head_queries = tf.tensordot(queries, self.WQ, axes=[-1, 0])
- head_keys = tf.tensordot(keys, self.WK, axes=[-1, 0])
- head_values = tf.tensordot(values, self.WV, axes=[-1, 0])
- return self.dot_attention(query=head_queries, key=head_keys, value=head_values, mask_future=mask_future)
+ head_queries = tf.tensordot(queries, self.WQ, axes=[-1, 0])
+ head_keys = tf.tensordot(keys, self.WK, axes=[-1, 0])
+ head_values = tf.tensordot(values, self.WV, axes=[-1, 0])
+ return self.dot_attention(query=head_queries, key=head_keys, value=head_values, mask_future=mask_future)
-class MultiHeadAttention(tf.keras.layers.Layer):
+class MultiHeadAttention(layers.Layer):
"""Layer which computes the multi-head-attention mechanism as described in
https://arxiv.org/abs/1706.03762.
@@ -190,10 +189,10 @@ class MultiHeadAttention(tf.keras.layers.Layer):
----------
emb_dims : int
The word-embedding dimensionality. This value determines the dimensionalities of the hidden weights.
- layers : int, optional
+ num_layers : int, optional
The number of parallel single-head-attention mechanisms, by default 8.
- keep_prob : float, optional
- Value between 0 and 1.0 which determines `1 - dropout_rate`, by default 1.0.
+ drop_rate : float, optional
+ Value between 0 and 1.0, by default 0.
Examples
--------
@@ -213,17 +212,16 @@ class MultiHeadAttention(tf.keras.layers.Layer):
```
"""
- def __init__(self, emb_dims, layers=8, keep_prob=1.0):
+ def __init__(self, emb_dims: int, num_layers: int = 8, drop_rate: float = 0.):
super().__init__(name="MultiHeadAttention")
- self.layer_heads = []
- for i in range(layers):
- with tf.name_scope(f"head-{i}"):
- self.layer_heads.append(SingleHeadAttention(emb_dims=emb_dims, layers=layers, keep_prob=keep_prob))
-
- self.dense = tf.keras.layers.Dense(units=emb_dims, use_bias=False)
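+ # one single-head attention block per head; each projects Q/K/V down to emb_dims // num_layers dimensions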
+ self.layer_heads = [
+ SingleHeadAttention(emb_dims=emb_dims, num_layers=num_layers, drop_rate=drop_rate)
+ for _ in range(num_layers)
+ ]
+ self.dense = layers.Dense(units=emb_dims, use_bias=False)
+ # pylint: disable=missing-function-docstring
def call(self, inputs, mask_future=False, training=False):
- with tf.name_scope("MultiHeadAttention"):
- heads = [layer(inputs, mask_future=mask_future, training=training) for layer in self.layer_heads]
- total_head = tf.concat(heads, -1)
- return self.dense(total_head)
+ heads = [layer(inputs, mask_future=mask_future, training=training) for layer in self.layer_heads]
+ total_head = tf.concat(heads, -1)
+ return self.dense(total_head)
diff --git a/text2vec/models/components/feed_forward.py b/text2vec/models/components/feed_forward.py
index 012c829..a050d98 100644
--- a/text2vec/models/components/feed_forward.py
+++ b/text2vec/models/components/feed_forward.py
@@ -1,7 +1,8 @@
import tensorflow as tf
+from tensorflow.keras import layers
-class PositionWiseFFN(tf.keras.layers.Layer):
+class PositionWiseFFN(layers.Layer):
"""Position-wise feed-forward network implemented as conv -> relu -> conv.
1D convolutions of the input tensor are computed to an intermediate hidden dimension, then a final 1D
convolution is computed of the ReLu output from the intermediate layer to return to the original input shape.
@@ -25,18 +26,17 @@ class PositionWiseFFN(tf.keras.layers.Layer):
```
"""
- def __init__(self, emb_dims):
+ def __init__(self, emb_dims: int):
super().__init__()
- self.conv_inner = tf.keras.layers.Conv1D(
+ self.conv_inner = layers.Conv1D(
filters=4 * emb_dims,
kernel_size=1,
padding='same',
use_bias=False,
activation='relu'
)
- self.conv_outer = tf.keras.layers.Conv1D(filters=emb_dims, kernel_size=1, padding='same', use_bias=False)
+ self.conv_outer = layers.Conv1D(filters=emb_dims, kernel_size=1, padding='same', use_bias=False)
def call(self, x):
- with tf.name_scope("PositionWiseFFN"):
- return self.conv_outer(self.conv_inner(x))
+ return self.conv_outer(self.conv_inner(x))
diff --git a/text2vec/models/components/feeder.py b/text2vec/models/components/feeder.py
deleted file mode 100644
index b617ff6..0000000
--- a/text2vec/models/components/feeder.py
+++ /dev/null
@@ -1,115 +0,0 @@
-import tensorflow as tf
-
-
-class Tokenizer(tf.keras.layers.Layer):
- """String-splitting layer.
-
- Parameters
- ----------
- sep : str, optional
- The token to split the incoming strings by, by default ' '.
-
- Examples
- --------
- ```python
- import tensorflow as tf
- from text2vec.models import Tokenizer
-
- text = tf.constant([
- "Sample string.",
- "This is a second example."
- ])
- tokenizer = Tokenizer()
- tokenizer(text)
- ```
- """
-
- def __init__(self, sep=' '):
- super().__init__(name="Tokenizer")
- self.sep = sep
-
- def call(self, corpus):
- return tf.strings.split(corpus, self.sep)
-
-
-class TextInput(tf.keras.layers.Layer):
- """This layer handles the primary text feature transformations and word-embeddings to be passed off
- to the sequence-aware parts of the encoder/decoder pipeline.
-
- Texts come in already tokenized. The tokens are transformed to integer index values from a
- `tf.lookup.StaticHashTable` lookup table. The tokens are used to lookup word-embeddings and a sequence
- mask is computed.
-
- The inputs are `tf.RaggedTensor` types, and after word-embeddings the tensor is made dense by padding to the
- longest sequence length in the batch.
-
- In certain cases, only the word-embedding output is necessary, in which case `output_embeddings` can be set `True`
- in the `__call__` method. This by-passes the padding and sequence masking steps.
-
- Parameters
- ----------
- token_hash : dict
- Token -> integer vocabulary lookup.
- embedding_size : int
- Dimensionality of the word-embeddings.
- max_sequence_len : int
- Longest sequence seen at training time. This layer ensures that no input sequences exceed this length.
-
- Examples
- --------
- ```python
- import tensorflow as tf
- from text2vec.models import TextInput
-
- lookup = {'string': 0, 'is': 1, 'example': 2}
- inputer = TextInput(token_hash=lookup, embedding_size=16, max_sequence_len=10)
-
- text = tf.ragged.constant([
- ["Sample", "string", "."],
- ["This", "is", "a", "second", "example", "."]
- ])
- sequences, seq_mask, time_steps = inputer(text)
-
- # get word-embeddings only
- word_embeddings = inputer(text, output_embeddings=True)
- ```
- """
-
- def __init__(self, token_hash, embedding_size, max_sequence_len):
- super().__init__()
- assert isinstance(token_hash, dict)
-
- self.num_labels = tf.constant(len(token_hash) + 1)
- self.table = tf.lookup.StaticHashTable(
- tf.lookup.KeyValueTensorInitializer(
- keys=list(token_hash.keys()),
- values=list(token_hash.values()),
- value_dtype=tf.int32
- ),
- default_value=max(token_hash.values()) + 1
- )
- self.embeddings = tf.Variable(
- tf.random.uniform([self.num_labels, embedding_size], -1.0, 1.0),
- name='embeddings',
- dtype=tf.float32,
- trainable=True
- )
- self.max_len = tf.constant(max_sequence_len)
- self.slicer = tf.keras.layers.Lambda(lambda x: x[:, :max_sequence_len], name="sequence-slice")
-
- def call(self, tokens, output_embeddings=False):
- with tf.name_scope("TextInput"):
- hashed = tf.ragged.map_flat_values(self.table.lookup, tokens)
- hashed = self.slicer(hashed)
-
- x = tf.ragged.map_flat_values(tf.nn.embedding_lookup, self.embeddings, hashed)
- if output_embeddings:
- return x
-
- x = x.to_tensor(0)
- x = x * tf.math.sqrt(tf.cast(tf.shape(self.embeddings)[-1], tf.float32)) # sqrt(embedding_size)
-
- seq_lengths = hashed.row_lengths()
- time_steps = tf.cast(tf.reduce_max(seq_lengths), tf.int32)
- mask = tf.sequence_mask(lengths=seq_lengths, maxlen=time_steps, dtype=tf.float32)
- return x, mask, time_steps
diff --git a/text2vec/models/components/recurrent.py b/text2vec/models/components/recurrent.py
index be7d272..b2c18ed 100644
--- a/text2vec/models/components/recurrent.py
+++ b/text2vec/models/components/recurrent.py
@@ -1,7 +1,8 @@
import tensorflow as tf
+from tensorflow.keras import layers
-class BidirectionalLSTM(tf.keras.layers.Layer):
+class BidirectionalLSTM(layers.Layer):
"""Bi-directional LSTM with the option to warm initialize with previous states.
Parameters
@@ -30,11 +31,11 @@ class BidirectionalLSTM(tf.keras.layers.Layer):
```
"""
- def __init__(self, num_layers=2, num_hidden=32, return_states=False):
+ def __init__(self, num_layers: int = 2, num_hidden: int = 32, return_states: bool = False):
super().__init__()
self.num_layers = num_layers
self.return_states = return_states
- lstm = tf.keras.layers.LSTM
+ lstm = layers.LSTM
params = dict(
units=num_hidden,
@@ -45,7 +46,7 @@ def __init__(self, num_layers=2, num_hidden=32, return_states=False):
self.FWD = [lstm(**params, name=f"forward-{i}") for i in range(num_layers)]
self.BWD = [lstm(**params, name=f"backward-{i}", go_backwards=True) for i in range(num_layers)]
- self.concat = tf.keras.layers.Concatenate()
+ self.concat = layers.Concatenate()
@staticmethod
def __make_inputs(inputs, initial_states=None, layer=0):
@@ -59,19 +60,18 @@ def __make_inputs(inputs, initial_states=None, layer=0):
return fwd_inputs, bwd_inputs
def call(self, inputs, initial_states=None, training=False):
- with tf.name_scope("BidirectionalLSTM"):
- layer = 0
- for forward, backward in zip(self.FWD, self.BWD):
- fwd_inputs, bwd_inputs = self.__make_inputs(inputs, initial_states=initial_states, layer=layer)
+ layer = 0
+ for forward, backward in zip(self.FWD, self.BWD):
+ fwd_inputs, bwd_inputs = self.__make_inputs(inputs, initial_states=initial_states, layer=layer)
- if self.return_states:
- decode_forward, *forward_state = forward(**fwd_inputs, training=training)
- decode_backward, *backward_state = backward(**bwd_inputs, training=training)
- else:
- decode_forward = forward(**fwd_inputs, training=training)
- decode_backward = backward(**bwd_inputs, training=training)
- inputs = self.concat([decode_forward, decode_backward])
- layer += 1
if self.return_states:
- return inputs, [forward_state, backward_state]
- return inputs
+ decode_forward, *forward_state = forward(**fwd_inputs, training=training)
+ decode_backward, *backward_state = backward(**bwd_inputs, training=training)
+ else:
+ decode_forward = forward(**fwd_inputs, training=training)
+ decode_backward = backward(**bwd_inputs, training=training)
+ inputs = self.concat([decode_forward, decode_backward])
+ layer += 1
+ if self.return_states:
+ return inputs, [forward_state, backward_state]
+ return inputs
diff --git a/text2vec/models/components/strings.py b/text2vec/models/components/strings.py
index 573628e..b8781f8 100644
--- a/text2vec/models/components/strings.py
+++ b/text2vec/models/components/strings.py
@@ -1,9 +1,10 @@
import tensorflow as tf
+from tensorflow.keras import layers
from text2vec.models import Tokenizer
-class SubStringFinderMask(tf.keras.layers.Layer):
+class SubStringFinderMask(layers.Layer):
"""Performs substring masking based on whether the substring is found in the input text
in its entirety. This returns a ragged boolean tensor with the same ragged shape as input substrings.
@@ -35,7 +36,7 @@ class SubStringFinderMask(tf.keras.layers.Layer):
def __init__(self, sep: str = ' '):
super().__init__()
self.tokenizer = Tokenizer(sep)
- self.match = tf.keras.layers.Lambda(lambda x: tf.strings.regex_full_match(input=x[0], pattern=x[1]))
+ self.match = layers.Lambda(lambda x: tf.strings.regex_full_match(input=x[0], pattern=x[1]))
# this is designed to approximate the functionality in re.escape
self.special_chars = r'[\(\)\[\]\{\}\?\*\+\-\|\^\$\\\\\.\&\~\#\\\t\\\n\\\r\\\v\\\f]'
diff --git a/text2vec/models/components/text_inputs.py b/text2vec/models/components/text_inputs.py
index 062aea9..26a1beb 100644
--- a/text2vec/models/components/text_inputs.py
+++ b/text2vec/models/components/text_inputs.py
@@ -1,7 +1,39 @@
import tensorflow as tf
+from tensorflow.keras import layers
-class Embed(tf.keras.layers.Layer):
+class Tokenizer(layers.Layer):
+ """String-splitting layer.
+
+ Parameters
+ ----------
+ sep : str, optional
+ The token to split the incoming strings by, by default ' '.
+
+ Examples
+ --------
+ ```python
+ import tensorflow as tf
+ from text2vec.models import Tokenizer
+
+ text = tf.constant([
+ "Sample string.",
+ "This is a second example."
+ ])
+ tokenizer = Tokenizer()
+ tokenizer(text)
+ ```
+ """
+
+ def __init__(self, sep: str = ' '):
+ super().__init__(name="Tokenizer")
+ self.sep = sep
+
+ def call(self, corpus):
+ return tf.strings.split(corpus, self.sep)
+
+
+class Embed(layers.Layer):
"""This layer handles the primary text feature transformations and word-embeddings to be passed off
to the sequence-aware parts of the encoder/decoder pipeline.
@@ -47,20 +79,22 @@ def __init__(self, vocab_size: int, embedding_size: int, max_sequence_len: int):
dtype=tf.float32,
trainable=True
)
+ self.sqrt_d = tf.math.sqrt(tf.cast(embedding_size, tf.float32))
self.max_len = tf.constant(max_sequence_len)
- self.slicer = tf.keras.layers.Lambda(lambda x: x[:, :max_sequence_len], name="sequence-slice")
+ self.slicer = layers.Lambda(lambda x: x[:, :max_sequence_len], name="sequence-slice")
def call(self, token_ids, **kwargs):
- with tf.name_scope("TokenIds"):
- token_ids = self.slicer(token_ids)
- x = tf.ragged.map_flat_values(tf.nn.embedding_lookup, self.embeddings, token_ids)
- x = x.to_tensor(0)
- x = x * tf.math.sqrt(tf.cast(tf.shape(self.embeddings)[-1], tf.float32)) # sqrt(embedding_size)
-
- seq_lengths = token_ids.row_lengths()
- time_steps = tf.cast(tf.reduce_max(seq_lengths), tf.int32)
- mask = tf.sequence_mask(lengths=seq_lengths, maxlen=time_steps, dtype=tf.float32)
- return x, mask, time_steps
+ token_ids = self.slicer(token_ids)
+ x = tf.ragged.map_flat_values(tf.nn.embedding_lookup, self.embeddings, token_ids)
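+ # scale the embeddings by sqrt(embedding_size), as in "Attention Is All You Need"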
+ x = x * self.sqrt_d
+ x = x.to_tensor(0)
+
+ mask = tf.sequence_mask(
+ lengths=token_ids.row_lengths(),
+ maxlen=token_ids.bounding_shape()[-1],
+ dtype=tf.float32
+ )
+ return x, mask, token_ids.row_lengths()
def get_embedding(self, token_ids: tf.RaggedTensor) -> tf.RaggedTensor:
"""Get the token embeddings for the input IDs.
@@ -76,8 +110,7 @@ def get_embedding(self, token_ids: tf.RaggedTensor) -> tf.RaggedTensor:
Sequences of token embeddings with the same number of time steps as `token_ids`
"""
- with tf.name_scope("TokenEmbeddings"):
- return tf.ragged.map_flat_values(tf.nn.embedding_lookup, self.embeddings, token_ids)
+ return tf.ragged.map_flat_values(tf.nn.embedding_lookup, self.embeddings, token_ids)
class TokenEmbed(tf.keras.layers.Layer):
diff --git a/text2vec/models/components/utils.py b/text2vec/models/components/utils.py
index dd79ee6..77e5165 100644
--- a/text2vec/models/components/utils.py
+++ b/text2vec/models/components/utils.py
@@ -1,8 +1,9 @@
import numpy as np
import tensorflow as tf
+from tensorflow.keras import layers, initializers
-class LayerNorm(tf.keras.layers.Layer):
+class LayerNorm(layers.Layer):
"""Layer normalization, independent of batch size.
Parameters
@@ -26,21 +27,20 @@ class LayerNorm(tf.keras.layers.Layer):
```
"""
- def __init__(self, epsilon=1e-8, scale=1.0, bias=0):
+ def __init__(self, epsilon: float = 1e-8, scale: float = 1.0, bias: float = 0):
super().__init__(name="LayerNorm")
self.epsilon = tf.constant(epsilon, dtype=tf.float32)
self.scale = tf.constant(scale, dtype=tf.float32)
self.bias = tf.constant(bias, dtype=tf.float32)
def call(self, x):
- with tf.name_scope("LayerNorm"):
- mean = tf.reduce_mean(x, axis=-1, keepdims=True)
- variance = tf.reduce_mean(tf.square(x - mean), axis=-1, keepdims=True)
- norm = (x - mean) * tf.math.rsqrt(variance + self.epsilon)
- return norm * self.scale + self.bias
+ mean = tf.reduce_mean(x, axis=-1, keepdims=True)
+ variance = tf.reduce_mean(tf.square(x - mean), axis=-1, keepdims=True)
+ norm = (x - mean) * tf.math.rsqrt(variance + self.epsilon)
+ return norm * self.scale + self.bias
-class TensorProjection(tf.keras.layers.Layer):
+class TensorProjection(layers.Layer):
"""Projects sequence vectors onto a fixed vector. This returns a new tensor with the same shape as the
input tensor, with all sequence vectors projected.
@@ -63,17 +63,12 @@ def __init__(self):
super().__init__(name="TensorProjection")
def call(self, x, projection_vector):
- with tf.name_scope("TensorProjection"):
- inner_product = tf.einsum("ijk,ik->ij", x, projection_vector)
- time_steps = tf.shape(x)[1]
- p_vector_norm_squared = tf.norm(projection_vector, axis=1) ** 2
- p_vector_norm_squared = tf.tile(tf.expand_dims(p_vector_norm_squared, -1), [1, time_steps])
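+ # project each time-step vector onto the unit-normalized projection vector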
+ projection_vector = tf.math.l2_normalize(projection_vector, axis=-1)
+ inner_product = tf.einsum("ijk,ik->ij", x, projection_vector)
+ return tf.einsum("ij,ik->ijk", inner_product, projection_vector)
- alpha = tf.divide(inner_product, p_vector_norm_squared)
- return tf.einsum("ij,ik->ijk", alpha, projection_vector)
-
-class PositionalEncoder(tf.keras.layers.Layer):
+class PositionalEncoder(layers.Layer):
"""Layer which initializes the positional encoding tensor, and defines the operation which adds the encoding
to an input tensor and then applies a sequence mask.
@@ -88,11 +83,11 @@ class PositionalEncoder(tf.keras.layers.Layer):
--------
```python
import tensorflow as tf
- from text2vec.models import TextInput
+ from text2vec.models import TokenEmbed
from text2vec.models import utils
lookup = {'string': 0, 'is': 1, 'example': 2}
- inputer = TextInput(token_hash=lookup, embedding_size=16, max_sequence_len=10)
+ inputer = TokenEmbed(token_hash=lookup, embedding_size=16, max_sequence_len=10)
encoder = utils.PositionalEncoder(emb_dims=16, max_sequence_len=10)
text = tf.ragged.constant([
@@ -104,7 +99,7 @@ class PositionalEncoder(tf.keras.layers.Layer):
```
"""
- def __init__(self, emb_dims, max_sequence_len):
+ def __init__(self, emb_dims: int, max_sequence_len: int):
super().__init__()
positions = np.arange(max_sequence_len).astype(np.float32)
@@ -119,7 +114,52 @@ def __init__(self, emb_dims, max_sequence_len):
encoder[:, 1::2] = odd
self.encoder = tf.convert_to_tensor(encoder, dtype=tf.float32)
- def call(self, x, mask):
- with tf.name_scope('PositionalEncoder'):
- time_steps = tf.shape(x)[1]
- return tf.einsum('ijk,ij->ijk', x + self.encoder[:time_steps, :], mask)
+ def call(self, x: tf.Tensor, mask: tf.Tensor):
+ time_steps = tf.shape(x)[1]
+ return tf.expand_dims(mask, axis=-1) * (x + self.encoder[:time_steps, ...])
+
+
+class VariationPositionalEncoder(layers.Layer):
+ """Learns the relative phases between sequence steps in an attention-based transformer, where there is no
+ inherent sequential ordering.
+
+ Parameters
+ ----------
+ emb_dims : int
+ The word-embedding dimensionality. This value determines the dimensionalities of the hidden weights.
+ max_sequence_len : int
+ Longest sequence seen at training time.
+
+ Examples
+ --------
+ ```python
+ import tensorflow as tf
+ from text2vec.models import TokenEmbed
+ from text2vec.models import utils
+
+ lookup = {'string': 0, 'is': 1, 'example': 2}
+ inputer = TokenEmbed(token_hash=lookup, embedding_size=16, max_sequence_len=10)
+ encoder = utils.VariationPositionalEncoder(emb_dims=16, max_sequence_len=10)
+
+ text = tf.ragged.constant([
+ ["Sample", "string", "."],
+ ["This", "is", "a", "second", "example", "."]
+ ])
+ x, mask, _ = inputer(text)
+ encoder(x, mask)
+ ```
+ """
+
+ def __init__(self, emb_dims: int, max_sequence_len: int):
+ super().__init__()
+
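+ # learned positional-embedding matrix, one row per sequence position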
+ self.encoder = tf.Variable(
+ initializers.GlorotUniform()(shape=(max_sequence_len, emb_dims)),
+ dtype=tf.float32,
+ trainable=True,
+ name="positional-encoder"
+ )
+
+ def call(self, x: tf.Tensor, mask: tf.Tensor):
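+ # add the learned positional embeddings for the observed time steps, then zero out padded positions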
+ time_steps = tf.shape(x)[1]
+ return tf.expand_dims(mask, axis=-1) * (x + self.encoder[:time_steps, ...])
diff --git a/text2vec/models/sequential.py b/text2vec/models/sequential.py
index c740cf5..b20cfff 100644
--- a/text2vec/models/sequential.py
+++ b/text2vec/models/sequential.py
@@ -1,11 +1,12 @@
import tensorflow as tf
+from tensorflow.keras import layers
from .components.attention import BahdanauAttention
from .components.recurrent import BidirectionalLSTM
from .components.utils import TensorProjection
-class RecurrentEncoder(tf.keras.layers.Layer):
+class RecurrentEncoder(layers.Layer):
"""LSTM based encoding pipeline.
Parameters
@@ -16,18 +17,18 @@ class RecurrentEncoder(tf.keras.layers.Layer):
Dimensionality of hidden LSTM layer weights.
num_layers : int, optional
Number of hidden LSTM layers, by default 2
- input_keep_prob : float, optional
- Value between 0 and 1.0 which determines `1 - dropout_rate`, by default 1.0.
+ input_drop_rate : float, optional
+ Value between 0 and 1.0, by default 0.
Examples
--------
```python
import tensorflow as tf
- from text2vec.models import TextInputs
+ from text2vec.models import TokenEmbed
from text2vec.models import RecurrentEncoder
lookup = {'string': 0, 'is': 1, 'example': 2}
- inputer = TextInput(token_hash=lookup, embedding_size=16, max_sequence_len=10)
+ inputer = TokenEmbed(token_hash=lookup, embedding_size=16, max_sequence_len=10)
- encoder = RecurrentEncoder(max_sequence_len=10, num_hidden=8, input_keep_prob=0.75)
+ encoder = RecurrentEncoder(max_sequence_len=10, num_hidden=8, input_drop_rate=0.25)
text = tf.ragged.constant([
@@ -39,27 +40,29 @@ class RecurrentEncoder(tf.keras.layers.Layer):
```
"""
- def __init__(self, max_sequence_len, num_hidden, num_layers=2, input_keep_prob=1.0, **kwargs):
+ def __init__(self, max_sequence_len, num_hidden, num_layers=2,
+ input_drop_rate: float = 0., hidden_drop_rate: float = 0., **kwargs):
super().__init__()
self.max_sequence_length = max_sequence_len
- self.drop = tf.keras.layers.Dropout(1 - input_keep_prob, name="InputDropout")
+ self.drop = layers.Dropout(input_drop_rate)
self.bi_lstm = BidirectionalLSTM(num_layers=num_layers, num_hidden=num_hidden, return_states=True)
- self.attention = BahdanauAttention(size=2 * num_hidden)
+ self.attention = BahdanauAttention(size=2 * num_hidden, drop_rate=hidden_drop_rate)
- def call(self, x, mask, training=False, **kwargs):
+ # pylint: disable=missing-function-docstring
+ def call(self, x, mask, training: bool = False):
with tf.name_scope("RecurrentEncoder"):
mask = tf.expand_dims(mask, axis=-1)
x = self.drop(x, training=training)
x, states = self.bi_lstm(x * mask, training=training)
- context = self.attention(x * mask)
+ x, context = self.attention(x * mask)
if training:
return x, context, states
return x, context
-class RecurrentDecoder(tf.keras.layers.Layer):
+class RecurrentDecoder(layers.Layer):
"""LSTM based decoding pipeline.
Parameters
@@ -72,36 +75,34 @@ class RecurrentDecoder(tf.keras.layers.Layer):
Dimensionality of the word-embeddings, by default 50.
num_layers : int, optional
Number of hidden LSTM layers, by default 2
- input_keep_prob : float, optional
- Value between 0 and 1.0 which determines `1 - dropout_rate`, by default 1.0.
- hidden_keep_prob : float, optional
- Hidden states dropout. Value between 0 and 1.0 which determines `1 - dropout_rate`, by default 1.0.
+ input_drop_rate : float, optional
+ Value between 0 and 1.0, by default 0.
+ hidden_drop_rate : float, optional
+ Value between 0 and 1.0, by default 0.
"""
def __init__(self, max_sequence_len, num_hidden, embedding_size=50, num_layers=2,
- input_keep_prob=1.0, hidden_keep_prob=1.0):
+ input_drop_rate: float = 0., hidden_drop_rate: float = 0.):
super().__init__()
self.max_sequence_length = max_sequence_len
dims = embedding_size
- self.drop = tf.keras.layers.Dropout(1 - input_keep_prob, name="InputDropout")
- self.h_drop = tf.keras.layers.Dropout(1 - hidden_keep_prob, name="HiddenStateDropout")
+ self.drop = layers.Dropout(input_drop_rate)
+ self.h_drop = layers.Dropout(hidden_drop_rate)
self.projection = TensorProjection()
self.bi_lstm = BidirectionalLSTM(num_layers=num_layers, num_hidden=num_hidden, return_states=False)
- self.dense = tf.keras.layers.Dense(units=dims, activation=tf.nn.relu)
-
- def call(self, x_enc, enc_mask, x_dec, dec_mask, context, training=False, **kwargs):
- with tf.name_scope("RecurrentDecoder"):
- enc_mask = tf.expand_dims(enc_mask, axis=-1)
- dec_mask = tf.expand_dims(dec_mask, axis=-1)
-
- initial_state = kwargs.get("initial_state")
- x = self.drop(x_dec * dec_mask, training=training)
- if initial_state is not None:
- x = self.bi_lstm(x * dec_mask, initial_states=initial_state[0], training=training)
- else:
- x = self.bi_lstm(x * dec_mask, training=training)
- x = self.h_drop(self.projection(x, projection_vector=context), training=training)
- x = self.dense(x * dec_mask)
- return x
+ self.dense = layers.Dense(units=dims, activation=tf.nn.relu)
+
+ # pylint: disable=missing-function-docstring
+ def call(self, x_enc, x_dec, dec_mask, context, initial_state=None, training: bool = False):
+ dec_mask = tf.expand_dims(dec_mask, axis=-1)
+
+ x = self.drop(x_dec * dec_mask, training=training)
+ if initial_state is not None:
+ x = self.bi_lstm(x * dec_mask, initial_states=initial_state[0], training=training)
+ else:
+ x = self.bi_lstm(x * dec_mask, training=training)
+ x = self.h_drop(self.projection(x, projection_vector=context), training=training)
+ x = self.dense(x * dec_mask)
+ return x
diff --git a/text2vec/models/transformer.py b/text2vec/models/transformer.py
index 1643240..8743eab 100644
--- a/text2vec/models/transformer.py
+++ b/text2vec/models/transformer.py
@@ -1,40 +1,37 @@
-import tensorflow as tf
+from tensorflow.keras import layers
-from .components.attention import BahdanauAttention
-from .components.attention import MultiHeadAttention
+from .components.attention import BahdanauAttention, MultiHeadAttention
from .components.feed_forward import PositionWiseFFN
-from .components.utils import PositionalEncoder
-from .components.utils import LayerNorm
-from .components.utils import TensorProjection
+from .components.utils import VariationPositionalEncoder, LayerNorm, TensorProjection
-class TransformerEncoder(tf.keras.layers.Layer):
+class TransformerEncoder(layers.Layer):
"""Attention based encoding pipeline.
Parameters
----------
max_sequence_len : int
Longest sequence seen at training time.
- layers : int, optional
+ num_layers : int, optional
Number of layers in the multi-head-attention layer, by default 8
n_stacks : int, optional
Number of encoding blocks to chain, by default 1
embedding_size : int, optional
Dimensionality of the word-embeddings, by default 50.
- input_keep_prob : float, optional
- Value between 0 and 1.0 which determines `1 - dropout_rate`, by default 1.0.
- hidden_keep_prob : float, optional
- Value between 0 and 1.0 which determines `1 - dropout_rate`, by default 1.0.
+ input_drop_rate : float, optional
+ Value between 0 and 1.0, by default 0.
+ hidden_drop_rate : float, optional
+ Value between 0 and 1.0, by default 0.
Examples
--------
```python
import tensorflow as tf
- from text2vec.models import TextInputs
+ from text2vec.models import TokenEmbed
from text2vec.models import TransformerEncoder
lookup = {'string': 0, 'is': 1, 'example': 2}
- inputer = TextInput(token_hash=lookup, embedding_size=16, max_sequence_len=10)
+ inputer = TokenEmbed(token_hash=lookup, embedding_size=16, max_sequence_len=10)
- encoder = TransformerEncoder(max_sequence_len=10, embedding_size=16, input_keep_prob=0.75)
+ encoder = TransformerEncoder(max_sequence_len=10, embedding_size=16, input_drop_rate=0.25)
text = tf.ragged.constant([
@@ -46,91 +43,92 @@ class TransformerEncoder(tf.keras.layers.Layer):
```
"""
- def __init__(self, max_sequence_len, layers=8, n_stacks=1, embedding_size=50,
- input_keep_prob=1.0, hidden_keep_prob=1.0):
+ def __init__(self, max_sequence_len, num_layers=8, n_stacks=1, embedding_size=50,
+ input_drop_rate: float = 0., hidden_drop_rate: float = 0.):
super().__init__()
dims = embedding_size
- keep_prob = hidden_keep_prob
- self.drop = tf.keras.layers.Dropout(1 - input_keep_prob, name="InputDropout")
- self.h_drop = tf.keras.layers.Dropout(1 - hidden_keep_prob, name="HiddenStateDropout")
+ self.positional_encode = VariationPositionalEncoder(emb_dims=dims, max_sequence_len=max_sequence_len)
self.layer_norm = LayerNorm()
-
- self.positional_encode = PositionalEncoder(emb_dims=dims, max_sequence_len=max_sequence_len)
- self.MHA = [MultiHeadAttention(emb_dims=dims, layers=layers, keep_prob=keep_prob) for _ in range(n_stacks)]
+ self.MHA = [
+ MultiHeadAttention(emb_dims=dims, num_layers=num_layers, drop_rate=input_drop_rate)
+ for _ in range(n_stacks)
+ ]
self.FFN = [PositionWiseFFN(emb_dims=dims) for _ in range(n_stacks)]
- self.attention = BahdanauAttention(size=dims)
+ self.attention = BahdanauAttention(size=dims, drop_rate=hidden_drop_rate)
+
+ self.drop = layers.Dropout(input_drop_rate)
+ self.h_drop = layers.Dropout(hidden_drop_rate)
- def call(self, x, mask, training=False):
- with tf.name_scope("TransformerEncoder"):
- x = self.positional_encode(x, mask)
- x = self.drop(x, training=training)
- # mask = tf.expand_dims(mask, axis=-1)
+ # pylint: disable=missing-function-docstring
+ def call(self, x, mask, training: bool = False):
+ x = self.positional_encode(x, mask)
+ x = self.drop(x, training=training)
- for mha, ffn in zip(self.MHA, self.FFN):
- x = self.h_drop(mha([x] * 3, training=training), training=training) + x
- x = self.layer_norm(x)
- x = self.h_drop(ffn(x), training=training) + x
- x = self.layer_norm(x)
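+ # each stack applies multi-head self-attention then a position-wise FFN, each followed by a residual connection and layer norm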
+ for mha, ffn in zip(self.MHA, self.FFN):
+ x = self.h_drop(mha([x] * 3, training=training), training=training) + x
+ x = self.layer_norm(x)
+ x = self.h_drop(ffn(x), training=training) + x
+ x = self.layer_norm(x)
- context = self.attention(x)
- return x, context
+ x, context = self.attention(x)
+ return x, context
-class TransformerDecoder(tf.keras.layers.Layer):
+class TransformerDecoder(layers.Layer):
"""Attention based decoding pipeline.
Parameters
----------
max_sequence_len : int
Longest sequence seen at training time.
- layers : int, optional
+ num_layers : int, optional
Number of layers in the multi-head-attention layer, by default 8
n_stacks : int, optional
Number of encoding blocks to chain, by default 1
embedding_size : int, optional
Dimensionality of the word-embeddings, by default 50.
- input_keep_prob : float, optional
- Value between 0 and 1.0 which determines `1 - dropout_rate`, by default 1.0.
- hidden_keep_prob : float, optional
- Value between 0 and 1.0 which determines `1 - dropout_rate`, by default 1.0.
+ input_drop_rate : float, optional
+ Value between 0 and 1.0, by default 0.
+ hidden_drop_rate : float, optional
+ Value between 0 and 1.0, by default 0.
"""
- def __init__(self, max_sequence_len, layers=8, n_stacks=1, embedding_size=50,
- input_keep_prob=1.0, hidden_keep_prob=1.0):
+ def __init__(self, max_sequence_len, num_layers=8, n_stacks=1, embedding_size=50,
+ input_drop_rate: float = 0., hidden_drop_rate: float = 0.):
super().__init__()
dims = embedding_size
- keep_prob = hidden_keep_prob
- self.drop = tf.keras.layers.Dropout(1 - input_keep_prob, name="InputDropout")
- self.h_drop = tf.keras.layers.Dropout(1 - hidden_keep_prob, name="HiddenStateDropout")
self.layer_norm = LayerNorm()
self.projection = TensorProjection()
-
- self.positional_encode = PositionalEncoder(emb_dims=dims, max_sequence_len=max_sequence_len)
- self.MHA = [MultiHeadAttention(emb_dims=dims, layers=layers, keep_prob=keep_prob) for _ in range(n_stacks)]
+ self.positional_encode = VariationPositionalEncoder(emb_dims=dims, max_sequence_len=max_sequence_len)
+ self.MHA = [
+ MultiHeadAttention(emb_dims=dims, num_layers=num_layers, drop_rate=input_drop_rate)
+ for _ in range(n_stacks)
+ ]
self.FFN = [PositionWiseFFN(emb_dims=dims) for _ in range(n_stacks)]
- def call(self, x_enc, enc_mask, x_dec, dec_mask, context, attention, training=False, **kwargs):
- with tf.name_scope("TransformerDecoder"):
- x_dec = self.positional_encode(x_dec, dec_mask)
- x_dec = self.drop(x_dec, training=training)
- # enc_mask = tf.expand_dims(enc_mask, axis=-1)
- # dec_mask = tf.expand_dims(dec_mask, axis=-1)
-
- for mha, ffn in zip(self.MHA, self.FFN):
- x_dec = self.h_drop(mha(
- [x_dec] * 3,
- mask_future=True,
- training=training
- ), training=training) + x_dec
- x_dec = self.layer_norm(x_dec)
-
- cross_context = attention(encoded=x_enc, decoded=x_dec)
- x_dec = self.h_drop(self.projection(x_dec, projection_vector=cross_context), training=training) + x_dec
-
- x_dec = self.layer_norm(x_dec)
- x_dec = self.h_drop(ffn(x_dec), training=training) + x_dec
- x_dec = self.layer_norm(x_dec)
- x_dec = self.h_drop(self.projection(x_dec, projection_vector=context), training=training) + x_dec
- return x_dec
+ self.drop = layers.Dropout(input_drop_rate)
+ self.h_drop = layers.Dropout(hidden_drop_rate)
+
+ # pylint: disable=missing-function-docstring
+ def call(self, x_enc, x_dec, dec_mask, context, attention: BahdanauAttention, training: bool = False):
+ x_dec = self.positional_encode(x_dec, dec_mask)
+ x_dec = self.drop(x_dec, training=training)
+
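+ # each stack: future-masked self-attention, cross-attention with the encoder output (via tensor projection), then a position-wise FFN, all with residual connections and layer norm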
+ for mha, ffn in zip(self.MHA, self.FFN):
+ x_dec = self.h_drop(mha(
+ [x_dec] * 3,
+ mask_future=True,
+ training=training
+ ), training=training) + x_dec
+ x_dec = self.layer_norm(x_dec)
+
+ x_dec, cross_context = attention(encoded=x_enc, decoded=x_dec)
+ x_dec = self.h_drop(self.projection(x_dec, projection_vector=cross_context), training=training) + x_dec
+
+ x_dec = self.layer_norm(x_dec)
+ x_dec = self.h_drop(ffn(x_dec), training=training) + x_dec
+ x_dec = self.layer_norm(x_dec)
+ x_dec = self.h_drop(self.projection(x_dec, projection_vector=context), training=training) + x_dec
+ return x_dec
diff --git a/text2vec/training_tools.py b/text2vec/training_tools.py
deleted file mode 100644
index 795abde..0000000
--- a/text2vec/training_tools.py
+++ /dev/null
@@ -1,291 +0,0 @@
-from typing import Dict, Union
-import tensorflow as tf
-
-from text2vec.models import TextInput
-from text2vec.models import Tokenizer
-from text2vec.models import TokenEmbed
-from text2vec.models import Embed
-from text2vec.models import TransformerEncoder
-from text2vec.models import TransformerDecoder
-from text2vec.models import RecurrentEncoder
-from text2vec.models import RecurrentDecoder
-
-
-class EncodingModel(tf.keras.Model):
- """Wrapper model class to combine the encoder-decoder training pipeline.
-
- Parameters
- ----------
- token_hash : dict
- Token -> integer vocabulary lookup.
- max_sequence_len : int
- Longest sequence seen at training time.
- n_stacks : int, optional
- Number of encoding blocks to chain, by default 1
- layers : int, optional
- Number of layers in the multi-head-attention layer, by default 8
- num_hidden : int, optional
- Dimensionality of hidden LSTM layer weights, by default 64
- input_keep_prob : float, optional
- Value between 0 and 1.0 which determines `1 - dropout_rate`, by default 1.0
- hidden_keep_prob : float, optional
- Hidden states dropout. Value between 0 and 1.0 which determines `1 - dropout_rate`, by default 1.0
- embedding_size : int, optional
- Dimensionality of the word-embeddings, by default 64
- recurrent : bool, optional
- Set to True to use the LSTM based model, otherwise defaults to attention based model, by default False
- sep : str, optional
- Token separator, by default ' '
-
- Examples
- --------
- ```python
- import tensorflow as tf
- from text2vec.training_tools import EncodingModel
-
- lookup = {'string': 0, 'is': 1, 'example': 2}
- params = dict(
- max_sequence_len=10,
- embedding_size=16,
- input_keep_prob=0.9,
- hidden_keep_prob=0.75
- )
- model = EncodingModel(token_hash=lookup, layers=8, **params)
-
- text = tf.constant([
- "sample string .",
- "this is a second example ."
- ])
- y_hat, time_steps, targets, context_vectors = model(text, training=True, return_vectors=True)
- ```
- """
-
- def __init__(self, token_hash, max_sequence_len, n_stacks=1, layers=8, num_hidden=64,
- input_keep_prob=1.0, hidden_keep_prob=1.0, embedding_size=64, recurrent=False, sep=' '):
- super().__init__()
-
- params = dict(
- max_sequence_len=max_sequence_len,
- embedding_size=embedding_size,
- input_keep_prob=input_keep_prob,
- hidden_keep_prob=hidden_keep_prob
- )
- self.embed_layer = TextInput(
- token_hash=token_hash,
- embedding_size=embedding_size,
- max_sequence_len=max_sequence_len
- )
- self.tokenizer = Tokenizer(sep)
-
- if recurrent:
- self.encode_layer = RecurrentEncoder(num_hidden=num_hidden, **params)
- self.decode_layer = RecurrentDecoder(num_hidden=num_hidden, **params)
- else:
- self.encode_layer = TransformerEncoder(n_stacks=n_stacks, layers=layers, **params)
- self.decode_layer = TransformerDecoder(n_stacks=n_stacks, layers=layers, **params)
-
- def call(self, sentences, training=False, return_vectors=False):
- tokens = self.tokenizer(sentences) # turn sentences into ragged tensors of tokens
-
- # turn incoming sentences into relevant tensor batches
- with tf.name_scope('Encoding'):
- x_enc, enc_mask, _ = self.embed_layer(tokens)
- if not training:
- return self.encode_layer(x_enc, mask=enc_mask, training=False)
- x_enc, context, *states = self.encode_layer(x_enc, mask=enc_mask, training=True)
-
- with tf.name_scope('Decoding'):
- batch_size = tokens.nrows()
-
- with tf.name_scope('targets'):
- eos = tf.fill([batch_size], value='', name='eos-tag')
- eos = tf.expand_dims(eos, axis=-1, name='eos-tag-expand')
-
- targets = tf.concat([tokens, eos], 1, name='eos-concat')
- targets = tf.ragged.map_flat_values(self.embed_layer.table.lookup, targets)
- targets = self.embed_layer.slicer(targets)
-
- with tf.name_scope('decode-tokens'):
- bos = tf.fill([batch_size], value='', name='bos-tag')
- bos = tf.expand_dims(bos, axis=-1, name='bos-tag-expand')
-
- dec_tokens = tf.concat([bos, tokens], -1, name='bos-concat')
- x_dec, dec_mask, dec_time_steps = self.embed_layer(dec_tokens)
- x_out = self.decode_layer(
- x_enc=x_enc,
- enc_mask=enc_mask,
- x_dec=x_dec,
- dec_mask=dec_mask,
- context=context,
- attention=self.encode_layer.attention,
- training=training,
- initial_state=states
- )
- x_out = tf.tensordot(x_out, self.embed_layer.embeddings, axes=[2, 1])
-
- if return_vectors:
- return x_out, dec_time_steps, targets.to_tensor(default_value=0), context
- return x_out, dec_time_steps, targets.to_tensor(default_value=0)
-
- @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
- def embed(self, sentences):
- """Takes batches of free text and returns context vectors for each example.
-
- Parameters
- ----------
- sentences : tf.Tensor
- Tensor of dtype tf.string.
-
- Returns
- -------
- tf.Tensor
- Context vectors of shape (batch_size, embedding_size)
- """
-
- tokens = self.tokenizer(sentences) # turn sentences into ragged tensors of tokens
- x_enc, enc_mask, _ = self.embed_layer(tokens)
- return self.encode_layer(x_enc, mask=enc_mask, training=False)
-
- @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
- def token_embed(self, sentences):
-        """Takes batches of free text and returns word embeddings along with the associated tokens.
-
- Parameters
- ----------
- sentences : tf.Tensor
- Tensor of dtype tf.string.
-
- Returns
- -------
- (tf.Tensor, tf.Tensor)
- Tuple of (tokens, word_embeddings) with shapes (batch_size, max_sequence_len)
- and (batch_size, max_sequence_len, embedding_size) respectively.
- """
-
- tokens = self.tokenizer(sentences) # turn sentences into ragged tensors of tokens
- return tokens.to_tensor(''), self.embed_layer(tokens, output_embeddings=True).to_tensor(0)
-
-
-class ServingModel(tf.keras.Model):
- """Wrapper class for packaging final layers prior to saving.
-
- Parameters
- ----------
- embed_layer : Union[TokenEmbed, Embed]
- text2vec `TokenEmbed` or `Embed` layer
- encode_layer : Union[TransformerEncoder, RecurrentEncoder]
- text2vec `TransformerEncoder` or `RecurrentEncoder` layer
- tokenizer : Tokenizer
- text2vec `Tokenizer` layer
- """
-
- def __init__(self, embed_layer: Union[TokenEmbed, Embed],
- encode_layer: Union[TransformerEncoder, RecurrentEncoder], tokenizer: Tokenizer):
- super().__init__()
- self.embed_layer = embed_layer
- self.tokenizer = tokenizer
- self.encode_layer = encode_layer
-
- @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
- def embed(self, sentences) -> Dict[str, tf.Tensor]:
- """Takes batches of free text and returns context vectors for each example.
-
- Parameters
- ----------
- sentences : tf.Tensor
- Tensor of dtype tf.string.
-
- Returns
- -------
- Dict[str, tf.Tensor]
- Attention vector and hidden state sequences with shapes (batch_size, embedding_size)
- and (batch_size, max_sequence_len, embedding_size) respectively.
- """
-
- tokens = self.tokenizer(sentences) # turn sentences into ragged tensors of tokens
- x_enc, enc_mask, _ = self.embed_layer(tokens)
- sequences, attention = self.encode_layer(x_enc, mask=enc_mask, training=False)
- return {"sequences": sequences, "attention": attention}
-
- @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
- def token_embed(self, sentences) -> Dict[str, tf.Tensor]:
-        """Takes batches of free text and returns word embeddings along with the associated tokens.
-
- Parameters
- ----------
- sentences : tf.Tensor
- Tensor of dtype tf.string.
-
- Returns
- -------
- Dict[str, tf.Tensor]
- Padded tokens and embedding vectors with shapes (batch_size, max_sequence_len)
- and (batch_size, max_sequence_len, embedding_size) respectively.
- """
-
- tokens = self.tokenizer(sentences) # turn sentences into ragged tensors of tokens
- return {
- "tokens": tokens.to_tensor('>'),
- "embeddings": self.embed_layer.get_embedding(tokens).to_tensor(0)
- }
-
-
-def sequence_cost(target_sequences, sequence_logits, num_labels, smoothing=False):
- """Sequence-to-sequence cost function with optional label smoothing.
-
- Parameters
- ----------
- target_sequences : tf.Tensor
- Expected token sequences as lookup IDs (batch_size, max_sequence_len)
-    sequence_logits : tf.Tensor
- Computed logits for predicted tokens (batch_size, max_sequence_len, embedding_size)
- num_labels : int
- Vocabulary look up size
- smoothing : bool, optional
-        Set to True to smooth labels; this increases regularization at the cost of longer training time, by default False
-
- Returns
- -------
- tf.float32
- Loss value averaged over examples.
- """
-
- with tf.name_scope('Cost'):
- if smoothing:
- smoothing = 0.1
- targets = tf.one_hot(target_sequences, depth=num_labels, on_value=1.0, off_value=0.0, axis=-1)
- loss = tf.keras.losses.binary_crossentropy(
- y_true=targets,
- y_pred=sequence_logits,
- from_logits=True,
- label_smoothing=smoothing
- )
- else:
- loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=sequence_logits, labels=target_sequences)
-
- loss = tf.reduce_mean(loss)
- return loss
-
-
-def vector_cost(context_vectors):
- """Cost constraint on the cosine similarity of context vectors. Diagonal elements (self-context)
- are coerced to be closer to 1 (self-consistency). Off-diagonal elements are pushed toward 0,
-    indicating that distinct examples are not contextually similar.
-
- Parameters
- ----------
- context_vectors : tf.Tensor
- (batch_size, embedding_size)
-
- Returns
- -------
- tf.float32
- cosine similarity constraint loss
- """
-
- with tf.name_scope('VectorCost'):
- rows = tf.shape(context_vectors)[0]
- context_vectors = tf.linalg.l2_normalize(context_vectors, axis=-1)
- cosine = tf.tensordot(context_vectors, tf.transpose(context_vectors), axes=[1, 0])
- identity = tf.eye(rows)
- return tf.reduce_mean((identity - cosine) ** 2)