- An automated pipeline for embedding column data from CSVs and indexing the embeddings to OpenSearch.
- A web app enabling users to search for the approximate nearest neighbors to a provided input.
Services used:
- AWS Step Functions
- AWS Glue
- Amazon SageMaker Processing
- AWS Lambda
- Amazon OpenSearch Service
- Amazon S3
- Amazon ECR
- AWS Fargate
- Application Load Balancer (ALB)
Embeddings are created using SentenceTransformers. By default the following models are used:
- all-MiniLM-L6-v2
- all-distilroberta-v1
- average_word_embeddings_glove.6B.300d
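For context, the sketch below shows roughly how column values can be embedded with SentenceTransformers outside the pipeline. It is illustrative only: the model name matches the defaults above, but the file and column names are placeholders, not values from this project.

```python
# Minimal sketch (not the pipeline's actual code): embed the values of one CSV
# column with one of the default SentenceTransformers models listed above.
import pandas as pd
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

df = pd.read_csv("example.csv")                      # placeholder CSV with column headings
values = df["example_column"].astype(str).tolist()   # placeholder column name

# encode() returns one fixed-length vector per input string
embeddings = model.encode(values)
print(embeddings.shape)  # (num_rows, 384) for all-MiniLM-L6-v2
```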
- Customize email, username, and any other desired configs in config.yaml.
- Deploy resources by following the steps below. Recommended: deploy the CDK app from a cloud-based instance such as EC2 or Cloud9.
- Once deployed, upload CSV files with column headings to the data/csv/input/file or data/csv/input/batch paths of the S3 bucket created during deployment. Files uploaded to data/csv/input/file will be individually processed automatically upon upload. Files uploaded to data/csv/input/batch will be processed in batch when the pipeline is manually triggered. During pipeline execution, input data will be automatically embedded and indexed to OpenSearch. After successful indexing, input data is moved to data/csv/processed/. You can track the pipeline status in the Step Functions console.
- To upload batch CSV files, run the script run_pipeline.py from the command line. The default options upload the sample batch datasets listed in sample-batch-datasets.json to the S3 bucket (<DESTINATION_BUCKET>/data/csv/input/batch) and invoke the Lambda function that starts the pipeline:
python tools/run_pipeline.py --destination_bucket <DESTINATION_BUCKET> --input_mode batch --batch_datasets_file sample-batch-datasets.json
- To upload a single CSV file, run the same script run_pipeline.py with the following options:
python tools/run_pipeline.py --destination_bucket <DESTINATION_BUCKET> --input_mode file --file_or_url <LOCAL_OR_REMOTE_CSV_PATH>
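If you prefer to skip run_pipeline.py, you can also copy a CSV straight into the watched prefix with boto3. The sketch below assumes the bucket and prefix conventions described above; the local file name is a placeholder.

```python
# Illustrative alternative to run_pipeline.py: upload a local CSV directly to
# the prefix that triggers per-file processing (data/csv/input/file).
import boto3

bucket = "<DESTINATION_BUCKET>"             # S3 bucket created during deployment
local_path = "my_dataset.csv"               # placeholder local CSV with column headings
key = "data/csv/input/file/my_dataset.csv"

s3 = boto3.client("s3")
s3.upload_file(local_path, bucket, key)
print(f"Uploaded s3://{bucket}/{key}")
```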
- After deployment, you will receive sign-in credentials and the web app URL at the email address you specified in config.yaml. Log in to the web app using these credentials. You will be prompted to reset your password at first login.
- Note that the demo creates and uses a self-signed certificate for the web app, which may not be trusted by your web browser by default. Self-signed certificates should not be used beyond testing. For best security, use a certificate signed by a trusted certificate authority (CA).
- Use the web app to query OpenSearch and explore results.
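Behind the web app, a nearest-neighbor lookup is an approximate k-NN query against the OpenSearch index. If you want to query the index directly, the sketch below uses the opensearch-py client; the endpoint, credentials, index name, and vector field name are placeholders rather than values taken from this project.

```python
# Illustrative approximate k-NN query with opensearch-py. Endpoint, credentials,
# index name, and vector field name are placeholders.
from opensearchpy import OpenSearch
from sentence_transformers import SentenceTransformer

client = OpenSearch(
    hosts=[{"host": "<OPENSEARCH_DOMAIN_ENDPOINT>", "port": 443}],
    http_auth=("<USERNAME>", "<PASSWORD>"),
    use_ssl=True,
    verify_certs=True,
)

# Embed the search input with the same model used for indexing
model = SentenceTransformer("all-MiniLM-L6-v2")
query_vector = model.encode("customer name").tolist()

response = client.search(
    index="<INDEX_NAME>",
    body={
        "size": 5,
        "query": {"knn": {"<VECTOR_FIELD>": {"vector": query_vector, "k": 5}}},
    },
)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"])
```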
Create a virtual environment:
$ python3 -m venv .venv
Activate your virtualenv:
$ source .venv/bin/activate
Install the required dependencies:
$ pip install --upgrade pip
$ pip install -r requirements.txt
At this point you can synthesize the CloudFormation template for this code:
$ cdk synth
Bootstrap your default AWS account/region. Note you may incur AWS charges for data stored in the bootstrapped resources.
$ cdk bootstrap
Deploy the pipeline to your default AWS account/region. Note Docker needs to be running in the background. During deployment, you will be prompted to confirm deployment of each stack. Resources will incur charges in your account while deployed.
$ cdk deploy --all
To tear down the pipeline, run the following aptly named command. You will be prompted to confirm deletion.
$ cdk destroy --all
- cdk ls: list all stacks in the app
- cdk synth: emits the synthesized CloudFormation template
- cdk deploy: deploy this stack to your default AWS account/region
- cdk diff: compare deployed stack with current state
- cdk docs: open CDK documentation
- cdk destroy: destroy existing stack