
Variational Autoencoder that leverages the Critic-Model (linked below).


lcicek/Critic-VAE


Critic-Variational Autoencoder



The critic-value is a probability that predicts whether a tree trunk is present in the image. This VAE model is trained on the MineRL images and their corresponding critic-predictions, and yields a decoder that can (to a certain degree) enhance or remove tree trunks in an image.
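To illustrate the idea of conditioning the decoder on a critic-value, here is a minimal PyTorch sketch. The latent size, image size, and layer widths are assumptions for illustration; the repo's actual architecture and parameters live in vae_parameters.py and vae.py.

```python
import torch
import torch.nn as nn

LATENT_DIM = 16  # hypothetical; the real value is defined in vae_parameters.py


class CriticConditionedDecoder(nn.Module):
    """Sketch of a decoder that receives the latent code plus a scalar critic-value."""

    def __init__(self, latent_dim=LATENT_DIM, out_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 1, 256),  # +1 for the appended critic-value
            nn.ReLU(),
            nn.Linear(256, out_pixels),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, z, critic_value):
        # critic_value: (batch, 1) probability that a tree trunk is present
        return self.net(torch.cat([z, critic_value], dim=1))


decoder = CriticConditionedDecoder()
z = torch.randn(4, LATENT_DIM)          # a batch of latent codes
c = torch.rand(4, 1)                    # critic-values in [0, 1]
out = decoder(z, c)                     # shape (4, 64 * 64 * 3)
```

Because the critic-value is part of the decoder's input, changing it at inference time (as the -inject mode below does) changes the reconstruction without retraining.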

Setup:

1. Clone this repository.
2. Create a conda environment with the necessary packages:
        conda create --name critvae --file requirements.txt -c conda-forge -c pytorch
3. Activate the conda environment:
        conda activate critvae
4. Separately install SimpleCRF:
        pip install SimpleCRF
5. Separately install minerl:
        pip install gym==0.19.0 minerl==0.3.6
6. Find MINERL_DATA_ROOT.
6.1. MINERL_DATA_ROOT should be located in the anaconda folder: anaconda3/envs/critvae/lib/python3.6/site-packages/minerl
6.2. Find your complete PATH to the MINERL_DATA_ROOT.
6.3. Set the MINERL_DATA_ROOT environment variable and download the MineRLTreechop-v0 environment:
        MINERL_DATA_ROOT=PATH (replace PATH with yours)
        export MINERL_DATA_ROOT
        python3 -m minerl.data.download "MineRLTreechop-v0" (command might differ, see: minerl-docs)
6.4. Set MINERL_DATA_ROOT_PATH in vae_parameters.py to your PATH.

How to run this model:

1. To train the model (results can be found in 'saved-networks/'), run:
        python vae.py -train
2. To evaluate the model on the source-images (results can be found in 'images/'), run:
        python vae.py
3. To create the mask-video shown above (results can be found in 'videos/'), run:
        python vae.py -video
The original MineRL episode is stored in minerl-episode/X.npy and the ground-truth masks in minerl-episode/Y.npy; both are NumPy arrays containing the 1200 episode images.
Apart from the video, additional information is saved to a bin_info.txt file that is created alongside it.
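The episode data layout described above can be sketched as follows. Note that the 64x64 RGB frame size is an assumption; the README only states that each file holds the 1200 episode images, and in the repo the arrays would be read with np.load.

```python
import numpy as np

# Stand-ins mirroring the described layout of minerl-episode/X.npy and Y.npy;
# the 64x64 RGB resolution is an assumption, not confirmed by the README.
frames = np.zeros((1200, 64, 64, 3), dtype=np.uint8)  # episode images (X.npy)
masks = np.zeros((1200, 64, 64), dtype=np.uint8)      # ground-truth masks (Y.npy)

# In the repo the real arrays would be loaded with:
#   frames = np.load("minerl-episode/X.npy")
#   masks = np.load("minerl-episode/Y.npy")
assert frames.shape[0] == masks.shape[0] == 1200
```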

Other functions of the model:

1. To create a dataset of the reconstructions of the Critic-VAE, run:
        python vae.py -dataset
The dataset is saved as recon-dataset.pickle and is quite large (~5GB).
So far, this reconstruction-dataset has not produced useful results. Improving it requires changes to the load_minerl_data() function in vae_utility.py.
2. To visualize the effect of different injected critic-values (results can be found in 'inject/'), run:
        python vae.py -inject
3. To test out different thresholds for the thresholded-masks in the video, run:
        python vae.py -video -thresh
The Intersection over Union (IoU) values of the masks will be printed in the terminal.
4. To train a second Critic-VAE on the reconstruction-dataset (recon-dataset.pickle), run:
        python vae.py -second
        To evaluate it (like above): python vae.py -evalsecond
The second Critic-VAE idea has also been unsuccessful so far. The code may be unstable, but it should be easy to fix with a few small changes.
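The IoU values printed by the -thresh run can be reproduced with a few lines of NumPy. This is a generic sketch of thresholding a soft mask and scoring it against a ground-truth mask, not the repo's own implementation:

```python
import numpy as np


def mask_iou(pred, truth):
    """Intersection over Union of two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union


# Threshold a soft mask (e.g. a decoder output) into a binary mask,
# then score it against the ground truth.
soft = np.array([[0.9, 0.6, 0.2],
                 [0.4, 0.1, 0.0]])
truth = np.array([[1, 0, 0],
                  [1, 0, 0]])
binary = soft >= 0.5                 # [[True, True, False], [False, False, False]]
score = mask_iou(binary, truth)      # intersection 1, union 3 -> 1/3
```

Sweeping the 0.5 threshold and comparing the resulting IoU scores is exactly what the -thresh mode automates.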
