diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..232e03e
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,19 @@
+.idea
+*__pycache__*
+*.obj
+.history*
+*.pyc
+cache/
+log/
+log2/
+*.npy
+logs/
+*.so
+*.sh
+3d-semantic-segmentation.wiki/
+
+!experiments/
+
+experiments/*
+!experiments/iccvw_paper_2017/
+*.out
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..b5f593a
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2018 Jonas Schult
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..8bc36a6
--- /dev/null
+++ b/README.md
@@ -0,0 +1,101 @@
+# Exploring Spatial Context for 3D Semantic Segmentation of Point Clouds
+Created by Francis Engelmann, Theodora Kontogianni, Alexander Hermans, Jonas Schult and Bastian Leibe
+from RWTH Aachen University.
+
+![prediction example](doc/exploring_header.png?raw=True "Prediction example")
+
+### Introduction
+This work is based on our paper
+[Exploring Spatial Context for 3D Semantic Segmentation of Point Clouds](https://www.vision.rwth-aachen.de/media/papers/PID4967025.pdf),
+which appeared at the IEEE International Conference on Computer Vision (ICCV) 2017, 3DRMS Workshop.
+
+You can also check our [project page](https://www.vision.rwth-aachen.de/page/3dsemseg) for further details.
+
+Deep learning approaches have made tremendous progress in the field of semantic segmentation over the past few years. However, most current approaches operate in the 2D image space. Direct semantic segmentation of unstructured 3D point clouds is still an open research problem. The recently proposed PointNet architecture presents an interesting step ahead in that it can operate on unstructured point clouds, achieving decent segmentation results. However, it subdivides the input points into a grid of blocks and processes each such block individually. In this paper, we investigate the question of how such an architecture can be extended to incorporate larger-scale spatial context. We build upon PointNet and propose two extensions that enlarge the receptive field over the 3D scene. We evaluate the proposed strategies on challenging indoor and outdoor datasets and show improved results in both scenarios.
+
+In this repository, we release code for training and testing various point cloud semantic segmentation networks on
+arbitrary datasets.
+
+### Citation
+If you find our work useful in your research, please consider citing:
+
+ @inproceedings{3dsemseg_ICCVW17,
+ author = {Francis Engelmann and
+ Theodora Kontogianni and
+ Alexander Hermans and
+ Bastian Leibe},
+ title = {Exploring Spatial Context for 3D Semantic Segmentation of Point Clouds},
+ booktitle = {{IEEE} International Conference on Computer Vision, 3DRMS Workshop, {ICCV}},
+ year = {2017}
+ }
+
+
+### Installation
+
+Install TensorFlow.
+The code has been tested with Python 3.6 and TensorFlow 1.8.
+
+### Usage
+In order to get more representative blocks, we encourage uniformly downsampling the original point clouds.
+This is done via the following script:
+
+ python tools/downsample.py --data_dir path/to/dataset --cell_size 0.03
+
+This script produces point clouds in which each point is representative of its 3 cm x 3 cm x 3 cm neighborhood.
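+
+For intuition, here is a minimal sketch of the voxel-grid idea behind such a downsampling step
+(illustration only; the actual `tools/downsample.py` implementation may differ):
+
+```python
+import numpy as np
+
+def voxel_downsample(points, cell_size=0.03):
+    """Keep one representative point per cell_size^3 voxel (columns 0-2 are xyz)."""
+    voxel_ids = np.floor(points[:, :3] / cell_size).astype(np.int64)
+    # indices of the first point encountered in each occupied voxel
+    _, keep = np.unique(voxel_ids, axis=0, return_index=True)
+    return points[keep]
+```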
+
+To train/test a model for semantic segmentation on point clouds, you need to run:
+
+ python run.py --config path/to/config/file.yaml
+
+Detailed instructions on the structure of the yaml config file can be found in the wiki.
+Additionally, some example configuration files are given in the folder `experiments`.
+
+Note that the final evaluation is done on the full-sized point clouds using k-nn interpolation.
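+
+As a rough sketch, this k-nn interpolation can be thought of as a majority vote over the nearest
+downsampled points (illustration only, assuming integer labels and xyz arrays; the repository's
+evaluation code may differ):
+
+```python
+import numpy as np
+from sklearn.neighbors import BallTree
+
+def knn_interpolate_labels(sub_xyz, sub_pred, full_xyz, k=5):
+    """Propagate predicted labels from the downsampled cloud to the full cloud."""
+    tree = BallTree(sub_xyz)
+    _, idx = tree.query(full_xyz, k=k)   # (N_full, k) neighbor indices
+    votes = sub_pred[idx]                # neighbor labels per full-cloud point
+    return np.array([np.bincount(v).argmax() for v in votes])
+```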
+
+### Reproducing the scores of our paper for Stanford Indoor 3D
+
+#### Downloading the dataset
+First of all, the Stanford Large-Scale 3D Indoor Spaces Dataset (S3DIS) has to be downloaded.
+Follow the instructions [here](https://docs.google.com/forms/d/e/1FAIpQLScDimvNMCGhy_rmBA2gHfDu3naktRm6A8BPwAWWDv-Uhm6Shw/viewform?c=0&w=1).
+We use the aligned version 1.2 for our results.
+
+#### Producing numpy files from the original dataset
+Our pipeline cannot handle the original file format of S3DIS, so we first convert it to npy files.
+Note that `Area_5/hallway_6` has to be fixed manually due to format inconsistencies.
+
+ python tools/prepare_s3dis.py --input_dir path/to/dataset --output_dir path/to/output
+
+#### Downsampling for training
+Before training, we downsample the point clouds:
+
+ python tools/downsample.py --data_dir path/to/dataset --cell_size 0.03
+
+#### Training configuration scripts
+Configuration files for all experiments are located in `experiments/iccvw_paper_2017/*`. For example, they can be
+launched as follows:
+
+ python run.py --config experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_1.yaml
+
+The above command runs our multi-scale consolidation unit network on Stanford Indoor 3D with test area 1.
+
+#### Evaluating on full scale point clouds
+The reported scores on the dataset are based on the full-scale point clouds.
+To evaluate on them, we need to load the trained model and set the `TEST` flag.
+
+Replace `modus: TRAIN_VAL` with
+
+```yaml
+ modus: TEST
+ model_path: 'path/to/trained/model/model_ckpts'
+```
+The trained model checkpoint is located in the log directory specified for training.
+
+
+### VKitti instructions
+* Coming soon ...
+
+### Trained models for downloading
+* Coming soon ...
+
+### License
+Our code is released under the MIT License (see the LICENSE file for details).
\ No newline at end of file
diff --git a/batch_generators/ReadWriteLock.py b/batch_generators/ReadWriteLock.py
new file mode 100644
index 0000000..c390842
--- /dev/null
+++ b/batch_generators/ReadWriteLock.py
@@ -0,0 +1,45 @@
+"""
+copyright by:
+https://www.safaribooksonline.com/library/view/python-cookbook/0596001673/ch06s04.html
+"""
+
+import threading
+
+
+class ReadWriteLock:
+ """ A lock object that allows many simultaneous "read locks", but
+ only one "write lock." """
+
+ def __init__(self):
+ self._read_ready = threading.Condition(threading.Lock())
+ self._readers = 0
+
+ def acquire_read(self):
+ """ Acquire a read lock. Blocks only if a thread has
+ acquired the write lock. """
+ self._read_ready.acquire()
+ try:
+ self._readers += 1
+ finally:
+ self._read_ready.release()
+
+ def release_read(self):
+ """ Release a read lock. """
+ self._read_ready.acquire()
+ try:
+ self._readers -= 1
+ if not self._readers:
+                self._read_ready.notify_all()
+ finally:
+ self._read_ready.release()
+
+ def acquire_write(self):
+ """ Acquire a write lock. Blocks until there are no
+ acquired read or write locks. """
+ self._read_ready.acquire()
+ while self._readers > 0:
+ self._read_ready.wait()
+
+ def release_write(self):
+ """ Release a write lock. """
+ self._read_ready.release()
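+
+
+if __name__ == '__main__':
+    # Minimal usage sketch (illustration only, not part of the training pipeline):
+    # many threads may hold the read lock at once, while the write lock is exclusive.
+    rw_lock = ReadWriteLock()
+
+    rw_lock.acquire_read()
+    try:
+        pass  # read shared state here
+    finally:
+        rw_lock.release_read()
+
+    rw_lock.acquire_write()
+    try:
+        pass  # mutate shared state here
+    finally:
+        rw_lock.release_write()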
diff --git a/batch_generators/__init__.py b/batch_generators/__init__.py
new file mode 100644
index 0000000..e7d9a79
--- /dev/null
+++ b/batch_generators/__init__.py
@@ -0,0 +1,4 @@
+from batch_generators.batch_generator import *
+from batch_generators.center_batch_generator import *
+from batch_generators.multi_scale_batch_generator import *
+from batch_generators.neighboring_grid_batch_generator import *
diff --git a/batch_generators/batch_generator.py b/batch_generators/batch_generator.py
new file mode 100644
index 0000000..92fcb9d
--- /dev/null
+++ b/batch_generators/batch_generator.py
@@ -0,0 +1,156 @@
+from abc import ABC, abstractmethod
+import numpy as np
+import tensorflow as tf
+import itertools
+from tools.lazy_decorator import *
+
+
+class BatchGenerator(ABC):
+ """
+ Abstract base class for batch generators providing the code for parallel creation of batches
+ """
+
+ def __init__(self, dataset, batch_size, num_points, augmentation):
+ """
+ :param dataset: dataset object
+ :type dataset: Dataset
+ :param num_points: number of points in a batch
+ :type num_points: int
+ """
+ self.dataset = dataset
+ self._num_points = num_points
+ self._batch_size = batch_size
+ self._augmentation = augmentation
+
+ @lazy_property
+ def handle_pl(self):
+ # Handle for datasets
+ return tf.placeholder(tf.string, shape=[], name='handle_training_test')
+
+ @lazy_property
+ def next_element(self):
+ iterator = tf.data.Iterator.from_string_handle(self.handle_pl, self.dataset_train.output_types)
+ return iterator.get_next()
+
+ @lazy_property
+ def dataset_train(self):
+ # Create dataset for training
+ dataset_train = tf.data.Dataset.from_generator(self._next_train_index, tf.int64, tf.TensorShape([]))
+ dataset_train = dataset_train.map(self._wrapped_generate_train_blob, num_parallel_calls=8)
+ dataset_train = dataset_train.batch(self._batch_size)
+ return dataset_train.prefetch(buffer_size=self._batch_size * 1)
+
+ @lazy_property
+ def iterator_train(self):
+ return self.dataset_train.make_one_shot_iterator()
+
+ @lazy_property
+ def iterator_test(self):
+ # Create dataset for testing
+ dataset_test = tf.data.Dataset.from_generator(self._next_test_index, tf.int64, tf.TensorShape([]))
+ dataset_test = dataset_test.map(self._wrapped_generate_test_blob, num_parallel_calls=8)
+ dataset_test = dataset_test.batch(self._batch_size)
+ dataset_test = dataset_test.prefetch(buffer_size=self._batch_size * 1)
+ return dataset_test.make_one_shot_iterator()
+
+ @property
+ def batch_size(self):
+ return self._batch_size
+
+ @property
+ def num_points(self):
+ return self._num_points
+
+ @property
+ def input_shape(self):
+ return self.pointclouds_pl.get_shape().as_list()
+
+ @lazy_property
+ @abstractmethod
+ def pointclouds_pl(self):
+ raise NotImplementedError('Should be defined in subclass')
+
+ @lazy_property
+ @abstractmethod
+ def labels_pl(self):
+ raise NotImplementedError('Should be defined in subclass')
+
+ @lazy_property
+ @abstractmethod
+ def mask_pl(self):
+ raise NotImplementedError('Should be defined in subclass')
+
+ @lazy_property
+ @abstractmethod
+ def cloud_ids_pl(self):
+ raise NotImplementedError('Should be defined in subclass')
+
+ @lazy_property
+ @abstractmethod
+ def point_ids_pl(self):
+ raise NotImplementedError('Should be defined in subclass')
+
+    def _next_train_index(self):
+        """
+        Get the next index into the training sample list (e.g. containing [pointcloud_id, center_x, center_y]).
+        Take care that the list is shuffled for each epoch!
+        :return: next index for training
+        """
+        for i in itertools.cycle(range(self.train_sample_idx.shape[0])):
+            yield i
+
+    def _next_test_index(self):
+        """
+        Get the next index into the test sample list (e.g. containing [pointcloud_id, center_x, center_y]).
+        Take care that the list is shuffled for each epoch!
+        :return: next index for test
+        """
+        for i in itertools.cycle(range(self.test_sample_idx.shape[0])):
+            yield i
+
+    def _wrapped_generate_train_blob(self, index):
+        return tf.py_func(func=self._generate_blob,
+                          # pos_id, train, aug_trans
+                          inp=[index, True, self._augmentation],
+                          # data     labels   mask    cloud_id  point_id
+                          Tout=(tf.float32, tf.int8, tf.int8, tf.int32, tf.int64),
+                          name='generate_train_blob')
+
+ def _wrapped_generate_test_blob(self, index):
+ return tf.py_func(func=self._generate_blob,
+ # pos_id, train, aug_trans
+ inp=(index, False, False),
+ # data labels mask cloud_id point_id
+ Tout=(tf.float32, tf.int8, tf.int8, tf.int32, tf.int64),
+ name='generate_test_blob')
+
+ @property
+ def num_train_batches(self):
+ return self.train_sample_idx.shape[0] // self._batch_size
+
+ @property
+ def num_test_batches(self):
+        return int(np.ceil(self.test_sample_idx.shape[0] / self._batch_size))
+
+ @property
+ @abstractmethod
+ def test_sample_idx(self):
+ """
+ :rtype: ndarray
+ :return:
+ """
+ raise NotImplementedError('Should be defined in subclass')
+
+ @property
+ @abstractmethod
+ def train_sample_idx(self):
+ """
+ :rtype: ndarray
+ :return:
+ """
+ raise NotImplementedError('Should be defined in subclass')
+
+ @abstractmethod
+ def _generate_blob(self, index, train=True, aug_trans=True):
+ raise NotImplementedError('Should be defined in subclass')
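+
+
+# Usage sketch (illustration only; assumes a concrete BatchGenerator subclass instance `gen`
+# and train/eval ops built on top of its placeholders):
+#
+#     with tf.Session() as sess:
+#         train_handle = sess.run(gen.iterator_train.string_handle())
+#         test_handle = sess.run(gen.iterator_test.string_handle())
+#         # the feedable string handle switches one graph between both input pipelines
+#         sess.run(train_op, feed_dict={gen.handle_pl: train_handle})
+#         sess.run(eval_op, feed_dict={gen.handle_pl: test_handle})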
diff --git a/batch_generators/center_batch_generator.py b/batch_generators/center_batch_generator.py
new file mode 100644
index 0000000..cfba681
--- /dev/null
+++ b/batch_generators/center_batch_generator.py
@@ -0,0 +1,135 @@
+from tqdm import tqdm
+from sklearn.neighbors import BallTree
+from .batch_generator import *
+from .ReadWriteLock import ReadWriteLock
+
+
+class CenterBatchGenerator(BatchGenerator):
+ """
+ creates batches where the blobs are centered around a specific point in the grid cell
+ """
+
+ def __init__(self, dataset, batch_size, num_points, grid_spacing,
+ augmentation=False, metric='euclidean'):
+ self._rw_lock = ReadWriteLock()
+
+ super().__init__(dataset, batch_size, num_points, augmentation)
+
+ self._grid_spacing = grid_spacing
+
+ self._shuffle_train = True
+ self._shuffle_test = True
+
+ # TODO make min_num_points_per_block variable
+ self.min_num_points_per_block = 5
+
+ self._ball_trees = self._calc_ball_trees(metric=metric)
+ self._pc_center_pos = self._generate_center_positions()
+
+ @property
+ def ball_trees(self):
+ return self._ball_trees
+
+ @property
+ def pc_center_pos(self):
+ return self._pc_center_pos
+
+ @property
+ def test_sample_idx(self):
+ # after shuffling we need to recalculate the indices
+ if self._shuffle_test:
+ self._test_sample_idx_cache = self.pc_center_pos[np.isin(self.pc_center_pos[:, 0], self.dataset.test_pc_idx)]
+ self._shuffle_test = False
+
+ return self._test_sample_idx_cache
+
+ @property
+ def train_sample_idx(self):
+ # after shuffling we need to recalculate the indices
+ if self._shuffle_train:
+ self._train_sample_idx_cache = self.pc_center_pos[np.isin(self.pc_center_pos[:, 0], self.dataset.train_pc_idx)]
+ self._shuffle_train = False
+
+ return self._train_sample_idx_cache
+
+ @staticmethod
+ def _sample_data(data, num_sample):
+ """ data is in N x ...
+ we want to keep num_samplexC of them.
+ if N > num_sample, we will randomly keep num_sample of them.
+ if N < num_sample, we will randomly duplicate samples.
+ """
+ N = data.shape[0]
+ if N == num_sample:
+ return data, range(N)
+ elif N > num_sample:
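+            # note: np.random.choice samples with replacement by default,
+            # so the kept subset may contain duplicate points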
+ sample = np.random.choice(N, num_sample)
+ return data[sample, ...], sample
+ else:
+ sample = np.random.choice(N, num_sample - N)
+ dup_data = data[sample, ...]
+            return np.concatenate([data, dup_data], 0), list(range(N)) + list(sample)
+
+ def _calc_ball_trees(self, metric='euclidean'):
+ ball_trees = []
+ for pointcloud_data in tqdm(self.dataset.data, desc='Ball trees have to be calculated from scratch'):
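+            # the tree is built on x-y only, so each blob is a vertical column over a 2D footprint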
+ ball_trees.append(BallTree(pointcloud_data[:, :2], metric=metric))
+ return ball_trees
+
+ @abstractmethod
+ def _generate_blob(self, index, train=True, aug_trans=True):
+ raise NotImplementedError('Should be defined in subclass')
+
+ def shuffle(self):
+ self._rw_lock.acquire_write()
+
+ try:
+ self._shuffle_train = True
+ self._shuffle_test = True
+
+ # Randomly shuffle training data from epoch
+ np.random.shuffle(self.pc_center_pos)
+ finally:
+ self._rw_lock.release_write()
+
+ def _generate_center_positions(self):
+ """
+ Generate blob positions while making sure the grid stays inside the pointcloud bounding box
+ :return: (point_cloud_id, pos_x, pos_y)
+ """
+ room_pos_list = []
+ for room_id, room_data in enumerate(tqdm(self.dataset.data, desc='Calculate center positions')):
+ room_maxs = np.amax(room_data[:, 0:2], axis=0)
+ room_mins = np.amin(room_data[:, 0:2], axis=0)
+ room_size = room_maxs - room_mins
+ num_blobs = np.ceil(room_size / self._grid_spacing)
+ num_blobs = num_blobs - np.array([1, 1]) + 1
+
+ if num_blobs[0] <= 0:
+ num_blobs[0] = 1
+ if num_blobs[1] <= 0:
+ num_blobs[1] = 1
+
+ ctrs = [[room_mins[0] + x * self._grid_spacing + self._grid_spacing / 2.0,
+ room_mins[1] + y * self._grid_spacing + self._grid_spacing / 2.0]
+ for x in range(int(num_blobs[0]))
+ for y in range(int(num_blobs[1]))]
+
+ blob_point_ids_all = self.ball_trees[room_id].query_radius(np.reshape(ctrs, [-1, 2]),
+ r=self._grid_spacing / 2.0)
+
+ blob_point_ids_all = np.reshape(blob_point_ids_all, [-1, 1])
+
+ ctrs = np.reshape(ctrs, [-1, 1, 2])
+ for i in range(np.shape(ctrs)[0]):
+ npoints = 0
+ for j in range(np.shape(ctrs)[1]):
+ npoints = npoints + np.shape(blob_point_ids_all[i, j])[0]
+ if npoints >= self.min_num_points_per_block: # TODO CHECK IF MINPOINTS 5 IS GOOD
+ room_pos_list.append(np.reshape(np.array([room_id, ctrs[i, 0, 0], ctrs[i, 0, 1]]), (1, 3)))
+
+ return np.concatenate(room_pos_list)
+
+
+if __name__ == '__main__':
+ pass
diff --git a/batch_generators/multi_scale_batch_generator.py b/batch_generators/multi_scale_batch_generator.py
new file mode 100644
index 0000000..72be087
--- /dev/null
+++ b/batch_generators/multi_scale_batch_generator.py
@@ -0,0 +1,120 @@
+from .center_batch_generator import *
+import numpy as np
+
+
+class MultiScaleBatchGenerator(CenterBatchGenerator):
+ """
+ Batch Generator for multi scale batches of different radii where the centers are equal
+ """
+
+ def __init__(self, dataset, params):
+ super().__init__(dataset=dataset,
+ num_points=params['num_points'],
+ batch_size=params['batch_size'],
+ grid_spacing=params['grid_spacing'],
+ augmentation=params['augmentation'],
+ metric=params['metric'])
+
+ self._radii = params['radii']
+
+ @lazy_property
+ def pointclouds_pl(self):
+ return tf.reshape(self.next_element[0], (self._batch_size, len(self._radii),
+ self._num_points, self.dataset.num_features + 3))
+
+ @lazy_property
+ def labels_pl(self):
+ return tf.reshape(self.next_element[1], (self._batch_size, 1, self._num_points))
+
+ @lazy_property
+ def mask_pl(self):
+ return tf.reshape(self.next_element[2], (self._batch_size, 1))
+
+ @lazy_property
+ def cloud_ids_pl(self):
+ return tf.reshape(self.next_element[3], (self._batch_size, 1))
+
+ @lazy_property
+ def point_ids_pl(self):
+ return tf.reshape(self.next_element[4], (self._batch_size, 1, self._num_points))
+
+ def _generate_blob(self, index, train=True, aug_trans=True):
+ b_data = np.zeros((len(self._radii), self._num_points, self.dataset.num_features + 3), dtype=np.float32)
+ b_label = np.zeros((len(self._radii), self._num_points), dtype=np.int8)
+ b_mask = np.ones(len(self._radii), dtype=np.int8)
+ b_cloud_ids = np.zeros(len(self._radii), dtype=np.int32)
+ b_point_ids = np.zeros((len(self._radii), self._num_points), dtype=np.int64)
+
+ self._rw_lock.acquire_read()
+
+ try:
+ if train:
+ [pointcloud_id, grid_center_x, grid_center_y] = np.copy(self.train_sample_idx[index, :])
+ else:
+ [pointcloud_id, grid_center_x, grid_center_y] = np.copy(self.test_sample_idx[index, :])
+ finally:
+ self._rw_lock.release_read()
+
+ pointcloud_id = int(pointcloud_id)
+ pointcloud_data = self.dataset.data[pointcloud_id]
+
+ max_x, max_y, max_z = np.amax(pointcloud_data[:, 0:3], axis=0)
+ min_x, min_y, min_z = np.amin(pointcloud_data[:, 0:3], axis=0)
+
+ diff_x = max_x - min_x
+ diff_y = max_y - min_y
+ diff_z = max_z - min_z
+ noise_x = 0
+ noise_y = 0
+
+ if aug_trans:
+ noise_x = np.random.uniform(-self._grid_spacing / 2.0, self._grid_spacing / 2.0)
+ noise_y = np.random.uniform(-self._grid_spacing / 2.0, self._grid_spacing / 2.0)
+
+ ctr_x = grid_center_x + noise_x
+ ctr_y = grid_center_y + noise_y
+ ctr = np.array([ctr_x, ctr_y])
+
+ ctrs = [ctr for _ in range(len(self._radii))]
+
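+        # query_radius accepts one radius per query point: repeating the same center
+        # with increasing radii yields concentric multi-scale blobs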
+ blob_point_ids_all = self.ball_trees[pointcloud_id].query_radius(ctrs, r=self._radii)
+
+ for radius_id, radius in enumerate(self._radii):
+ blob_point_ids = np.array(blob_point_ids_all[radius_id])
+ blob = pointcloud_data[blob_point_ids, :]
+
+ if blob.shape[0] < self.min_num_points_per_block: # check here if it is empty, set mask to zero
+ b_mask[radius_id] = 0
+ else: # blob is not empty
+ blob, point_ids = self._sample_data(blob, self._num_points)
+
+ # apply normalizations to blob
+ blob[:, :self.dataset.num_features] /= self.dataset.normalization
+
+ # Normalized coordinates
+ additional_feats = np.zeros((self._num_points, 3))
+ blob = np.concatenate((blob, additional_feats), axis=-1)
+
+ blob[:, -1] = blob[:, self.dataset.num_features] # put label to the end
+ blob[:, self.dataset.num_features] = blob[:, 0] / diff_x
+ blob[:, self.dataset.num_features+1] = blob[:, 1] / diff_y
+ blob[:, self.dataset.num_features+2] = blob[:, 2] / diff_z
+ blob[:, 0:2] -= ctr
+
+ b_label[radius_id, :] = blob[:, -1]
+ b_data[radius_id, :, :] = blob[:, :-1]
+ b_point_ids[radius_id, :] = blob_point_ids[point_ids]
+ b_cloud_ids[radius_id] = pointcloud_id
+
+        # reference radius: labels, mask and ids are taken from the blob at index 1
+ b_label = np.reshape(b_label[1, :], (1, b_label.shape[1]))
+ b_mask = np.reshape(b_mask[1], (1))
+ b_point_ids = np.reshape(b_point_ids[1, :],
+ (1, b_point_ids.shape[1]))
+ b_cloud_ids = np.reshape(b_cloud_ids[1], (1))
+
+ return b_data, b_label, b_mask, b_cloud_ids, b_point_ids
+
+
+if __name__ == '__main__':
+ pass
diff --git a/batch_generators/neighboring_grid_batch_generator.py b/batch_generators/neighboring_grid_batch_generator.py
new file mode 100644
index 0000000..4e6dc15
--- /dev/null
+++ b/batch_generators/neighboring_grid_batch_generator.py
@@ -0,0 +1,140 @@
+from .center_batch_generator import *
+
+
+class NeighboringGridBatchGenerator(CenterBatchGenerator):
+ """
+ neighboring grid batch generator where different blobs are placed next to each other
+ """
+
+ def __init__(self, dataset, params):
+ super().__init__(dataset=dataset,
+ num_points=params['num_points'],
+ batch_size=params['batch_size'],
+ grid_spacing=params['grid_spacing'],
+ augmentation=params['augmentation'],
+ metric=params['metric'])
+
+ self._grid_x = params['grid_x']
+ self._grid_y = params['grid_y']
+
+ self._radius = params['radius']
+
+ # flattened version of grid
+ self._num_of_blobs = self._grid_x * self._grid_y
+
+ @property
+ def num_of_blobs(self):
+ return self._num_of_blobs
+
+ @lazy_property
+ def pointclouds_pl(self):
+ return tf.reshape(self.next_element[0], (self._batch_size, self._num_of_blobs,
+ self._num_points, self.dataset.num_features + 3))
+
+ @lazy_property
+ def labels_pl(self):
+ return tf.reshape(self.next_element[1], (self._batch_size, self._num_of_blobs, self._num_points))
+
+ @lazy_property
+ def mask_pl(self):
+ return tf.reshape(self.next_element[2], (self._batch_size, self._num_of_blobs))
+
+ @lazy_property
+ def cloud_ids_pl(self):
+ return tf.reshape(self.next_element[3], (self._batch_size, self._num_of_blobs))
+
+ @lazy_property
+ def point_ids_pl(self):
+ return tf.reshape(self.next_element[4], (self._batch_size, self._num_of_blobs, self._num_points))
+
+ def _generate_blob(self, index, train=True, aug_trans=True):
+ b_data = np.zeros((self._grid_x, self._grid_y, self._num_points, self.dataset.num_features + 3), dtype=np.float32)
+ b_label = np.zeros((self._grid_x, self._grid_y, self._num_points), dtype=np.int8)
+ b_mask = np.ones((self._grid_x, self._grid_y), dtype=np.int8)
+ b_cloud_ids = np.zeros((self._grid_x, self._grid_y), dtype=np.int32)
+ b_point_ids = np.zeros((self._grid_x, self._grid_y, self._num_points), dtype=np.int64)
+
+ self._rw_lock.acquire_read()
+
+ try:
+ if train:
+ [pointcloud_id, grid_center_x, grid_center_y] = np.copy(self.train_sample_idx[index, :])
+ else:
+ [pointcloud_id, grid_center_x, grid_center_y] = np.copy(self.test_sample_idx[index, :])
+ finally:
+ self._rw_lock.release_read()
+
+ pointcloud_id = int(pointcloud_id)
+ pointcloud_data = self.dataset.data[pointcloud_id]
+ max_x, max_y, max_z = np.amax(pointcloud_data[:, 0:3], axis=0)
+ min_x, min_y, min_z = np.amin(pointcloud_data[:, 0:3], axis=0)
+
+ diff_x = max_x - min_x
+ diff_y = max_y - min_y
+ diff_z = max_z - min_z
+
+ noise_x = 0
+ noise_y = 0
+
+ num_points = 0
+
+ while num_points < self.min_num_points_per_block:
+ if aug_trans:
+ noise_x = np.random.uniform(-self._grid_spacing / 2.0, self._grid_spacing / 2.0)
+ noise_y = np.random.uniform(-self._grid_spacing / 2.0, self._grid_spacing / 2.0)
+
+ # Create centers
+ ctrs = []
+ for grid_x in range(self._grid_x):
+ ctr_x = grid_center_x + grid_x * self._grid_spacing + noise_x
+ for grid_y in range(self._grid_y):
+ ctr_y = grid_center_y + grid_y * self._grid_spacing + noise_y
+ ctr = np.array([ctr_x, ctr_y])
+ ctrs.append(ctr)
+
+ blob_point_ids_all = self.ball_trees[pointcloud_id].query_radius(ctrs, r=self._radius)
+ num_points = len(blob_point_ids_all[0])
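+            # if the first grid blob is still too sparse, loop again with a fresh jitter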
+
+ curr_id = -1
+ for grid_x in range(self._grid_x):
+ ctr_x = grid_center_x + grid_x * self._grid_spacing + noise_x
+ for grid_y in range(self._grid_y):
+ ctr_y = grid_center_y + grid_y * self._grid_spacing + noise_y
+ ctr = np.array([ctr_x, ctr_y])
+
+ curr_id += 1
+
+ # New
+ blob_point_ids = blob_point_ids_all[curr_id]
+ blob = np.copy(pointcloud_data[blob_point_ids, :])
+
+ if blob.shape[0] < self.min_num_points_per_block: # check here if it is empty, set mask to zero
+ b_mask[grid_x, grid_y] = 0
+ else: # blob is not empty
+ blob, point_ids = self._sample_data(blob, self._num_points)
+
+ # apply normalizations to blob
+ blob[:, :self.dataset.num_features] /= self.dataset.normalization
+
+ # Normalized coordinates
+ additional_feats = np.zeros((self._num_points, 3))
+ blob = np.concatenate((blob, additional_feats), axis=-1)
+
+ blob[:, -1] = blob[:, self.dataset.num_features] # put label to the end
+ blob[:, self.dataset.num_features] = blob[:, 0] / diff_x
+ blob[:, self.dataset.num_features + 1] = blob[:, 1] / diff_y
+ blob[:, self.dataset.num_features + 2] = blob[:, 2] / diff_z
+ blob[:, 0:2] -= ctr
+
+ b_label[grid_x, grid_y, :] = blob[:, -1]
+ b_data[grid_x, grid_y, :, :] = blob[:, :-1]
+ b_point_ids[grid_x, grid_y, :] = blob_point_ids[point_ids]
+
+ b_cloud_ids[grid_x, grid_y] = pointcloud_id
+
+ return b_data, b_label, b_mask, b_cloud_ids, b_point_ids
+
+
+if __name__ == '__main__':
+ pass
diff --git a/datasets/__init__.py b/datasets/__init__.py
new file mode 100644
index 0000000..caa4d6d
--- /dev/null
+++ b/datasets/__init__.py
@@ -0,0 +1 @@
+from datasets.general_dataset import *
diff --git a/datasets/color_constants.py b/datasets/color_constants.py
new file mode 100644
index 0000000..7b448ef
--- /dev/null
+++ b/datasets/color_constants.py
@@ -0,0 +1,1144 @@
+"""
+Provide RGB color constants and a colors dictionary with
+elements formatted: colors[colorname] = CONSTANT
+
+adapted from: https://www.webucator.com/blog/2015/03/python-color-constants-module/
+"""
+
+from collections import namedtuple, OrderedDict
+import numpy as np
+
+Color = namedtuple('RGB', 'red, green, blue')
+colors = {} # dict of colors
+
+
+class RGB(Color):
+
+ def hex_format(self):
+ """
+ Returns color in hex format
+ """
+ return '#{:02X}{:02X}{:02X}'.format(self.red, self.green, self.blue)
+
+ @property
+ def npy(self):
+ """
+ Returns npy formatted rgb color
+ :return:
+ """
+ return np.array([self.red, self.green, self.blue])
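+
+
+# Usage sketch (illustration only):
+#   RGB(255, 0, 0).hex_format()  ->  '#FF0000'
+#   RGB(255, 0, 0).npy           ->  array([255,   0,   0])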
+
+
+# Color constants
+ALICEBLUE = RGB(240, 248, 255)
+ANTIQUEWHITE = RGB(250, 235, 215)
+ANTIQUEWHITE1 = RGB(255, 239, 219)
+ANTIQUEWHITE2 = RGB(238, 223, 204)
+ANTIQUEWHITE3 = RGB(205, 192, 176)
+ANTIQUEWHITE4 = RGB(139, 131, 120)
+AQUA = RGB(0, 255, 255)
+AQUAMARINE1 = RGB(127, 255, 212)
+AQUAMARINE2 = RGB(118, 238, 198)
+AQUAMARINE3 = RGB(102, 205, 170)
+AQUAMARINE4 = RGB(69, 139, 116)
+AZURE1 = RGB(240, 255, 255)
+AZURE2 = RGB(224, 238, 238)
+AZURE3 = RGB(193, 205, 205)
+AZURE4 = RGB(131, 139, 139)
+BANANA = RGB(227, 207, 87)
+BEIGE = RGB(245, 245, 220)
+BISQUE1 = RGB(255, 228, 196)
+BISQUE2 = RGB(238, 213, 183)
+BISQUE3 = RGB(205, 183, 158)
+BISQUE4 = RGB(139, 125, 107)
+BLACK = RGB(0, 0, 0)
+BLANCHEDALMOND = RGB(255, 235, 205)
+BLUE = RGB(0, 0, 255)
+BLUE2 = RGB(0, 0, 238)
+BLUE3 = RGB(0, 0, 205)
+BLUE4 = RGB(0, 0, 139)
+BLUEVIOLET = RGB(138, 43, 226)
+BRICK = RGB(156, 102, 31)
+BROWN = RGB(165, 42, 42)
+BROWN1 = RGB(255, 64, 64)
+BROWN2 = RGB(238, 59, 59)
+BROWN3 = RGB(205, 51, 51)
+BROWN4 = RGB(139, 35, 35)
+BURLYWOOD = RGB(222, 184, 135)
+BURLYWOOD1 = RGB(255, 211, 155)
+BURLYWOOD2 = RGB(238, 197, 145)
+BURLYWOOD3 = RGB(205, 170, 125)
+BURLYWOOD4 = RGB(139, 115, 85)
+BURNTSIENNA = RGB(138, 54, 15)
+BURNTUMBER = RGB(138, 51, 36)
+CADETBLUE = RGB(95, 158, 160)
+CADETBLUE1 = RGB(152, 245, 255)
+CADETBLUE2 = RGB(142, 229, 238)
+CADETBLUE3 = RGB(122, 197, 205)
+CADETBLUE4 = RGB(83, 134, 139)
+CADMIUMORANGE = RGB(255, 97, 3)
+CADMIUMYELLOW = RGB(255, 153, 18)
+CARROT = RGB(237, 145, 33)
+CHARTREUSE1 = RGB(127, 255, 0)
+CHARTREUSE2 = RGB(118, 238, 0)
+CHARTREUSE3 = RGB(102, 205, 0)
+CHARTREUSE4 = RGB(69, 139, 0)
+CHOCOLATE = RGB(210, 105, 30)
+CHOCOLATE1 = RGB(255, 127, 36)
+CHOCOLATE2 = RGB(238, 118, 33)
+CHOCOLATE3 = RGB(205, 102, 29)
+CHOCOLATE4 = RGB(139, 69, 19)
+COBALT = RGB(61, 89, 171)
+COBALTGREEN = RGB(61, 145, 64)
+COLDGREY = RGB(128, 138, 135)
+CORAL = RGB(255, 127, 80)
+CORAL1 = RGB(255, 114, 86)
+CORAL2 = RGB(238, 106, 80)
+CORAL3 = RGB(205, 91, 69)
+CORAL4 = RGB(139, 62, 47)
+CORNFLOWERBLUE = RGB(100, 149, 237)
+CORNSILK1 = RGB(255, 248, 220)
+CORNSILK2 = RGB(238, 232, 205)
+CORNSILK3 = RGB(205, 200, 177)
+CORNSILK4 = RGB(139, 136, 120)
+CRIMSON = RGB(220, 20, 60)
+CYAN2 = RGB(0, 238, 238)
+CYAN3 = RGB(0, 205, 205)
+CYAN4 = RGB(0, 139, 139)
+DARKGOLDENROD = RGB(184, 134, 11)
+DARKGOLDENROD1 = RGB(255, 185, 15)
+DARKGOLDENROD2 = RGB(238, 173, 14)
+DARKGOLDENROD3 = RGB(205, 149, 12)
+DARKGOLDENROD4 = RGB(139, 101, 8)
+DARKGRAY = RGB(169, 169, 169)
+DARKGREEN = RGB(0, 100, 0)
+DARKKHAKI = RGB(189, 183, 107)
+DARKOLIVEGREEN = RGB(85, 107, 47)
+DARKOLIVEGREEN1 = RGB(202, 255, 112)
+DARKOLIVEGREEN2 = RGB(188, 238, 104)
+DARKOLIVEGREEN3 = RGB(162, 205, 90)
+DARKOLIVEGREEN4 = RGB(110, 139, 61)
+DARKORANGE = RGB(255, 140, 0)
+DARKORANGE1 = RGB(255, 127, 0)
+DARKORANGE2 = RGB(238, 118, 0)
+DARKORANGE3 = RGB(205, 102, 0)
+DARKORANGE4 = RGB(139, 69, 0)
+DARKORCHID = RGB(153, 50, 204)
+DARKORCHID1 = RGB(191, 62, 255)
+DARKORCHID2 = RGB(178, 58, 238)
+DARKORCHID3 = RGB(154, 50, 205)
+DARKORCHID4 = RGB(104, 34, 139)
+DARKSALMON = RGB(233, 150, 122)
+DARKSEAGREEN = RGB(143, 188, 143)
+DARKSEAGREEN1 = RGB(193, 255, 193)
+DARKSEAGREEN2 = RGB(180, 238, 180)
+DARKSEAGREEN3 = RGB(155, 205, 155)
+DARKSEAGREEN4 = RGB(105, 139, 105)
+DARKSLATEBLUE = RGB(72, 61, 139)
+DARKSLATEGRAY = RGB(47, 79, 79)
+DARKSLATEGRAY1 = RGB(151, 255, 255)
+DARKSLATEGRAY2 = RGB(141, 238, 238)
+DARKSLATEGRAY3 = RGB(121, 205, 205)
+DARKSLATEGRAY4 = RGB(82, 139, 139)
+DARKTURQUOISE = RGB(0, 206, 209)
+DARKVIOLET = RGB(148, 0, 211)
+DEEPPINK1 = RGB(255, 20, 147)
+DEEPPINK2 = RGB(238, 18, 137)
+DEEPPINK3 = RGB(205, 16, 118)
+DEEPPINK4 = RGB(139, 10, 80)
+DEEPSKYBLUE1 = RGB(0, 191, 255)
+DEEPSKYBLUE2 = RGB(0, 178, 238)
+DEEPSKYBLUE3 = RGB(0, 154, 205)
+DEEPSKYBLUE4 = RGB(0, 104, 139)
+DIMGRAY = RGB(105, 105, 105)
+DODGERBLUE1 = RGB(30, 144, 255)
+DODGERBLUE2 = RGB(28, 134, 238)
+DODGERBLUE3 = RGB(24, 116, 205)
+DODGERBLUE4 = RGB(16, 78, 139)
+EGGSHELL = RGB(252, 230, 201)
+EMERALDGREEN = RGB(0, 201, 87)
+FIREBRICK = RGB(178, 34, 34)
+FIREBRICK1 = RGB(255, 48, 48)
+FIREBRICK2 = RGB(238, 44, 44)
+FIREBRICK3 = RGB(205, 38, 38)
+FIREBRICK4 = RGB(139, 26, 26)
+FLESH = RGB(255, 125, 64)
+FLORALWHITE = RGB(255, 250, 240)
+FORESTGREEN = RGB(34, 139, 34)
+GAINSBORO = RGB(220, 220, 220)
+GHOSTWHITE = RGB(248, 248, 255)
+GOLD1 = RGB(255, 215, 0)
+GOLD2 = RGB(238, 201, 0)
+GOLD3 = RGB(205, 173, 0)
+GOLD4 = RGB(139, 117, 0)
+GOLDENROD = RGB(218, 165, 32)
+GOLDENROD1 = RGB(255, 193, 37)
+GOLDENROD2 = RGB(238, 180, 34)
+GOLDENROD3 = RGB(205, 155, 29)
+GOLDENROD4 = RGB(139, 105, 20)
+GRAY = RGB(128, 128, 128)
+GRAY1 = RGB(3, 3, 3)
+GRAY10 = RGB(26, 26, 26)
+GRAY11 = RGB(28, 28, 28)
+GRAY12 = RGB(31, 31, 31)
+GRAY13 = RGB(33, 33, 33)
+GRAY14 = RGB(36, 36, 36)
+GRAY15 = RGB(38, 38, 38)
+GRAY16 = RGB(41, 41, 41)
+GRAY17 = RGB(43, 43, 43)
+GRAY18 = RGB(46, 46, 46)
+GRAY19 = RGB(48, 48, 48)
+GRAY2 = RGB(5, 5, 5)
+GRAY20 = RGB(51, 51, 51)
+GRAY21 = RGB(54, 54, 54)
+GRAY22 = RGB(56, 56, 56)
+GRAY23 = RGB(59, 59, 59)
+GRAY24 = RGB(61, 61, 61)
+GRAY25 = RGB(64, 64, 64)
+GRAY26 = RGB(66, 66, 66)
+GRAY27 = RGB(69, 69, 69)
+GRAY28 = RGB(71, 71, 71)
+GRAY29 = RGB(74, 74, 74)
+GRAY3 = RGB(8, 8, 8)
+GRAY30 = RGB(77, 77, 77)
+GRAY31 = RGB(79, 79, 79)
+GRAY32 = RGB(82, 82, 82)
+GRAY33 = RGB(84, 84, 84)
+GRAY34 = RGB(87, 87, 87)
+GRAY35 = RGB(89, 89, 89)
+GRAY36 = RGB(92, 92, 92)
+GRAY37 = RGB(94, 94, 94)
+GRAY38 = RGB(97, 97, 97)
+GRAY39 = RGB(99, 99, 99)
+GRAY4 = RGB(10, 10, 10)
+GRAY40 = RGB(102, 102, 102)
+GRAY42 = RGB(107, 107, 107)
+GRAY43 = RGB(110, 110, 110)
+GRAY44 = RGB(112, 112, 112)
+GRAY45 = RGB(115, 115, 115)
+GRAY46 = RGB(117, 117, 117)
+GRAY47 = RGB(120, 120, 120)
+GRAY48 = RGB(122, 122, 122)
+GRAY49 = RGB(125, 125, 125)
+GRAY5 = RGB(13, 13, 13)
+GRAY50 = RGB(127, 127, 127)
+GRAY51 = RGB(130, 130, 130)
+GRAY52 = RGB(133, 133, 133)
+GRAY53 = RGB(135, 135, 135)
+GRAY54 = RGB(138, 138, 138)
+GRAY55 = RGB(140, 140, 140)
+GRAY56 = RGB(143, 143, 143)
+GRAY57 = RGB(145, 145, 145)
+GRAY58 = RGB(148, 148, 148)
+GRAY59 = RGB(150, 150, 150)
+GRAY6 = RGB(15, 15, 15)
+GRAY60 = RGB(153, 153, 153)
+GRAY61 = RGB(156, 156, 156)
+GRAY62 = RGB(158, 158, 158)
+GRAY63 = RGB(161, 161, 161)
+GRAY64 = RGB(163, 163, 163)
+GRAY65 = RGB(166, 166, 166)
+GRAY66 = RGB(168, 168, 168)
+GRAY67 = RGB(171, 171, 171)
+GRAY68 = RGB(173, 173, 173)
+GRAY69 = RGB(176, 176, 176)
+GRAY7 = RGB(18, 18, 18)
+GRAY70 = RGB(179, 179, 179)
+GRAY71 = RGB(181, 181, 181)
+GRAY72 = RGB(184, 184, 184)
+GRAY73 = RGB(186, 186, 186)
+GRAY74 = RGB(189, 189, 189)
+GRAY75 = RGB(191, 191, 191)
+GRAY76 = RGB(194, 194, 194)
+GRAY77 = RGB(196, 196, 196)
+GRAY78 = RGB(199, 199, 199)
+GRAY79 = RGB(201, 201, 201)
+GRAY8 = RGB(20, 20, 20)
+GRAY80 = RGB(204, 204, 204)
+GRAY81 = RGB(207, 207, 207)
+GRAY82 = RGB(209, 209, 209)
+GRAY83 = RGB(212, 212, 212)
+GRAY84 = RGB(214, 214, 214)
+GRAY85 = RGB(217, 217, 217)
+GRAY86 = RGB(219, 219, 219)
+GRAY87 = RGB(222, 222, 222)
+GRAY88 = RGB(224, 224, 224)
+GRAY89 = RGB(227, 227, 227)
+GRAY9 = RGB(23, 23, 23)
+GRAY90 = RGB(229, 229, 229)
+GRAY91 = RGB(232, 232, 232)
+GRAY92 = RGB(235, 235, 235)
+GRAY93 = RGB(237, 237, 237)
+GRAY94 = RGB(240, 240, 240)
+GRAY95 = RGB(242, 242, 242)
+GRAY97 = RGB(247, 247, 247)
+GRAY98 = RGB(250, 250, 250)
+GRAY99 = RGB(252, 252, 252)
+GREEN = RGB(0, 128, 0)
+GREEN1 = RGB(0, 255, 0)
+GREEN2 = RGB(0, 238, 0)
+GREEN3 = RGB(0, 205, 0)
+GREEN4 = RGB(0, 139, 0)
+GREENYELLOW = RGB(173, 255, 47)
+HONEYDEW1 = RGB(240, 255, 240)
+HONEYDEW2 = RGB(224, 238, 224)
+HONEYDEW3 = RGB(193, 205, 193)
+HONEYDEW4 = RGB(131, 139, 131)
+HOTPINK = RGB(255, 105, 180)
+HOTPINK1 = RGB(255, 110, 180)
+HOTPINK2 = RGB(238, 106, 167)
+HOTPINK3 = RGB(205, 96, 144)
+HOTPINK4 = RGB(139, 58, 98)
+INDIANRED = RGB(205, 92, 92)
+INDIANRED1 = RGB(255, 106, 106)
+INDIANRED2 = RGB(238, 99, 99)
+INDIANRED3 = RGB(205, 85, 85)
+INDIANRED4 = RGB(139, 58, 58)
+INDIGO = RGB(75, 0, 130)
+IVORY1 = RGB(255, 255, 240)
+IVORY2 = RGB(238, 238, 224)
+IVORY3 = RGB(205, 205, 193)
+IVORY4 = RGB(139, 139, 131)
+IVORYBLACK = RGB(41, 36, 33)
+KHAKI = RGB(240, 230, 140)
+KHAKI1 = RGB(255, 246, 143)
+KHAKI2 = RGB(238, 230, 133)
+KHAKI3 = RGB(205, 198, 115)
+KHAKI4 = RGB(139, 134, 78)
+LAVENDER = RGB(230, 230, 250)
+LAVENDERBLUSH1 = RGB(255, 240, 245)
+LAVENDERBLUSH2 = RGB(238, 224, 229)
+LAVENDERBLUSH3 = RGB(205, 193, 197)
+LAVENDERBLUSH4 = RGB(139, 131, 134)
+LAWNGREEN = RGB(124, 252, 0)
+LEMONCHIFFON1 = RGB(255, 250, 205)
+LEMONCHIFFON2 = RGB(238, 233, 191)
+LEMONCHIFFON3 = RGB(205, 201, 165)
+LEMONCHIFFON4 = RGB(139, 137, 112)
+LIGHTBLUE = RGB(173, 216, 230)
+LIGHTBLUE1 = RGB(191, 239, 255)
+LIGHTBLUE2 = RGB(178, 223, 238)
+LIGHTBLUE3 = RGB(154, 192, 205)
+LIGHTBLUE4 = RGB(104, 131, 139)
+LIGHTCORAL = RGB(240, 128, 128)
+LIGHTCYAN1 = RGB(224, 255, 255)
+LIGHTCYAN2 = RGB(209, 238, 238)
+LIGHTCYAN3 = RGB(180, 205, 205)
+LIGHTCYAN4 = RGB(122, 139, 139)
+LIGHTGOLDENROD1 = RGB(255, 236, 139)
+LIGHTGOLDENROD2 = RGB(238, 220, 130)
+LIGHTGOLDENROD3 = RGB(205, 190, 112)
+LIGHTGOLDENROD4 = RGB(139, 129, 76)
+LIGHTGOLDENRODYELLOW = RGB(250, 250, 210)
+LIGHTGREY = RGB(211, 211, 211)
+LIGHTPINK = RGB(255, 182, 193)
+LIGHTPINK1 = RGB(255, 174, 185)
+LIGHTPINK2 = RGB(238, 162, 173)
+LIGHTPINK3 = RGB(205, 140, 149)
+LIGHTPINK4 = RGB(139, 95, 101)
+LIGHTSALMON1 = RGB(255, 160, 122)
+LIGHTSALMON2 = RGB(238, 149, 114)
+LIGHTSALMON3 = RGB(205, 129, 98)
+LIGHTSALMON4 = RGB(139, 87, 66)
+LIGHTSEAGREEN = RGB(32, 178, 170)
+LIGHTSKYBLUE = RGB(135, 206, 250)
+LIGHTSKYBLUE1 = RGB(176, 226, 255)
+LIGHTSKYBLUE2 = RGB(164, 211, 238)
+LIGHTSKYBLUE3 = RGB(141, 182, 205)
+LIGHTSKYBLUE4 = RGB(96, 123, 139)
+LIGHTSLATEBLUE = RGB(132, 112, 255)
+LIGHTSLATEGRAY = RGB(119, 136, 153)
+LIGHTSTEELBLUE = RGB(176, 196, 222)
+LIGHTSTEELBLUE1 = RGB(202, 225, 255)
+LIGHTSTEELBLUE2 = RGB(188, 210, 238)
+LIGHTSTEELBLUE3 = RGB(162, 181, 205)
+LIGHTSTEELBLUE4 = RGB(110, 123, 139)
+LIGHTYELLOW1 = RGB(255, 255, 224)
+LIGHTYELLOW2 = RGB(238, 238, 209)
+LIGHTYELLOW3 = RGB(205, 205, 180)
+LIGHTYELLOW4 = RGB(139, 139, 122)
+LIMEGREEN = RGB(50, 205, 50)
+LINEN = RGB(250, 240, 230)
+MAGENTA = RGB(255, 0, 255)
+MAGENTA2 = RGB(238, 0, 238)
+MAGENTA3 = RGB(205, 0, 205)
+MAGENTA4 = RGB(139, 0, 139)
+MANGANESEBLUE = RGB(3, 168, 158)
+MAROON = RGB(128, 0, 0)
+MAROON1 = RGB(255, 52, 179)
+MAROON2 = RGB(238, 48, 167)
+MAROON3 = RGB(205, 41, 144)
+MAROON4 = RGB(139, 28, 98)
+MEDIUMORCHID = RGB(186, 85, 211)
+MEDIUMORCHID1 = RGB(224, 102, 255)
+MEDIUMORCHID2 = RGB(209, 95, 238)
+MEDIUMORCHID3 = RGB(180, 82, 205)
+MEDIUMORCHID4 = RGB(122, 55, 139)
+MEDIUMPURPLE = RGB(147, 112, 219)
+MEDIUMPURPLE1 = RGB(171, 130, 255)
+MEDIUMPURPLE2 = RGB(159, 121, 238)
+MEDIUMPURPLE3 = RGB(137, 104, 205)
+MEDIUMPURPLE4 = RGB(93, 71, 139)
+MEDIUMSEAGREEN = RGB(60, 179, 113)
+MEDIUMSLATEBLUE = RGB(123, 104, 238)
+MEDIUMSPRINGGREEN = RGB(0, 250, 154)
+MEDIUMTURQUOISE = RGB(72, 209, 204)
+MEDIUMVIOLETRED = RGB(199, 21, 133)
+MELON = RGB(227, 168, 105)
+MIDNIGHTBLUE = RGB(25, 25, 112)
+MINT = RGB(189, 252, 201)
+MINTCREAM = RGB(245, 255, 250)
+MISTYROSE1 = RGB(255, 228, 225)
+MISTYROSE2 = RGB(238, 213, 210)
+MISTYROSE3 = RGB(205, 183, 181)
+MISTYROSE4 = RGB(139, 125, 123)
+MOCCASIN = RGB(255, 228, 181)
+NAVAJOWHITE1 = RGB(255, 222, 173)
+NAVAJOWHITE2 = RGB(238, 207, 161)
+NAVAJOWHITE3 = RGB(205, 179, 139)
+NAVAJOWHITE4 = RGB(139, 121, 94)
+NAVY = RGB(0, 0, 128)
+OLDLACE = RGB(253, 245, 230)
+OLIVE = RGB(128, 128, 0)
+OLIVEDRAB = RGB(107, 142, 35)
+OLIVEDRAB1 = RGB(192, 255, 62)
+OLIVEDRAB2 = RGB(179, 238, 58)
+OLIVEDRAB3 = RGB(154, 205, 50)
+OLIVEDRAB4 = RGB(105, 139, 34)
+ORANGE = RGB(255, 128, 0)
+ORANGE1 = RGB(255, 165, 0)
+ORANGE2 = RGB(238, 154, 0)
+ORANGE3 = RGB(205, 133, 0)
+ORANGE4 = RGB(139, 90, 0)
+ORANGERED1 = RGB(255, 69, 0)
+ORANGERED2 = RGB(238, 64, 0)
+ORANGERED3 = RGB(205, 55, 0)
+ORANGERED4 = RGB(139, 37, 0)
+ORCHID = RGB(218, 112, 214)
+ORCHID1 = RGB(255, 131, 250)
+ORCHID2 = RGB(238, 122, 233)
+ORCHID3 = RGB(205, 105, 201)
+ORCHID4 = RGB(139, 71, 137)
+PALEGOLDENROD = RGB(238, 232, 170)
+PALEGREEN = RGB(152, 251, 152)
+PALEGREEN1 = RGB(154, 255, 154)
+PALEGREEN2 = RGB(144, 238, 144)
+PALEGREEN3 = RGB(124, 205, 124)
+PALEGREEN4 = RGB(84, 139, 84)
+PALETURQUOISE1 = RGB(187, 255, 255)
+PALETURQUOISE2 = RGB(174, 238, 238)
+PALETURQUOISE3 = RGB(150, 205, 205)
+PALETURQUOISE4 = RGB(102, 139, 139)
+PALEVIOLETRED = RGB(219, 112, 147)
+PALEVIOLETRED1 = RGB(255, 130, 171)
+PALEVIOLETRED2 = RGB(238, 121, 159)
+PALEVIOLETRED3 = RGB(205, 104, 137)
+PALEVIOLETRED4 = RGB(139, 71, 93)
+PAPAYAWHIP = RGB(255, 239, 213)
+PEACHPUFF1 = RGB(255, 218, 185)
+PEACHPUFF2 = RGB(238, 203, 173)
+PEACHPUFF3 = RGB(205, 175, 149)
+PEACHPUFF4 = RGB(139, 119, 101)
+PEACOCK = RGB(51, 161, 201)
+PINK = RGB(255, 192, 203)
+PINK1 = RGB(255, 181, 197)
+PINK2 = RGB(238, 169, 184)
+PINK3 = RGB(205, 145, 158)
+PINK4 = RGB(139, 99, 108)
+PLUM = RGB(221, 160, 221)
+PLUM1 = RGB(255, 187, 255)
+PLUM2 = RGB(238, 174, 238)
+PLUM3 = RGB(205, 150, 205)
+PLUM4 = RGB(139, 102, 139)
+POWDERBLUE = RGB(176, 224, 230)
+PURPLE = RGB(128, 0, 128)
+PURPLE1 = RGB(155, 48, 255)
+PURPLE2 = RGB(145, 44, 238)
+PURPLE3 = RGB(125, 38, 205)
+PURPLE4 = RGB(85, 26, 139)
+RASPBERRY = RGB(135, 38, 87)
+RAWSIENNA = RGB(199, 97, 20)
+RED1 = RGB(255, 0, 0)
+RED2 = RGB(238, 0, 0)
+RED3 = RGB(205, 0, 0)
+RED4 = RGB(139, 0, 0)
+ROSYBROWN = RGB(188, 143, 143)
+ROSYBROWN1 = RGB(255, 193, 193)
+ROSYBROWN2 = RGB(238, 180, 180)
+ROSYBROWN3 = RGB(205, 155, 155)
+ROSYBROWN4 = RGB(139, 105, 105)
+ROYALBLUE = RGB(65, 105, 225)
+ROYALBLUE1 = RGB(72, 118, 255)
+ROYALBLUE2 = RGB(67, 110, 238)
+ROYALBLUE3 = RGB(58, 95, 205)
+ROYALBLUE4 = RGB(39, 64, 139)
+SALMON = RGB(250, 128, 114)
+SALMON1 = RGB(255, 140, 105)
+SALMON2 = RGB(238, 130, 98)
+SALMON3 = RGB(205, 112, 84)
+SALMON4 = RGB(139, 76, 57)
+SANDYBROWN = RGB(244, 164, 96)
+SAPGREEN = RGB(48, 128, 20)
+SEAGREEN1 = RGB(84, 255, 159)
+SEAGREEN2 = RGB(78, 238, 148)
+SEAGREEN3 = RGB(67, 205, 128)
+SEAGREEN4 = RGB(46, 139, 87)
+SEASHELL1 = RGB(255, 245, 238)
+SEASHELL2 = RGB(238, 229, 222)
+SEASHELL3 = RGB(205, 197, 191)
+SEASHELL4 = RGB(139, 134, 130)
+SEPIA = RGB(94, 38, 18)
+SGIBEET = RGB(142, 56, 142)
+SGIBRIGHTGRAY = RGB(197, 193, 170)
+SGICHARTREUSE = RGB(113, 198, 113)
+SGIDARKGRAY = RGB(85, 85, 85)
+SGIGRAY12 = RGB(30, 30, 30)
+SGIGRAY16 = RGB(40, 40, 40)
+SGIGRAY32 = RGB(81, 81, 81)
+SGIGRAY36 = RGB(91, 91, 91)
+SGIGRAY52 = RGB(132, 132, 132)
+SGIGRAY56 = RGB(142, 142, 142)
+SGIGRAY72 = RGB(183, 183, 183)
+SGIGRAY76 = RGB(193, 193, 193)
+SGIGRAY92 = RGB(234, 234, 234)
+SGIGRAY96 = RGB(244, 244, 244)
+SGILIGHTBLUE = RGB(125, 158, 192)
+SGILIGHTGRAY = RGB(170, 170, 170)
+SGIOLIVEDRAB = RGB(142, 142, 56)
+SGISALMON = RGB(198, 113, 113)
+SGISLATEBLUE = RGB(113, 113, 198)
+SGITEAL = RGB(56, 142, 142)
+SIENNA = RGB(160, 82, 45)
+SIENNA1 = RGB(255, 130, 71)
+SIENNA2 = RGB(238, 121, 66)
+SIENNA3 = RGB(205, 104, 57)
+SIENNA4 = RGB(139, 71, 38)
+SILVER = RGB(192, 192, 192)
+SKYBLUE = RGB(135, 206, 235)
+SKYBLUE1 = RGB(135, 206, 255)
+SKYBLUE2 = RGB(126, 192, 238)
+SKYBLUE3 = RGB(108, 166, 205)
+SKYBLUE4 = RGB(74, 112, 139)
+SLATEBLUE = RGB(106, 90, 205)
+SLATEBLUE1 = RGB(131, 111, 255)
+SLATEBLUE2 = RGB(122, 103, 238)
+SLATEBLUE3 = RGB(105, 89, 205)
+SLATEBLUE4 = RGB(71, 60, 139)
+SLATEGRAY = RGB(112, 128, 144)
+SLATEGRAY1 = RGB(198, 226, 255)
+SLATEGRAY2 = RGB(185, 211, 238)
+SLATEGRAY3 = RGB(159, 182, 205)
+SLATEGRAY4 = RGB(108, 123, 139)
+SNOW1 = RGB(255, 250, 250)
+SNOW2 = RGB(238, 233, 233)
+SNOW3 = RGB(205, 201, 201)
+SNOW4 = RGB(139, 137, 137)
+SPRINGGREEN = RGB(0, 255, 127)
+SPRINGGREEN1 = RGB(0, 238, 118)
+SPRINGGREEN2 = RGB(0, 205, 102)
+SPRINGGREEN3 = RGB(0, 139, 69)
+STEELBLUE = RGB(70, 130, 180)
+STEELBLUE1 = RGB(99, 184, 255)
+STEELBLUE2 = RGB(92, 172, 238)
+STEELBLUE3 = RGB(79, 148, 205)
+STEELBLUE4 = RGB(54, 100, 139)
+TAN = RGB(210, 180, 140)
+TAN1 = RGB(255, 165, 79)
+TAN2 = RGB(238, 154, 73)
+TAN3 = RGB(205, 133, 63)
+TAN4 = RGB(139, 90, 43)
+TEAL = RGB(0, 128, 128)
+THISTLE = RGB(216, 191, 216)
+THISTLE1 = RGB(255, 225, 255)
+THISTLE2 = RGB(238, 210, 238)
+THISTLE3 = RGB(205, 181, 205)
+THISTLE4 = RGB(139, 123, 139)
+TOMATO1 = RGB(255, 99, 71)
+TOMATO2 = RGB(238, 92, 66)
+TOMATO3 = RGB(205, 79, 57)
+TOMATO4 = RGB(139, 54, 38)
+TURQUOISE = RGB(64, 224, 208)
+TURQUOISE1 = RGB(0, 245, 255)
+TURQUOISE2 = RGB(0, 229, 238)
+TURQUOISE3 = RGB(0, 197, 205)
+TURQUOISE4 = RGB(0, 134, 139)
+TURQUOISEBLUE = RGB(0, 199, 140)
+VIOLET = RGB(238, 130, 238)
+VIOLETRED = RGB(208, 32, 144)
+VIOLETRED1 = RGB(255, 62, 150)
+VIOLETRED2 = RGB(238, 58, 140)
+VIOLETRED3 = RGB(205, 50, 120)
+VIOLETRED4 = RGB(139, 34, 82)
+WARMGREY = RGB(128, 128, 105)
+WHEAT = RGB(245, 222, 179)
+WHEAT1 = RGB(255, 231, 186)
+WHEAT2 = RGB(238, 216, 174)
+WHEAT3 = RGB(205, 186, 150)
+WHEAT4 = RGB(139, 126, 102)
+WHITE = RGB(255, 255, 255)
+WHITESMOKE = RGB(245, 245, 245)
+YELLOW1 = RGB(255, 255, 0)
+YELLOW2 = RGB(238, 238, 0)
+YELLOW3 = RGB(205, 205, 0)
+YELLOW4 = RGB(139, 139, 0)
+
+# Add colors to colors dictionary
+colors['aliceblue'] = ALICEBLUE
+colors['antiquewhite'] = ANTIQUEWHITE
+colors['antiquewhite1'] = ANTIQUEWHITE1
+colors['antiquewhite2'] = ANTIQUEWHITE2
+colors['antiquewhite3'] = ANTIQUEWHITE3
+colors['antiquewhite4'] = ANTIQUEWHITE4
+colors['aqua'] = AQUA
+colors['aquamarine1'] = AQUAMARINE1
+colors['aquamarine2'] = AQUAMARINE2
+colors['aquamarine3'] = AQUAMARINE3
+colors['aquamarine4'] = AQUAMARINE4
+colors['azure1'] = AZURE1
+colors['azure2'] = AZURE2
+colors['azure3'] = AZURE3
+colors['azure4'] = AZURE4
+colors['banana'] = BANANA
+colors['beige'] = BEIGE
+colors['bisque1'] = BISQUE1
+colors['bisque2'] = BISQUE2
+colors['bisque3'] = BISQUE3
+colors['bisque4'] = BISQUE4
+colors['black'] = BLACK
+colors['blanchedalmond'] = BLANCHEDALMOND
+colors['blue'] = BLUE
+colors['blue2'] = BLUE2
+colors['blue3'] = BLUE3
+colors['blue4'] = BLUE4
+colors['blueviolet'] = BLUEVIOLET
+colors['brick'] = BRICK
+colors['brown'] = BROWN
+colors['brown1'] = BROWN1
+colors['brown2'] = BROWN2
+colors['brown3'] = BROWN3
+colors['brown4'] = BROWN4
+colors['burlywood'] = BURLYWOOD
+colors['burlywood1'] = BURLYWOOD1
+colors['burlywood2'] = BURLYWOOD2
+colors['burlywood3'] = BURLYWOOD3
+colors['burlywood4'] = BURLYWOOD4
+colors['burntsienna'] = BURNTSIENNA
+colors['burntumber'] = BURNTUMBER
+colors['cadetblue'] = CADETBLUE
+colors['cadetblue1'] = CADETBLUE1
+colors['cadetblue2'] = CADETBLUE2
+colors['cadetblue3'] = CADETBLUE3
+colors['cadetblue4'] = CADETBLUE4
+colors['cadmiumorange'] = CADMIUMORANGE
+colors['cadmiumyellow'] = CADMIUMYELLOW
+colors['carrot'] = CARROT
+colors['chartreuse1'] = CHARTREUSE1
+colors['chartreuse2'] = CHARTREUSE2
+colors['chartreuse3'] = CHARTREUSE3
+colors['chartreuse4'] = CHARTREUSE4
+colors['chocolate'] = CHOCOLATE
+colors['chocolate1'] = CHOCOLATE1
+colors['chocolate2'] = CHOCOLATE2
+colors['chocolate3'] = CHOCOLATE3
+colors['chocolate4'] = CHOCOLATE4
+colors['cobalt'] = COBALT
+colors['cobaltgreen'] = COBALTGREEN
+colors['coldgrey'] = COLDGREY
+colors['coral'] = CORAL
+colors['coral1'] = CORAL1
+colors['coral2'] = CORAL2
+colors['coral3'] = CORAL3
+colors['coral4'] = CORAL4
+colors['cornflowerblue'] = CORNFLOWERBLUE
+colors['cornsilk1'] = CORNSILK1
+colors['cornsilk2'] = CORNSILK2
+colors['cornsilk3'] = CORNSILK3
+colors['cornsilk4'] = CORNSILK4
+colors['crimson'] = CRIMSON
+colors['cyan2'] = CYAN2
+colors['cyan3'] = CYAN3
+colors['cyan4'] = CYAN4
+colors['darkgoldenrod'] = DARKGOLDENROD
+colors['darkgoldenrod1'] = DARKGOLDENROD1
+colors['darkgoldenrod2'] = DARKGOLDENROD2
+colors['darkgoldenrod3'] = DARKGOLDENROD3
+colors['darkgoldenrod4'] = DARKGOLDENROD4
+colors['darkgray'] = DARKGRAY
+colors['darkgreen'] = DARKGREEN
+colors['darkkhaki'] = DARKKHAKI
+colors['darkolivegreen'] = DARKOLIVEGREEN
+colors['darkolivegreen1'] = DARKOLIVEGREEN1
+colors['darkolivegreen2'] = DARKOLIVEGREEN2
+colors['darkolivegreen3'] = DARKOLIVEGREEN3
+colors['darkolivegreen4'] = DARKOLIVEGREEN4
+colors['darkorange'] = DARKORANGE
+colors['darkorange1'] = DARKORANGE1
+colors['darkorange2'] = DARKORANGE2
+colors['darkorange3'] = DARKORANGE3
+colors['darkorange4'] = DARKORANGE4
+colors['darkorchid'] = DARKORCHID
+colors['darkorchid1'] = DARKORCHID1
+colors['darkorchid2'] = DARKORCHID2
+colors['darkorchid3'] = DARKORCHID3
+colors['darkorchid4'] = DARKORCHID4
+colors['darksalmon'] = DARKSALMON
+colors['darkseagreen'] = DARKSEAGREEN
+colors['darkseagreen1'] = DARKSEAGREEN1
+colors['darkseagreen2'] = DARKSEAGREEN2
+colors['darkseagreen3'] = DARKSEAGREEN3
+colors['darkseagreen4'] = DARKSEAGREEN4
+colors['darkslateblue'] = DARKSLATEBLUE
+colors['darkslategray'] = DARKSLATEGRAY
+colors['darkslategray1'] = DARKSLATEGRAY1
+colors['darkslategray2'] = DARKSLATEGRAY2
+colors['darkslategray3'] = DARKSLATEGRAY3
+colors['darkslategray4'] = DARKSLATEGRAY4
+colors['darkturquoise'] = DARKTURQUOISE
+colors['darkviolet'] = DARKVIOLET
+colors['deeppink1'] = DEEPPINK1
+colors['deeppink2'] = DEEPPINK2
+colors['deeppink3'] = DEEPPINK3
+colors['deeppink4'] = DEEPPINK4
+colors['deepskyblue1'] = DEEPSKYBLUE1
+colors['deepskyblue2'] = DEEPSKYBLUE2
+colors['deepskyblue3'] = DEEPSKYBLUE3
+colors['deepskyblue4'] = DEEPSKYBLUE4
+colors['dimgray'] = DIMGRAY
+colors['dodgerblue1'] = DODGERBLUE1
+colors['dodgerblue2'] = DODGERBLUE2
+colors['dodgerblue3'] = DODGERBLUE3
+colors['dodgerblue4'] = DODGERBLUE4
+colors['eggshell'] = EGGSHELL
+colors['emeraldgreen'] = EMERALDGREEN
+colors['firebrick'] = FIREBRICK
+colors['firebrick1'] = FIREBRICK1
+colors['firebrick2'] = FIREBRICK2
+colors['firebrick3'] = FIREBRICK3
+colors['firebrick4'] = FIREBRICK4
+colors['flesh'] = FLESH
+colors['floralwhite'] = FLORALWHITE
+colors['forestgreen'] = FORESTGREEN
+colors['gainsboro'] = GAINSBORO
+colors['ghostwhite'] = GHOSTWHITE
+colors['gold1'] = GOLD1
+colors['gold2'] = GOLD2
+colors['gold3'] = GOLD3
+colors['gold4'] = GOLD4
+colors['goldenrod'] = GOLDENROD
+colors['goldenrod1'] = GOLDENROD1
+colors['goldenrod2'] = GOLDENROD2
+colors['goldenrod3'] = GOLDENROD3
+colors['goldenrod4'] = GOLDENROD4
+colors['gray'] = GRAY
+colors['gray1'] = GRAY1
+colors['gray10'] = GRAY10
+colors['gray11'] = GRAY11
+colors['gray12'] = GRAY12
+colors['gray13'] = GRAY13
+colors['gray14'] = GRAY14
+colors['gray15'] = GRAY15
+colors['gray16'] = GRAY16
+colors['gray17'] = GRAY17
+colors['gray18'] = GRAY18
+colors['gray19'] = GRAY19
+colors['gray2'] = GRAY2
+colors['gray20'] = GRAY20
+colors['gray21'] = GRAY21
+colors['gray22'] = GRAY22
+colors['gray23'] = GRAY23
+colors['gray24'] = GRAY24
+colors['gray25'] = GRAY25
+colors['gray26'] = GRAY26
+colors['gray27'] = GRAY27
+colors['gray28'] = GRAY28
+colors['gray29'] = GRAY29
+colors['gray3'] = GRAY3
+colors['gray30'] = GRAY30
+colors['gray31'] = GRAY31
+colors['gray32'] = GRAY32
+colors['gray33'] = GRAY33
+colors['gray34'] = GRAY34
+colors['gray35'] = GRAY35
+colors['gray36'] = GRAY36
+colors['gray37'] = GRAY37
+colors['gray38'] = GRAY38
+colors['gray39'] = GRAY39
+colors['gray4'] = GRAY4
+colors['gray40'] = GRAY40
+colors['gray42'] = GRAY42
+colors['gray43'] = GRAY43
+colors['gray44'] = GRAY44
+colors['gray45'] = GRAY45
+colors['gray46'] = GRAY46
+colors['gray47'] = GRAY47
+colors['gray48'] = GRAY48
+colors['gray49'] = GRAY49
+colors['gray5'] = GRAY5
+colors['gray50'] = GRAY50
+colors['gray51'] = GRAY51
+colors['gray52'] = GRAY52
+colors['gray53'] = GRAY53
+colors['gray54'] = GRAY54
+colors['gray55'] = GRAY55
+colors['gray56'] = GRAY56
+colors['gray57'] = GRAY57
+colors['gray58'] = GRAY58
+colors['gray59'] = GRAY59
+colors['gray6'] = GRAY6
+colors['gray60'] = GRAY60
+colors['gray61'] = GRAY61
+colors['gray62'] = GRAY62
+colors['gray63'] = GRAY63
+colors['gray64'] = GRAY64
+colors['gray65'] = GRAY65
+colors['gray66'] = GRAY66
+colors['gray67'] = GRAY67
+colors['gray68'] = GRAY68
+colors['gray69'] = GRAY69
+colors['gray7'] = GRAY7
+colors['gray70'] = GRAY70
+colors['gray71'] = GRAY71
+colors['gray72'] = GRAY72
+colors['gray73'] = GRAY73
+colors['gray74'] = GRAY74
+colors['gray75'] = GRAY75
+colors['gray76'] = GRAY76
+colors['gray77'] = GRAY77
+colors['gray78'] = GRAY78
+colors['gray79'] = GRAY79
+colors['gray8'] = GRAY8
+colors['gray80'] = GRAY80
+colors['gray81'] = GRAY81
+colors['gray82'] = GRAY82
+colors['gray83'] = GRAY83
+colors['gray84'] = GRAY84
+colors['gray85'] = GRAY85
+colors['gray86'] = GRAY86
+colors['gray87'] = GRAY87
+colors['gray88'] = GRAY88
+colors['gray89'] = GRAY89
+colors['gray9'] = GRAY9
+colors['gray90'] = GRAY90
+colors['gray91'] = GRAY91
+colors['gray92'] = GRAY92
+colors['gray93'] = GRAY93
+colors['gray94'] = GRAY94
+colors['gray95'] = GRAY95
+colors['gray97'] = GRAY97
+colors['gray98'] = GRAY98
+colors['gray99'] = GRAY99
+colors['green'] = GREEN
+colors['green1'] = GREEN1
+colors['green2'] = GREEN2
+colors['green3'] = GREEN3
+colors['green4'] = GREEN4
+colors['greenyellow'] = GREENYELLOW
+colors['honeydew1'] = HONEYDEW1
+colors['honeydew2'] = HONEYDEW2
+colors['honeydew3'] = HONEYDEW3
+colors['honeydew4'] = HONEYDEW4
+colors['hotpink'] = HOTPINK
+colors['hotpink1'] = HOTPINK1
+colors['hotpink2'] = HOTPINK2
+colors['hotpink3'] = HOTPINK3
+colors['hotpink4'] = HOTPINK4
+colors['indianred'] = INDIANRED
+colors['indianred1'] = INDIANRED1
+colors['indianred2'] = INDIANRED2
+colors['indianred3'] = INDIANRED3
+colors['indianred4'] = INDIANRED4
+colors['indigo'] = INDIGO
+colors['ivory1'] = IVORY1
+colors['ivory2'] = IVORY2
+colors['ivory3'] = IVORY3
+colors['ivory4'] = IVORY4
+colors['ivoryblack'] = IVORYBLACK
+colors['khaki'] = KHAKI
+colors['khaki1'] = KHAKI1
+colors['khaki2'] = KHAKI2
+colors['khaki3'] = KHAKI3
+colors['khaki4'] = KHAKI4
+colors['lavender'] = LAVENDER
+colors['lavenderblush1'] = LAVENDERBLUSH1
+colors['lavenderblush2'] = LAVENDERBLUSH2
+colors['lavenderblush3'] = LAVENDERBLUSH3
+colors['lavenderblush4'] = LAVENDERBLUSH4
+colors['lawngreen'] = LAWNGREEN
+colors['lemonchiffon1'] = LEMONCHIFFON1
+colors['lemonchiffon2'] = LEMONCHIFFON2
+colors['lemonchiffon3'] = LEMONCHIFFON3
+colors['lemonchiffon4'] = LEMONCHIFFON4
+colors['lightblue'] = LIGHTBLUE
+colors['lightblue1'] = LIGHTBLUE1
+colors['lightblue2'] = LIGHTBLUE2
+colors['lightblue3'] = LIGHTBLUE3
+colors['lightblue4'] = LIGHTBLUE4
+colors['lightcoral'] = LIGHTCORAL
+colors['lightcyan1'] = LIGHTCYAN1
+colors['lightcyan2'] = LIGHTCYAN2
+colors['lightcyan3'] = LIGHTCYAN3
+colors['lightcyan4'] = LIGHTCYAN4
+colors['lightgoldenrod1'] = LIGHTGOLDENROD1
+colors['lightgoldenrod2'] = LIGHTGOLDENROD2
+colors['lightgoldenrod3'] = LIGHTGOLDENROD3
+colors['lightgoldenrod4'] = LIGHTGOLDENROD4
+colors['lightgoldenrodyellow'] = LIGHTGOLDENRODYELLOW
+colors['lightgrey'] = LIGHTGREY
+colors['lightpink'] = LIGHTPINK
+colors['lightpink1'] = LIGHTPINK1
+colors['lightpink2'] = LIGHTPINK2
+colors['lightpink3'] = LIGHTPINK3
+colors['lightpink4'] = LIGHTPINK4
+colors['lightsalmon1'] = LIGHTSALMON1
+colors['lightsalmon2'] = LIGHTSALMON2
+colors['lightsalmon3'] = LIGHTSALMON3
+colors['lightsalmon4'] = LIGHTSALMON4
+colors['lightseagreen'] = LIGHTSEAGREEN
+colors['lightskyblue'] = LIGHTSKYBLUE
+colors['lightskyblue1'] = LIGHTSKYBLUE1
+colors['lightskyblue2'] = LIGHTSKYBLUE2
+colors['lightskyblue3'] = LIGHTSKYBLUE3
+colors['lightskyblue4'] = LIGHTSKYBLUE4
+colors['lightslateblue'] = LIGHTSLATEBLUE
+colors['lightslategray'] = LIGHTSLATEGRAY
+colors['lightsteelblue'] = LIGHTSTEELBLUE
+colors['lightsteelblue1'] = LIGHTSTEELBLUE1
+colors['lightsteelblue2'] = LIGHTSTEELBLUE2
+colors['lightsteelblue3'] = LIGHTSTEELBLUE3
+colors['lightsteelblue4'] = LIGHTSTEELBLUE4
+colors['lightyellow1'] = LIGHTYELLOW1
+colors['lightyellow2'] = LIGHTYELLOW2
+colors['lightyellow3'] = LIGHTYELLOW3
+colors['lightyellow4'] = LIGHTYELLOW4
+colors['limegreen'] = LIMEGREEN
+colors['linen'] = LINEN
+colors['magenta'] = MAGENTA
+colors['magenta2'] = MAGENTA2
+colors['magenta3'] = MAGENTA3
+colors['magenta4'] = MAGENTA4
+colors['manganeseblue'] = MANGANESEBLUE
+colors['maroon'] = MAROON
+colors['maroon1'] = MAROON1
+colors['maroon2'] = MAROON2
+colors['maroon3'] = MAROON3
+colors['maroon4'] = MAROON4
+colors['mediumorchid'] = MEDIUMORCHID
+colors['mediumorchid1'] = MEDIUMORCHID1
+colors['mediumorchid2'] = MEDIUMORCHID2
+colors['mediumorchid3'] = MEDIUMORCHID3
+colors['mediumorchid4'] = MEDIUMORCHID4
+colors['mediumpurple'] = MEDIUMPURPLE
+colors['mediumpurple1'] = MEDIUMPURPLE1
+colors['mediumpurple2'] = MEDIUMPURPLE2
+colors['mediumpurple3'] = MEDIUMPURPLE3
+colors['mediumpurple4'] = MEDIUMPURPLE4
+colors['mediumseagreen'] = MEDIUMSEAGREEN
+colors['mediumslateblue'] = MEDIUMSLATEBLUE
+colors['mediumspringgreen'] = MEDIUMSPRINGGREEN
+colors['mediumturquoise'] = MEDIUMTURQUOISE
+colors['mediumvioletred'] = MEDIUMVIOLETRED
+colors['melon'] = MELON
+colors['midnightblue'] = MIDNIGHTBLUE
+colors['mint'] = MINT
+colors['mintcream'] = MINTCREAM
+colors['mistyrose1'] = MISTYROSE1
+colors['mistyrose2'] = MISTYROSE2
+colors['mistyrose3'] = MISTYROSE3
+colors['mistyrose4'] = MISTYROSE4
+colors['moccasin'] = MOCCASIN
+colors['navajowhite1'] = NAVAJOWHITE1
+colors['navajowhite2'] = NAVAJOWHITE2
+colors['navajowhite3'] = NAVAJOWHITE3
+colors['navajowhite4'] = NAVAJOWHITE4
+colors['navy'] = NAVY
+colors['oldlace'] = OLDLACE
+colors['olive'] = OLIVE
+colors['olivedrab'] = OLIVEDRAB
+colors['olivedrab1'] = OLIVEDRAB1
+colors['olivedrab2'] = OLIVEDRAB2
+colors['olivedrab3'] = OLIVEDRAB3
+colors['olivedrab4'] = OLIVEDRAB4
+colors['orange'] = ORANGE
+colors['orange1'] = ORANGE1
+colors['orange2'] = ORANGE2
+colors['orange3'] = ORANGE3
+colors['orange4'] = ORANGE4
+colors['orangered1'] = ORANGERED1
+colors['orangered2'] = ORANGERED2
+colors['orangered3'] = ORANGERED3
+colors['orangered4'] = ORANGERED4
+colors['orchid'] = ORCHID
+colors['orchid1'] = ORCHID1
+colors['orchid2'] = ORCHID2
+colors['orchid3'] = ORCHID3
+colors['orchid4'] = ORCHID4
+colors['palegoldenrod'] = PALEGOLDENROD
+colors['palegreen'] = PALEGREEN
+colors['palegreen1'] = PALEGREEN1
+colors['palegreen2'] = PALEGREEN2
+colors['palegreen3'] = PALEGREEN3
+colors['palegreen4'] = PALEGREEN4
+colors['paleturquoise1'] = PALETURQUOISE1
+colors['paleturquoise2'] = PALETURQUOISE2
+colors['paleturquoise3'] = PALETURQUOISE3
+colors['paleturquoise4'] = PALETURQUOISE4
+colors['palevioletred'] = PALEVIOLETRED
+colors['palevioletred1'] = PALEVIOLETRED1
+colors['palevioletred2'] = PALEVIOLETRED2
+colors['palevioletred3'] = PALEVIOLETRED3
+colors['palevioletred4'] = PALEVIOLETRED4
+colors['papayawhip'] = PAPAYAWHIP
+colors['peachpuff1'] = PEACHPUFF1
+colors['peachpuff2'] = PEACHPUFF2
+colors['peachpuff3'] = PEACHPUFF3
+colors['peachpuff4'] = PEACHPUFF4
+colors['peacock'] = PEACOCK
+colors['pink'] = PINK
+colors['pink1'] = PINK1
+colors['pink2'] = PINK2
+colors['pink3'] = PINK3
+colors['pink4'] = PINK4
+colors['plum'] = PLUM
+colors['plum1'] = PLUM1
+colors['plum2'] = PLUM2
+colors['plum3'] = PLUM3
+colors['plum4'] = PLUM4
+colors['powderblue'] = POWDERBLUE
+colors['purple'] = PURPLE
+colors['purple1'] = PURPLE1
+colors['purple2'] = PURPLE2
+colors['purple3'] = PURPLE3
+colors['purple4'] = PURPLE4
+colors['raspberry'] = RASPBERRY
+colors['rawsienna'] = RAWSIENNA
+colors['red1'] = RED1
+colors['red2'] = RED2
+colors['red3'] = RED3
+colors['red4'] = RED4
+colors['rosybrown'] = ROSYBROWN
+colors['rosybrown1'] = ROSYBROWN1
+colors['rosybrown2'] = ROSYBROWN2
+colors['rosybrown3'] = ROSYBROWN3
+colors['rosybrown4'] = ROSYBROWN4
+colors['royalblue'] = ROYALBLUE
+colors['royalblue1'] = ROYALBLUE1
+colors['royalblue2'] = ROYALBLUE2
+colors['royalblue3'] = ROYALBLUE3
+colors['royalblue4'] = ROYALBLUE4
+colors['salmon'] = SALMON
+colors['salmon1'] = SALMON1
+colors['salmon2'] = SALMON2
+colors['salmon3'] = SALMON3
+colors['salmon4'] = SALMON4
+colors['sandybrown'] = SANDYBROWN
+colors['sapgreen'] = SAPGREEN
+colors['seagreen1'] = SEAGREEN1
+colors['seagreen2'] = SEAGREEN2
+colors['seagreen3'] = SEAGREEN3
+colors['seagreen4'] = SEAGREEN4
+colors['seashell1'] = SEASHELL1
+colors['seashell2'] = SEASHELL2
+colors['seashell3'] = SEASHELL3
+colors['seashell4'] = SEASHELL4
+colors['sepia'] = SEPIA
+colors['sgibeet'] = SGIBEET
+colors['sgibrightgray'] = SGIBRIGHTGRAY
+colors['sgichartreuse'] = SGICHARTREUSE
+colors['sgidarkgray'] = SGIDARKGRAY
+colors['sgigray12'] = SGIGRAY12
+colors['sgigray16'] = SGIGRAY16
+colors['sgigray32'] = SGIGRAY32
+colors['sgigray36'] = SGIGRAY36
+colors['sgigray52'] = SGIGRAY52
+colors['sgigray56'] = SGIGRAY56
+colors['sgigray72'] = SGIGRAY72
+colors['sgigray76'] = SGIGRAY76
+colors['sgigray92'] = SGIGRAY92
+colors['sgigray96'] = SGIGRAY96
+colors['sgilightblue'] = SGILIGHTBLUE
+colors['sgilightgray'] = SGILIGHTGRAY
+colors['sgiolivedrab'] = SGIOLIVEDRAB
+colors['sgisalmon'] = SGISALMON
+colors['sgislateblue'] = SGISLATEBLUE
+colors['sgiteal'] = SGITEAL
+colors['sienna'] = SIENNA
+colors['sienna1'] = SIENNA1
+colors['sienna2'] = SIENNA2
+colors['sienna3'] = SIENNA3
+colors['sienna4'] = SIENNA4
+colors['silver'] = SILVER
+colors['skyblue'] = SKYBLUE
+colors['skyblue1'] = SKYBLUE1
+colors['skyblue2'] = SKYBLUE2
+colors['skyblue3'] = SKYBLUE3
+colors['skyblue4'] = SKYBLUE4
+colors['slateblue'] = SLATEBLUE
+colors['slateblue1'] = SLATEBLUE1
+colors['slateblue2'] = SLATEBLUE2
+colors['slateblue3'] = SLATEBLUE3
+colors['slateblue4'] = SLATEBLUE4
+colors['slategray'] = SLATEGRAY
+colors['slategray1'] = SLATEGRAY1
+colors['slategray2'] = SLATEGRAY2
+colors['slategray3'] = SLATEGRAY3
+colors['slategray4'] = SLATEGRAY4
+colors['snow1'] = SNOW1
+colors['snow2'] = SNOW2
+colors['snow3'] = SNOW3
+colors['snow4'] = SNOW4
+colors['springgreen'] = SPRINGGREEN
+colors['springgreen1'] = SPRINGGREEN1
+colors['springgreen2'] = SPRINGGREEN2
+colors['springgreen3'] = SPRINGGREEN3
+colors['steelblue'] = STEELBLUE
+colors['steelblue1'] = STEELBLUE1
+colors['steelblue2'] = STEELBLUE2
+colors['steelblue3'] = STEELBLUE3
+colors['steelblue4'] = STEELBLUE4
+colors['tan'] = TAN
+colors['tan1'] = TAN1
+colors['tan2'] = TAN2
+colors['tan3'] = TAN3
+colors['tan4'] = TAN4
+colors['teal'] = TEAL
+colors['thistle'] = THISTLE
+colors['thistle1'] = THISTLE1
+colors['thistle2'] = THISTLE2
+colors['thistle3'] = THISTLE3
+colors['thistle4'] = THISTLE4
+colors['tomato1'] = TOMATO1
+colors['tomato2'] = TOMATO2
+colors['tomato3'] = TOMATO3
+colors['tomato4'] = TOMATO4
+colors['turquoise'] = TURQUOISE
+colors['turquoise1'] = TURQUOISE1
+colors['turquoise2'] = TURQUOISE2
+colors['turquoise3'] = TURQUOISE3
+colors['turquoise4'] = TURQUOISE4
+colors['turquoiseblue'] = TURQUOISEBLUE
+colors['violet'] = VIOLET
+colors['violetred'] = VIOLETRED
+colors['violetred1'] = VIOLETRED1
+colors['violetred2'] = VIOLETRED2
+colors['violetred3'] = VIOLETRED3
+colors['violetred4'] = VIOLETRED4
+colors['warmgrey'] = WARMGREY
+colors['wheat'] = WHEAT
+colors['wheat1'] = WHEAT1
+colors['wheat2'] = WHEAT2
+colors['wheat3'] = WHEAT3
+colors['wheat4'] = WHEAT4
+colors['white'] = WHITE
+colors['whitesmoke'] = WHITESMOKE
+colors['yellow1'] = YELLOW1
+colors['yellow2'] = YELLOW2
+colors['yellow3'] = YELLOW3
+colors['yellow4'] = YELLOW4
+
+colors = OrderedDict(sorted(colors.items(), key=lambda t: t[0]))
diff --git a/datasets/general_dataset.py b/datasets/general_dataset.py
new file mode 100644
index 0000000..e5b369d
--- /dev/null
+++ b/datasets/general_dataset.py
@@ -0,0 +1,207 @@
+import os
+import numpy as np
+import datasets.color_constants as cc
+from tools.lazy_decorator import *
+from typing import Tuple, List, Dict
+import logging
+
+
+class GeneralDataset:
+ """
+ Class used for reading in datasets for training/testing.
+ Parameterized in order to handle different kinds of datasets (e.g. k-fold datasets)
+ """
+
+ @property
+ def data_path(self) -> str:
+ return self._data_path
+
+ @property
+ def data(self) -> List[np.ndarray]:
+ return self._data
+
+ @property
+ def full_sized_data(self) -> Dict[str, np.ndarray]:
+ return self._full_sized_data
+
+ @property
+ def file_names(self) -> List[str]:
+ return self._file_names
+
+ @property
+ def train_pc_idx(self) -> List[int]:
+ return self._train_pc_idx
+
+ @property
+ def test_pc_idx(self) -> List[int]:
+ return self._test_pc_idx
+
+ def __init__(self, data_path: str, is_train: bool, test_sets: list,
+ downsample_prefix: str, is_colors: bool, is_laser: bool, n_classes=None):
+ self._test_sets = test_sets
+ self._downsample_prefix = downsample_prefix
+ self._is_colors = is_colors
+ self._is_laser = is_laser
+
+        # test sets may come without class information in their npy files;
+        # in that case the number of classes has to be passed in explicitly
+ if n_classes is None:
+ self._is_class = True
+ else:
+ self._is_class = False
+ self._num_classes = n_classes
+
+ self._data_path = data_path
+ self._data, self._file_names, self._full_sized_data = self._load(is_train)
+
+ # log some dataset properties
+ logging.debug(f"number of features: {self.num_features}")
+ logging.debug(f"number of classes: {self.num_classes}")
+ logging.debug(f"number of training samples: {len(self.train_pc_idx)}")
+ logging.debug(f"number of test samples: {len(self.test_pc_idx)}")
+
+ @lazy_property
+ def num_classes(self) -> int:
+ """
+        calculate the number of unique class labels if class information is given in the npy files.
+        Otherwise, just return the number of classes that was defined in the constructor
+ :return: number of classes for this dataset
+ """
+ if self._is_class:
+ # assuming that labels are in the last column
+ # counting unique class labels of all pointclouds
+ _num_classes = len(np.unique(np.concatenate([np.unique(pointcloud[:, -1])
+ for pointcloud in self.data])))
+
+ if _num_classes > len(self.label_colors()):
+ logging.warning(f"There are more classes than label colors for this dataset. "
+ f"If you want to plot your results, this will not work.")
+
+ return _num_classes
+ else:
+ return self._num_classes
+
+ @lazy_property
+ def normalization(self) -> np.ndarray:
+ """
+        before a blob is fed into the neural network, some normalization takes place in the batch generator
+        normalization factors specific to each dataset have to be provided
+        note: this property can be overridden by subclasses if another normalization is needed
+ :return: np.ndarray with normalization factors
+ """
+ _normalizer = np.array([1. for _ in range(self.num_features)])
+
+ if self._is_colors:
+ _normalizer[3:6] = 255. # normalize colors to [0,1]
+ if self._is_laser:
+ _normalizer[6] = 2048. # normalize laser [-1, 1]
+ elif self._is_laser:
+ _normalizer[3] = 2048. # normalize laser [-1, 1]
+
+ return _normalizer
+
+ @lazy_property
+ def num_features(self) -> int:
+ return 3 + self._is_colors * 3 + self._is_laser
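+
+    # feature layouts implied by the flags (illustrative sketch):
+    #   xyz only:             [x, y, z]                     -> num_features == 3
+    #   xyz + laser:          [x, y, z, intensity]          -> num_features == 4
+    #   xyz + colors:         [x, y, z, r, g, b]            -> num_features == 6
+    #   xyz + colors + laser: [x, y, z, r, g, b, intensity] -> num_features == 7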
+
+ @staticmethod
+ def label_colors() -> np.ndarray:
+ return np.array([cc.colors['brown'].npy,
+ cc.colors['darkgreen'].npy,
+ cc.colors['springgreen'].npy,
+ cc.colors['red1'].npy,
+ cc.colors['darkgray'].npy,
+ cc.colors['gray'].npy,
+ cc.colors['pink'].npy,
+ cc.colors['yellow1'].npy,
+ cc.colors['violet'].npy,
+ cc.colors['hotpink'].npy,
+ cc.colors['blue'].npy,
+ cc.colors['lightblue'].npy,
+ cc.colors['orange'].npy,
+ cc.colors['black'].npy])
+
+ def _load(self, is_train: bool) -> Tuple[List[np.ndarray], List[str], Dict[str, np.ndarray]]:
+ """
+ Note that we assume a folder hierarchy of DATA_PATH/SET_NO/{full_size, sample_X_Y, ...}/POINTCLOUD.npy
+ :param is_train: true iff training mode
+ :return: list of pointclouds and list of filenames
+ """
+ data_training_test = {}
+ full_sized_test_data = {}
+ names = set()
+
+ train_pc_names = set()
+ test_pc_names = set()
+
+        # column indices to pick from each npy row (xyz first, label last);
+        # the laser column depends on whether colors are present (cf. normalization)
+        pick = [0, 1, 2]
+
+        if self._is_colors:
+            pick = pick + [3, 4, 5]
+
+            if self._is_laser:
+                pick = pick + [6]
+        elif self._is_laser:
+            pick = pick + [3]
+
+        pick = pick + [-1]
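+        # e.g. with colors and laser this selects columns [0, 1, 2, 3, 4, 5, 6, -1]
+        # (xyz, rgb, intensity, label); with laser only, columns [0, 1, 2, 3, -1]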
+
+ for dirpath, dirnames, filenames in os.walk(self.data_path):
+ for filename in [f for f in filenames if f.endswith(".npy")]:
+                is_test_set = os.path.basename(os.path.dirname(dirpath)) in self._test_sets
+
+ if not is_test_set and not is_train:
+ # we do not have to load training examples if we only want to evaluate
+ continue
+
+ name = None
+ if os.path.basename(dirpath) == self._downsample_prefix:
+ # dimension of a single npy file: (number of points, number of features + label)
+ pointcloud_data = np.load(os.path.join(dirpath, filename))
+ pointcloud_data = pointcloud_data[:, pick]
+ pointcloud_data = pointcloud_data.astype(np.float32) # just to be sure!
+
+ name = filename.replace('.npy', '')
+ data_training_test[name] = pointcloud_data
+ elif os.path.basename(dirpath) == 'full_size':
+ if not is_train:
+ # for testing we consider full scale point clouds
+ if is_test_set:
+ # dimension of a single npy file: (number of points, number of features + label)
+ pointcloud_data = np.load(os.path.join(dirpath, filename))
+ pointcloud_data = pointcloud_data[:, pick]
+ pointcloud_data = pointcloud_data.astype(np.float32) # just to be sure!
+
+ name = filename.replace('.npy', '')
+ full_sized_test_data[name] = pointcloud_data
+
+ if name is not None:
+ names.add(name)
+
+ if is_test_set:
+ test_pc_names.add(name)
+ else:
+ train_pc_names.add(name)
+
+ names = sorted(names)
+
+ data_training_test = [data_training_test[key] for key in names]
+
+ self._train_pc_idx = sorted([names.index(name) for name in train_pc_names])
+ self._test_pc_idx = sorted([names.index(name) for name in test_pc_names])
+
+ return data_training_test, names, full_sized_test_data
+
+
+if __name__ == '__main__':
+ from tools.tools import setup_logger
+
+ setup_logger()
+
+ dataset = GeneralDataset(data_path='/fastwork/schult/stanford_indoor',
+ is_train=False,
+ test_sets=['area_3', 'area_2'],
+ downsample_prefix='sample_1_1',
+ is_colors=True,
+ is_laser=True)
diff --git a/doc/exploring_header.png b/doc/exploring_header.png
new file mode 100644
index 0000000..d8a2e08
Binary files /dev/null and b/doc/exploring_header.png differ
diff --git a/experiments/iccvw_paper_2017/s3dis_gru/s3dis_gru_area_1.yaml b/experiments/iccvw_paper_2017/s3dis_gru/s3dis_gru_area_1.yaml
new file mode 100644
index 0000000..4b21529
--- /dev/null
+++ b/experiments/iccvw_paper_2017/s3dis_gru/s3dis_gru_area_1.yaml
@@ -0,0 +1,32 @@
+modus: TRAIN_VAL
+dataset:
+ name: general_dataset
+ num_classes: 13
+ data_path: dataset/stanford_indoor/
+ test_sets: ['Area_1']
+ downsample_prefix: sample_0.03
+ colors: True
+ laser: False
+model:
+ name: gru_neighbor_model
+ params: None
+batch_generator:
+ name: neighboring_grid_batch_generator
+ params:
+ batch_size: 14
+ num_points: 4096
+ blob_size: 0.5
+ metric: chebyshev
+ augmentation: False
+ grid_x: 2
+ grid_y: 2
+ radius: 0.5
+optimizer:
+ name: exponential_decay_adam
+ params:
+ initial_lr: 0.001
+ decay_step: 300000
+ decay_rate: 0.5
+ gradient_clipping: 15
+train:
+ epochs: 61
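+# note: blob_size and radius are presumably given in meters; grid_x * grid_y sets
+# the number of neighboring blocks fed to the GRU, and decay_step appears to be
+# counted in seen training examples rather than optimizer steps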
diff --git a/experiments/iccvw_paper_2017/s3dis_gru/s3dis_gru_area_2.yaml b/experiments/iccvw_paper_2017/s3dis_gru/s3dis_gru_area_2.yaml
new file mode 100644
index 0000000..e248312
--- /dev/null
+++ b/experiments/iccvw_paper_2017/s3dis_gru/s3dis_gru_area_2.yaml
@@ -0,0 +1,32 @@
+modus: TRAIN_VAL
+dataset:
+ name: general_dataset
+ num_classes: 13
+ data_path: dataset/stanford_indoor/
+ test_sets: ['Area_2']
+ downsample_prefix: sample_0.03
+ colors: True
+ laser: False
+model:
+ name: gru_neighbor_model
+ params: None
+batch_generator:
+ name: neighboring_grid_batch_generator
+ params:
+ batch_size: 14
+ num_points: 4096
+ blob_size: 0.5
+ metric: chebyshev
+ augmentation: False
+ grid_x: 2
+ grid_y: 2
+ radius: 0.5
+optimizer:
+ name: exponential_decay_adam
+ params:
+ initial_lr: 0.001
+ decay_step: 300000
+ decay_rate: 0.5
+ gradient_clipping: 15
+train:
+ epochs: 61
diff --git a/experiments/iccvw_paper_2017/s3dis_gru/s3dis_gru_area_3.yaml b/experiments/iccvw_paper_2017/s3dis_gru/s3dis_gru_area_3.yaml
new file mode 100644
index 0000000..2868a64
--- /dev/null
+++ b/experiments/iccvw_paper_2017/s3dis_gru/s3dis_gru_area_3.yaml
@@ -0,0 +1,32 @@
+modus: TRAIN_VAL
+dataset:
+ name: general_dataset
+ num_classes: 13
+ data_path: dataset/stanford_indoor/
+ test_sets: ['Area_3']
+ downsample_prefix: sample_0.03
+ colors: True
+ laser: False
+model:
+ name: gru_neighbor_model
+ params: None
+batch_generator:
+ name: neighboring_grid_batch_generator
+ params:
+ batch_size: 14
+ num_points: 4096
+ blob_size: 0.5
+ metric: chebyshev
+ augmentation: False
+ grid_x: 2
+ grid_y: 2
+ radius: 0.5
+optimizer:
+ name: exponential_decay_adam
+ params:
+ initial_lr: 0.001
+ decay_step: 300000
+ decay_rate: 0.5
+ gradient_clipping: 15
+train:
+ epochs: 61
diff --git a/experiments/iccvw_paper_2017/s3dis_gru/s3dis_gru_area_4.yaml b/experiments/iccvw_paper_2017/s3dis_gru/s3dis_gru_area_4.yaml
new file mode 100644
index 0000000..f5bedd4
--- /dev/null
+++ b/experiments/iccvw_paper_2017/s3dis_gru/s3dis_gru_area_4.yaml
@@ -0,0 +1,32 @@
+modus: TRAIN_VAL
+dataset:
+ name: general_dataset
+ num_classes: 13
+ data_path: dataset/stanford_indoor/
+ test_sets: ['Area_4']
+ downsample_prefix: sample_0.03
+ colors: True
+ laser: False
+model:
+ name: gru_neighbor_model
+ params: None
+batch_generator:
+ name: neighboring_grid_batch_generator
+ params:
+ batch_size: 14
+ num_points: 4096
+ blob_size: 0.5
+ metric: chebyshev
+ augmentation: False
+ grid_x: 2
+ grid_y: 2
+ radius: 0.5
+optimizer:
+ name: exponential_decay_adam
+ params:
+ initial_lr: 0.001
+ decay_step: 300000
+ decay_rate: 0.5
+ gradient_clipping: 15
+train:
+ epochs: 61
diff --git a/experiments/iccvw_paper_2017/s3dis_gru/s3dis_gru_area_5.yaml b/experiments/iccvw_paper_2017/s3dis_gru/s3dis_gru_area_5.yaml
new file mode 100644
index 0000000..add4a43
--- /dev/null
+++ b/experiments/iccvw_paper_2017/s3dis_gru/s3dis_gru_area_5.yaml
@@ -0,0 +1,32 @@
+modus: TRAIN_VAL
+dataset:
+ name: general_dataset
+ num_classes: 13
+ data_path: dataset/stanford_indoor/
+ test_sets: ['Area_5']
+ downsample_prefix: sample_0.03
+ colors: True
+ laser: False
+model:
+ name: gru_neighbor_model
+ params: None
+batch_generator:
+ name: neighboring_grid_batch_generator
+ params:
+ batch_size: 14
+ num_points: 4096
+ blob_size: 0.5
+ metric: chebyshev
+ augmentation: False
+ grid_x: 2
+ grid_y: 2
+ radius: 0.5
+optimizer:
+ name: exponential_decay_adam
+ params:
+ initial_lr: 0.001
+ decay_step: 300000
+ decay_rate: 0.5
+ gradient_clipping: 15
+train:
+ epochs: 61
diff --git a/experiments/iccvw_paper_2017/s3dis_gru/s3dis_gru_area_6.yaml b/experiments/iccvw_paper_2017/s3dis_gru/s3dis_gru_area_6.yaml
new file mode 100644
index 0000000..b81ac18
--- /dev/null
+++ b/experiments/iccvw_paper_2017/s3dis_gru/s3dis_gru_area_6.yaml
@@ -0,0 +1,32 @@
+modus: TRAIN_VAL
+dataset:
+ name: general_dataset
+ num_classes: 13
+ data_path: dataset/stanford_indoor/
+ test_sets: ['Area_6']
+ downsample_prefix: sample_0.03
+ colors: True
+ laser: False
+model:
+ name: gru_neighbor_model
+ params: None
+batch_generator:
+ name: neighboring_grid_batch_generator
+ params:
+ batch_size: 14
+ num_points: 4096
+ blob_size: 0.5
+ metric: chebyshev
+ augmentation: False
+ grid_x: 2
+ grid_y: 2
+ radius: 0.5
+optimizer:
+ name: exponential_decay_adam
+ params:
+ initial_lr: 0.001
+ decay_step: 300000
+ decay_rate: 0.5
+ gradient_clipping: 15
+train:
+ epochs: 61
diff --git a/experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_1.yaml b/experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_1.yaml
new file mode 100644
index 0000000..7059001
--- /dev/null
+++ b/experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_1.yaml
@@ -0,0 +1,29 @@
+modus: TRAIN_VAL
+dataset:
+ name: general_dataset
+ num_classes: 13
+ data_path: dataset/stanford_indoor/
+ test_sets: ['Area_1']
+ downsample_prefix: sample_0.03
+ colors: True
+ laser: False
+model:
+ name: multi_scale_cu_model
+batch_generator:
+ name: multi_scale_batch_generator
+ params:
+ batch_size: 24
+ num_points: 4096
+ blob_size: 0.5
+ metric: chebyshev
+ augmentation: False
+ root_cache_path: cache
+ radii: [0.25, 0.5, 1.0]
+optimizer:
+ name: exponential_decay_adam
+ params:
+ initial_lr: 0.001
+ decay_step: 300000
+ decay_rate: 0.5
+train:
+ epochs: 61
\ No newline at end of file
diff --git a/experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_2.yaml b/experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_2.yaml
new file mode 100644
index 0000000..7ff0194
--- /dev/null
+++ b/experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_2.yaml
@@ -0,0 +1,29 @@
+modus: TRAIN_VAL
+dataset:
+ name: general_dataset
+ num_classes: 13
+ data_path: dataset/stanford_indoor/
+ test_sets: ['Area_2']
+ downsample_prefix: sample_0.03
+ colors: True
+ laser: False
+model:
+ name: multi_scale_cu_model
+batch_generator:
+ name: multi_scale_batch_generator
+ params:
+ batch_size: 24
+ num_points: 4096
+ blob_size: 0.5
+ metric: chebyshev
+ augmentation: False
+ root_cache_path: cache
+ radii: [0.25, 0.5, 1.0]
+optimizer:
+ name: exponential_decay_adam
+ params:
+ initial_lr: 0.001
+ decay_step: 300000
+ decay_rate: 0.5
+train:
+ epochs: 61
\ No newline at end of file
diff --git a/experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_3.yaml b/experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_3.yaml
new file mode 100644
index 0000000..c53cd3c
--- /dev/null
+++ b/experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_3.yaml
@@ -0,0 +1,29 @@
+modus: TRAIN_VAL
+dataset:
+ name: general_dataset
+ num_classes: 13
+ data_path: dataset/stanford_indoor/
+ test_sets: ['Area_3']
+ downsample_prefix: sample_0.03
+ colors: True
+ laser: False
+model:
+ name: multi_scale_cu_model
+batch_generator:
+ name: multi_scale_batch_generator
+ params:
+ batch_size: 24
+ num_points: 4096
+ blob_size: 0.5
+ metric: chebyshev
+ augmentation: False
+ root_cache_path: cache
+ radii: [0.25, 0.5, 1.0]
+optimizer:
+ name: exponential_decay_adam
+ params:
+ initial_lr: 0.001
+ decay_step: 300000
+ decay_rate: 0.5
+train:
+ epochs: 61
\ No newline at end of file
diff --git a/experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_4.yaml b/experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_4.yaml
new file mode 100644
index 0000000..d50e7fd
--- /dev/null
+++ b/experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_4.yaml
@@ -0,0 +1,29 @@
+modus: TRAIN_VAL
+dataset:
+ name: general_dataset
+ num_classes: 13
+ data_path: dataset/stanford_indoor/
+ test_sets: ['Area_4']
+ downsample_prefix: sample_0.03
+ colors: True
+ laser: False
+model:
+ name: multi_scale_cu_model
+batch_generator:
+ name: multi_scale_batch_generator
+ params:
+ batch_size: 24
+ num_points: 4096
+ blob_size: 0.5
+ metric: chebyshev
+ augmentation: False
+ root_cache_path: cache
+ radii: [0.25, 0.5, 1.0]
+optimizer:
+ name: exponential_decay_adam
+ params:
+ initial_lr: 0.001
+ decay_step: 300000
+ decay_rate: 0.5
+train:
+ epochs: 61
\ No newline at end of file
diff --git a/experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_5.yaml b/experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_5.yaml
new file mode 100644
index 0000000..7b52b3b
--- /dev/null
+++ b/experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_5.yaml
@@ -0,0 +1,29 @@
+modus: TRAIN_VAL
+dataset:
+ name: general_dataset
+ num_classes: 13
+ data_path: dataset/stanford_indoor/
+ test_sets: ['Area_5']
+ downsample_prefix: sample_0.03
+ colors: True
+ laser: False
+model:
+ name: multi_scale_cu_model
+batch_generator:
+ name: multi_scale_batch_generator
+ params:
+ batch_size: 24
+ num_points: 4096
+ blob_size: 0.5
+ metric: chebyshev
+ augmentation: False
+ root_cache_path: cache
+ radii: [0.25, 0.5, 1.0]
+optimizer:
+ name: exponential_decay_adam
+ params:
+ initial_lr: 0.001
+ decay_step: 300000
+ decay_rate: 0.5
+train:
+ epochs: 161
\ No newline at end of file
diff --git a/experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_6.yaml b/experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_6.yaml
new file mode 100644
index 0000000..ec96199
--- /dev/null
+++ b/experiments/iccvw_paper_2017/s3dis_mscu/s3dis_mscu_area_6.yaml
@@ -0,0 +1,29 @@
+modus: TRAIN_VAL
+dataset:
+ name: general_dataset
+ num_classes: 13
+ data_path: dataset/stanford_indoor/
+ test_sets: ['Area_6']
+ downsample_prefix: sample_0.03
+ colors: True
+ laser: False
+model:
+ name: multi_scale_cu_model
+batch_generator:
+ name: multi_scale_batch_generator
+ params:
+ batch_size: 24
+ num_points: 4096
+ blob_size: 0.5
+ metric: chebyshev
+ augmentation: False
+ root_cache_path: cache
+ radii: [0.25, 0.5, 1.0]
+optimizer:
+ name: exponential_decay_adam
+ params:
+ initial_lr: 0.001
+ decay_step: 300000
+ decay_rate: 0.5
+train:
+ epochs: 61
\ No newline at end of file
diff --git a/experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_1.yaml b/experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_1.yaml
new file mode 100644
index 0000000..9594447
--- /dev/null
+++ b/experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_1.yaml
@@ -0,0 +1,30 @@
+modus: TRAIN_VAL
+dataset:
+ name: general_dataset
+ num_classes: 13
+ data_path: dataset/stanford_indoor/
+ test_sets: ['Area_1']
+ downsample_prefix: sample_0.03
+ colors: True
+ laser: False
+model:
+ name: pointnet
+batch_generator:
+ name: neighboring_grid_batch_generator
+ params:
+ batch_size: 24
+ num_points: 4096
+ blob_size: 0.5
+ metric: chebyshev
+ augmentation: False
+ radius: 0.5
+ grid_x: 1
+ grid_y: 1
+optimizer:
+ name: exponential_decay_adam
+ params:
+ initial_lr: 0.001
+ decay_step: 300000
+ decay_rate: 0.5
+train:
+ epochs: 61
\ No newline at end of file
diff --git a/experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_2.yaml b/experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_2.yaml
new file mode 100644
index 0000000..3650bbb
--- /dev/null
+++ b/experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_2.yaml
@@ -0,0 +1,30 @@
+modus: TRAIN_VAL
+dataset:
+ name: general_dataset
+ num_classes: 13
+ data_path: dataset/stanford_indoor/
+ test_sets: ['Area_2']
+ downsample_prefix: sample_0.03
+ colors: True
+ laser: False
+model:
+ name: pointnet
+batch_generator:
+ name: neighboring_grid_batch_generator
+ params:
+ batch_size: 24
+ num_points: 4096
+ blob_size: 0.5
+ metric: chebyshev
+ augmentation: False
+ radius: 0.5
+ grid_x: 1
+ grid_y: 1
+optimizer:
+ name: exponential_decay_adam
+ params:
+ initial_lr: 0.001
+ decay_step: 300000
+ decay_rate: 0.5
+train:
+ epochs: 61
\ No newline at end of file
diff --git a/experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_3.yaml b/experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_3.yaml
new file mode 100644
index 0000000..b932d6f
--- /dev/null
+++ b/experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_3.yaml
@@ -0,0 +1,30 @@
+modus: TRAIN_VAL
+dataset:
+ name: general_dataset
+ num_classes: 13
+ data_path: dataset/stanford_indoor/
+ test_sets: ['Area_3']
+ downsample_prefix: sample_0.03
+ colors: True
+ laser: False
+model:
+ name: pointnet
+batch_generator:
+ name: neighboring_grid_batch_generator
+ params:
+ batch_size: 24
+ num_points: 4096
+ blob_size: 0.5
+ metric: chebyshev
+ augmentation: False
+ radius: 0.5
+ grid_x: 1
+ grid_y: 1
+optimizer:
+ name: exponential_decay_adam
+ params:
+ initial_lr: 0.001
+ decay_step: 300000
+ decay_rate: 0.5
+train:
+ epochs: 61
\ No newline at end of file
diff --git a/experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_4.yaml b/experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_4.yaml
new file mode 100644
index 0000000..46e1249
--- /dev/null
+++ b/experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_4.yaml
@@ -0,0 +1,30 @@
+modus: TRAIN_VAL
+dataset:
+ name: general_dataset
+ num_classes: 13
+ data_path: dataset/stanford_indoor/
+ test_sets: ['Area_4']
+ downsample_prefix: sample_0.03
+ colors: True
+ laser: False
+model:
+ name: pointnet
+batch_generator:
+ name: neighboring_grid_batch_generator
+ params:
+ batch_size: 24
+ num_points: 4096
+ blob_size: 0.5
+ metric: chebyshev
+ augmentation: False
+ radius: 0.5
+ grid_x: 1
+ grid_y: 1
+optimizer:
+ name: exponential_decay_adam
+ params:
+ initial_lr: 0.001
+ decay_step: 300000
+ decay_rate: 0.5
+train:
+ epochs: 61
\ No newline at end of file
diff --git a/experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_5.yaml b/experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_5.yaml
new file mode 100644
index 0000000..df453a2
--- /dev/null
+++ b/experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_5.yaml
@@ -0,0 +1,30 @@
+modus: TRAIN_VAL
+dataset:
+ name: general_dataset
+ num_classes: 13
+ data_path: dataset/stanford_indoor/
+ test_sets: ['Area_5']
+ downsample_prefix: sample_0.03
+ colors: True
+ laser: False
+model:
+ name: pointnet
+batch_generator:
+ name: neighboring_grid_batch_generator
+ params:
+ batch_size: 24
+ num_points: 4096
+ blob_size: 0.5
+ metric: chebyshev
+ augmentation: False
+ radius: 0.5
+ grid_x: 1
+ grid_y: 1
+optimizer:
+ name: exponential_decay_adam
+ params:
+ initial_lr: 0.001
+ decay_step: 300000
+ decay_rate: 0.5
+train:
+ epochs: 61
\ No newline at end of file
diff --git a/experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_6.yaml b/experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_6.yaml
new file mode 100644
index 0000000..e8713ce
--- /dev/null
+++ b/experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_6.yaml
@@ -0,0 +1,30 @@
+modus: TRAIN_VAL
+dataset:
+ name: general_dataset
+ num_classes: 13
+ data_path: dataset/stanford_indoor/
+ test_sets: ['Area_6']
+ downsample_prefix: sample_0.03
+ colors: True
+ laser: False
+model:
+ name: pointnet
+batch_generator:
+ name: neighboring_grid_batch_generator
+ params:
+ batch_size: 24
+ num_points: 4096
+ blob_size: 0.5
+ metric: chebyshev
+ augmentation: False
+ radius: 0.5
+ grid_x: 1
+ grid_y: 1
+optimizer:
+ name: exponential_decay_adam
+ params:
+ initial_lr: 0.001
+ decay_step: 300000
+ decay_rate: 0.5
+train:
+ epochs: 61
\ No newline at end of file
diff --git a/models/__init__.py b/models/__init__.py
new file mode 100644
index 0000000..b429857
--- /dev/null
+++ b/models/__init__.py
@@ -0,0 +1,4 @@
+from .multi_block_model import *
+from .multi_scale_cu_model import *
+from .pointnet import *
+from .gru_neighbor_model import *
diff --git a/models/gru_neighbor_model.py b/models/gru_neighbor_model.py
new file mode 100644
index 0000000..61713dc
--- /dev/null
+++ b/models/gru_neighbor_model.py
@@ -0,0 +1,73 @@
+from tools import tf_util
+from .multi_block_model import *
+from batch_generators import *
+from typing import Dict
+
+
+class GruNeighborModel(MultiBlockModel):
+ """
+ parameterized version of a neighboring model using GRU units as described in the paper
+ """
+
+ def __init__(self, batch_generator: BatchGenerator, params: Dict[str, list]):
+ super().__init__(batch_generator)
+
+ self._bn_decay = 0.9
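+
+        # shape sketch (assuming the batch generator yields blocks of shape
+        # (B, T, N, F), where T = grid_x * grid_y neighboring blocks per sample):
+        #   (B, T, N, F) -> reshape -> (B*T, N, F): per-block PointNet features
+        #   (B*T, 64)    -> reshape -> (B, T, 64):  the GRU consolidates over the T blocks
+        #   per-block and GRU features are then concatenated for per-point classification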
+
+ @lazy_property
+ def _prediction_helper(self):
+ num_point = self.batch_generator.num_points
+ batch_size = self.batch_generator.batch_size
+
+ dims = self.batch_generator.input_shape
+
+ cumulated_batch_size = dims[0] * dims[1]
+ time = dims[1]
+
+ input_image = tf.reshape(self.batch_generator.pointclouds_pl, (cumulated_batch_size, dims[2], dims[3]))
+ input_image = tf.expand_dims(input_image, -1)
+
+ net = input_image
+
+ # CONV
+ net = tf_util.conv2d(net, 64, [1, self.batch_generator.dataset.num_features + 3], padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training_pl, scope='conv1', bn_decay=self._bn_decay)
+ net = tf_util.conv2d(net, 64, [1, 1], padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training_pl, scope='conv2', bn_decay=self._bn_decay)
+ net = tf_util.conv2d(net, 64, [1, 1], padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training_pl, scope='conv3', bn_decay=self._bn_decay)
+ net = tf_util.conv2d(net, 128, [1, 1], padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training_pl, scope='conv4', bn_decay=self._bn_decay)
+ points_feat1 = tf_util.conv2d(net, 1024, [1, 1], padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training_pl, scope='conv5', bn_decay=self._bn_decay)
+ # MAX
+ pc_feat1 = tf_util.max_pool2d(points_feat1, [num_point, 1], padding='VALID', scope='maxpool1')
+
+ # FC
+ pc_feat2 = tf.reshape(pc_feat1, [cumulated_batch_size, -1])
+ pc_feat2 = tf_util.fully_connected(pc_feat2, 256, bn=True, is_training=self.is_training_pl, scope='fc1', bn_decay=self._bn_decay)
+ pc_feat2 = tf_util.fully_connected(pc_feat2, 64, bn=True, is_training=self.is_training_pl, scope='fc2', bn_decay=self._bn_decay)
+ tf.summary.histogram("pc_feat2_b", pc_feat2[:])
+ pc_feat2_ = tf.reshape(pc_feat2, (batch_size, time, 64))
+
+ pc_feat2_ = tf_util.gru_seq(pc_feat2_, 64, batch_size, time, False, scope='gru1')
+
+ pc_feat2_ = tf.reshape(pc_feat2_, (cumulated_batch_size, 64))
+ tf.summary.histogram("pc_feat2_a", pc_feat2_[:])
+
+ pc_feat2 = tf.concat([pc_feat2, pc_feat2_], axis=1)
+ # CONCAT
+ pc_feat2_expand = tf.tile(tf.reshape(pc_feat2, [cumulated_batch_size, 1, 1, -1]), [1, num_point, 1, 1])
+ points_feat2_concat = tf.concat(axis=3, values=[points_feat1, pc_feat2_expand])
+
+ # CONV
+ net2 = tf_util.conv2d(points_feat2_concat, 512, [1, 1], padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training_pl, scope='conv6')
+ net2 = tf_util.conv2d(net2, 256, [1, 1], padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training_pl, scope='conv7')
+ net2 = tf_util.dropout(net2, keep_prob=0.7, is_training=self.is_training_pl, scope='dp1')
+ net2 = tf_util.conv2d(net2, self.batch_generator.dataset.num_classes, [1, 1], padding='VALID', stride=[1, 1],
+ activation_fn=None, scope='conv8')
+ net2 = tf.squeeze(net2, [2])
+
+ return tf.reshape(net2, (batch_size, time, num_point, -1))
diff --git a/models/multi_block_model.py b/models/multi_block_model.py
new file mode 100644
index 0000000..25c6f33
--- /dev/null
+++ b/models/multi_block_model.py
@@ -0,0 +1,75 @@
+import tensorflow as tf
+from abc import *
+from tools.lazy_decorator import *
+
+
+class MultiBlockModel(ABC):
+
+ def __init__(self, batch_generator):
+ self.batch_generator = batch_generator
+
+ self._create_placeholders()
+
+ @lazy_function
+ def _create_placeholders(self):
+
+ self.eval_per_epoch_pl = tf.placeholder(tf.float32,
+ name='evaluation_pl',
+ shape=(3, 1))
+
+ self.block_mask_bool = tf.cast(self.batch_generator.mask_pl, tf.bool) # shape: (BS, 1)
+ self.labels = tf.boolean_mask(self.batch_generator.labels_pl, self.block_mask_bool) # shape: (BS, N)
+ self.labels = tf.cast(self.labels, tf.int32)
+
+ num_blocks = tf.reduce_sum(self.batch_generator.mask_pl) # num of blocks per batch
+ self.num_blocks = tf.cast(num_blocks, tf.float32)
+
+ self.is_training_pl = tf.placeholder(tf.bool,
+ name='is_training_pl',
+ shape=())
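+
+    # masking sketch: mask_pl marks which blocks of a batch are valid (e.g. when
+    # the last batch is padded); boolean_mask drops the invalid blocks from the
+    # labels above and from the predictions below, so loss and accuracy only
+    # cover real blocks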
+
+ @lazy_property
+ def prediction(self):
+ pred = self._prediction_helper
+ # Apply mask to prediction and labels
+ return tf.boolean_mask(pred, self.block_mask_bool) # shape: (BS, N, K)
+
+ @lazy_property
+ def prediction_sm(self):
+ return tf.nn.softmax(self.prediction)
+
+ @lazy_property
+ @abstractmethod
+ def _prediction_helper(self):
+ raise NotImplementedError('Should be defined in subclass')
+
+ @lazy_property
+ def loss(self):
+ loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=self.prediction,
+ labels=self.labels)
+ return tf.reduce_mean(loss)
+
+ @lazy_property
+ def correct(self):
+ return tf.equal(tf.argmax(self.prediction, 2), tf.to_int64(self.labels))
+
+ @lazy_property
+ def accuracy(self):
+ return tf.reduce_sum(tf.cast(self.correct, tf.float32)) / \
+ tf.cast(self.batch_generator.num_points * self.num_blocks, tf.float32)
+
+ @lazy_function
+ def register_summary(self):
+ tf.summary.scalar('loss', self.loss)
+ tf.summary.scalar('avg_acc', self.eval_per_epoch_pl[0, 0])
+ tf.summary.scalar('avg_iou', self.eval_per_epoch_pl[1, 0])
+ tf.summary.scalar('avg_loss', self.eval_per_epoch_pl[2, 0])
+ tf.summary.scalar('accuracy', self.accuracy)
+
+ @property
+ def input_shape(self):
+ return self.batch_generator.pointclouds_pl.get_shape().as_list()
+
+ @property
+ def labels_shape(self):
+ return self.batch_generator.labels_pl.get_shape().as_list()
diff --git a/models/multi_scale_cu_model.py b/models/multi_scale_cu_model.py
new file mode 100644
index 0000000..ec5e50c
--- /dev/null
+++ b/models/multi_scale_cu_model.py
@@ -0,0 +1,87 @@
+from tools import tf_util
+from .multi_block_model import *
+from batch_generators import *
+from typing import Dict
+
+
+class MultiScaleCuModel(MultiBlockModel):
+ """
+ parameterized version of a multiscale pointnet with consolidation units
+ """
+
+ def __init__(self, batch_generator: BatchGenerator, params: Dict[str, list]):
+ """
+        initialization of the multi-scale model with consolidation units
+        :param batch_generator:
+        :param params: contains parameters for
+                       - ilc_sizes (filter sizes for input-level context)
+                       - cu_sizes (consolidation units' sizes)
+                       - olc_sizes (filter sizes for output-level context)
+ """
+ super().__init__(batch_generator)
+
+ if params is None:
+ # standard parameters from the paper
+ self._ilc_sizes = [64, 128]
+ self._cu_sizes = [256, 1024]
+ self._olc_sizes = [512, 128]
+ else:
+ # load custom parameters
+ self._ilc_sizes = params['ilc_sizes']
+ self._cu_sizes = params['cu_sizes']
+ self._olc_sizes = params['olc_sizes']
+
+ self._bn_decay = 0.9
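+
+        # example params dict (a sketch mirroring the defaults above):
+        #   {'ilc_sizes': [64, 128], 'cu_sizes': [256, 1024], 'olc_sizes': [512, 128]}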
+
+ @lazy_property
+ def _prediction_helper(self):
+ # pointcloud placeholder has the following format: BxSxNxF
+ # Batch B
+ # Scale S
+        # Point N
+ # Feature F
+
+ # allow an arbitrary number of scales
+ scales = [tf.expand_dims(self.batch_generator.pointclouds_pl[:, i, ...], axis=2)
+ for i in range(self.input_shape[1])]
+
+ num_points = self.batch_generator.num_points
+
+        # store a reference to the original (blob-sized) scale for later concatenation
+ scale1 = scales[1]
+
+ ''' INPUT-LEVEL CONTEXT '''
+ for scale_index in range(len(scales)):
+ # build global feature extractor for each scale independently
+ for size_index, ilc_size in enumerate(self._ilc_sizes):
+ scales[scale_index] = tf_util.conv2d(scales[scale_index], ilc_size, [1, 1], padding='VALID',
+ stride=[1, 1],
+ bn=True, is_training=self.is_training_pl,
+ scope='ilc_conv' + str(size_index) + 'sc' + str(scale_index),
+ bn_decay=self._bn_decay)
+ # calculate global features for each scale
+ scales[scale_index] = tf.reduce_max(scales[scale_index], axis=1,
+ keep_dims=True, name="gf_sc" + str(scale_index))
+
+ ''' CONCATENATE GLOBAL FEATURES OF ALL SCALES '''
+ net = tf.concat(values=scales, axis=3)
+ net = tf.tile(net, [1, num_points, 1, 1], name='repeat')
+ net = tf.concat(values=[scale1, net], axis=3)
+
+ ''' CONSOLIDATION UNIT SECTION '''
+ for index, cu_size in enumerate(self._cu_sizes):
+ net = tf_util.consolidation_unit(net, size=cu_size, scope='cu' + str(index), bn=True,
+ bn_decay=self._bn_decay, is_training=self.is_training_pl)
+
+ ''' OUTPUT-LEVEL CONTEXT '''
+ for size_index, olc_size in enumerate(self._olc_sizes):
+ net = tf_util.conv2d(net, olc_size, [1, 1], padding='VALID', stride=[1, 1], bn=True,
+ is_training=self.is_training_pl,
+ scope='olc_conv' + str(size_index), bn_decay=self._bn_decay)
+
+ net = tf_util.conv2d(net, self.batch_generator.dataset.num_classes, [1, 1], padding='VALID', stride=[1, 1],
+ activation_fn=None, scope='conv_output')
+
+ net = tf.transpose(net, [0, 2, 1, 3])
+
+ return net
diff --git a/models/pointnet.py b/models/pointnet.py
new file mode 100644
index 0000000..ba3794f
--- /dev/null
+++ b/models/pointnet.py
@@ -0,0 +1,63 @@
+from tools import tf_util
+from .multi_block_model import *
+from batch_generators import *
+from typing import Dict
+
+
+class Pointnet(MultiBlockModel):
+ """
+ Pointnet network architecture from
+ PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
+    Authors: Qi et al.
+ """
+
+ def __init__(self, batch_generator: BatchGenerator, params: Dict[str, list]):
+ super().__init__(batch_generator)
+ self._bn_decay = 0.9
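+
+        # architecture sketch (per block): per-point 1x1 convolutions
+        # (64-64-64-128-1024), max-pool to one global feature, FC layers (256-128),
+        # tile the global feature back to every point, concatenate it with the
+        # per-point features and classify each point (512-256-num_classes)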
+
+ @lazy_property
+ def _prediction_helper(self):
+ # pointcloud placeholder has the following format: BxSxNxF
+ # Batch B
+ # Scale S
+        # Point N
+ # Feature F
+ num_points = self.batch_generator.pointclouds_pl.get_shape().as_list()[2]
+
+ image_pl = tf.transpose(self.batch_generator.pointclouds_pl, [0, 2, 1, 3])
+
+ # CONV
+ net = tf_util.conv2d(image_pl, 64, [1, 1], padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training_pl, scope='conv1', bn_decay=self._bn_decay)
+ net = tf_util.conv2d(net, 64, [1, 1], padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training_pl, scope='conv2', bn_decay=self._bn_decay)
+ net = tf_util.conv2d(net, 64, [1, 1], padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training_pl, scope='conv3', bn_decay=self._bn_decay)
+ net = tf_util.conv2d(net, 128, [1, 1], padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training_pl, scope='conv4', bn_decay=self._bn_decay)
+ points_feat1 = tf_util.conv2d(net, 1024, [1, 1], padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training_pl, scope='conv5', bn_decay=self._bn_decay)
+ # MAX
+ pc_feat1 = tf.reduce_max(points_feat1, axis=1, keep_dims=True, name="global_features")
+ # FC
+ pc_feat1 = tf.reshape(pc_feat1, [-1, 1024])
+ pc_feat1 = tf_util.fully_connected(pc_feat1, 256, bn=True, is_training=self.is_training_pl, scope='fc1',
+ bn_decay=self._bn_decay)
+ pc_feat1 = tf_util.fully_connected(pc_feat1, 128, bn=True, is_training=self.is_training_pl, scope='fc2',
+ bn_decay=self._bn_decay)
+
+ # CONCAT
+ pc_feat1_expand = tf.tile(tf.reshape(pc_feat1, [-1, 1, 1, 128]), [1, num_points, 1, 1])
+ points_feat1_concat = tf.concat(axis=3, values=[points_feat1, pc_feat1_expand])
+
+ # CONV
+ net = tf_util.conv2d(points_feat1_concat, 512, [1, 1], padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training_pl, scope='conv6')
+ net = tf_util.conv2d(net, 256, [1, 1], padding='VALID', stride=[1, 1],
+ bn=True, is_training=self.is_training_pl, scope='conv7')
+ net = tf_util.dropout(net, keep_prob=0.7, is_training=self.is_training_pl, scope='dp1')
+ net = tf_util.conv2d(net, self.batch_generator.dataset.num_classes, [1, 1], padding='VALID', stride=[1, 1],
+ activation_fn=None, scope='conv8')
+
+ net = tf.transpose(net, [0, 2, 1, 3])
+ return net
diff --git a/optimizers/__init__.py b/optimizers/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/optimizers/exponential_decay_adam.py b/optimizers/exponential_decay_adam.py
new file mode 100644
index 0000000..654a24e
--- /dev/null
+++ b/optimizers/exponential_decay_adam.py
@@ -0,0 +1,48 @@
+from models import *
+from tools.lazy_decorator import *
+
+
+class ExponentialDecayAdam:
+ def __init__(self, model, params: dict):
+        # variable incremented by one each time the model variables are updated by the optimizer;
+        # it keeps track of training progress (e.g. for adapting the learning rate)
+        # and must not be trainable itself
+        self.global_step = tf.Variable(0, name='global_step_counter', trainable=False)
+ self._model = model
+ self._base_learning_rate = params['initial_lr']
+ self._decay_step = params['decay_step']
+ self._decay_rate = params['decay_rate']
+
+ self._gradient_clipping = params.get('gradient_clipping')
+
+ @lazy_property
+ def learning_rate(self):
+ learning_rate = tf.cond(self.global_step * self._model.batch_generator.batch_size < self._decay_step,
+ lambda: tf.constant(self._base_learning_rate),
+ lambda: tf.train.exponential_decay(
+ self._base_learning_rate, # Base learning rate.
+ self.global_step * self._model.batch_generator.batch_size - self._decay_step,
+ self._decay_step // 2, # Decay step.
+ self._decay_rate, # Decay rate.
+ staircase=False))
+ return learning_rate
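+
+    # schedule sketch: with s = global_step * batch_size (training examples seen),
+    #   lr(s) = initial_lr                                                        if s < decay_step
+    #   lr(s) = initial_lr * decay_rate ** ((s - decay_step) / (decay_step // 2))  otherwise
+    # i.e. for decay_rate = 0.5 the rate smoothly halves every decay_step // 2 examples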
+
+ @lazy_function
+ def register_summary(self):
+ tf.summary.scalar('global_step', self.global_step)
+ tf.summary.scalar('learning_rate', self.learning_rate)
+
+ @lazy_property
+ def optimize(self):
+ if self._gradient_clipping is not None:
+ # Clipping gradients - check if gradients explode
+ optimizer = tf.train.AdamOptimizer(self.learning_rate)
+            trainables = tf.trainable_variables()
+            gradients = tf.gradients(self._model.loss, trainables)
+ clipped_gradients, _ = tf.clip_by_global_norm(gradients, self._gradient_clipping)
+
+ extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
+ with tf.control_dependencies(extra_update_ops):
+ return optimizer.apply_gradients(zip(clipped_gradients, trainables), global_step=self.global_step)
+ else:
+ optimizer = tf.train.AdamOptimizer(self.learning_rate)
+ return optimizer.minimize(self._model.loss, global_step=self.global_step)
diff --git a/run.py b/run.py
new file mode 100644
index 0000000..e5039a7
--- /dev/null
+++ b/run.py
@@ -0,0 +1,324 @@
+import argparse
+from datasets import *
+from models import *
+import yaml
+from tools.tools import *
+from pathlib import Path
+import tools.evaluation as evaluation
+import shutil
+import logging
+
+avg_iou_per_epoch = [0]
+avg_class_acc_per_epoch = [0]
+avg_loss_per_epoch = [0]
+
+
+def main(config: dict, log_dir: str, isTrain: bool):
+ with tf.Graph().as_default():
+ Dataset = import_class('datasets', config['dataset']['name'])
+ dataset = Dataset(config['dataset']['data_path'],
+ is_train=isTrain,
+ test_sets=config['dataset']['test_sets'],
+ downsample_prefix=config['dataset']['downsample_prefix'],
+ is_colors=config['dataset']['colors'],
+ is_laser=config['dataset']['laser'],
+ n_classes=config['dataset']['num_classes'])
+
+ BatchGenerator = import_class('batch_generators', config['batch_generator']['name'])
+ batch_generator = BatchGenerator(dataset, config['batch_generator']['params'])
+
+ Model = import_class('models', config['model']['name'])
+ model = Model(batch_generator, config['model'].get('params'))
+
+ if isTrain:
+ Optimizer = import_class('optimizers', config['optimizer']['name'])
+ optimizer = Optimizer(model, config['optimizer']['params'])
+
+ sess, ops, writer, saver, epoch_start = prepare_network(model, log_dir, optimizer, isTrain=isTrain,
+ model_path=config.get('resume_path'))
+
+ for epoch in range(epoch_start, config['train']['epochs']):
+ train_one_epoch(sess, ops, writer, model, epoch, config['train']['epochs'])
+ eval_one_epoch(sess, ops, model, dataset, epoch, config['train']['epochs'])
+
+ # Save the variables to disk.
+ if epoch % 10 == 0:
+ path = Path(f"{log_dir}/model_ckpts")
+ path.mkdir(parents=True, exist_ok=True)
+ saver.save(sess, os.path.join(f"{log_dir}/model_ckpts",
+ f"{epoch+1:03d}_model.ckpt"))
+ else:
+ sess, ops, writer, saver, _ = prepare_network(model, log_dir,
+ isTrain=isTrain, model_path=config['model_path'])
+ predict_on_test_set(sess, ops, model, dataset, log_dir)
+
+
+def prepare_network(model: MultiBlockModel, log_dir: str, optimizer=None, isTrain=True, model_path=None):
+ # Create a session
+ config = tf.ConfigProto()
+ config.gpu_options.allow_growth = True
+ config.allow_soft_placement = True
+ config.log_device_placement = False
+ sess = tf.Session(config=config)
+
+ with tf.device('/gpu:0'):
+ model.register_summary()
+ if optimizer is not None:
+ optimizer.register_summary()
+
+ # Add summary writers
+ merged = tf.summary.merge_all()
+ train_writer = tf.summary.FileWriter(os.path.join(log_dir, 'tensorflow'), sess.graph)
+
+    ops = {'pointclouds_pl': model.batch_generator.pointclouds_pl,
+           'labels_pl': model.batch_generator.labels_pl,
+           'mask_pl': model.batch_generator.mask_pl,
+           'eval_per_epoch_pl': model.eval_per_epoch_pl,
+           'is_training_pl': model.is_training_pl,
+           'pred': model.prediction,
+           'pred_sm': model.prediction_sm,
+           'loss': model.loss,
+           'correct': model.correct,
+           'labels': model.labels,
+           'handle_pl': model.batch_generator.handle_pl,
+           'iterator_train': model.batch_generator.iterator_train,
+           'iterator_test': model.batch_generator.iterator_test,
+           'cloud_ids_pl': model.batch_generator.cloud_ids_pl,
+           'point_ids_pl': model.batch_generator.point_ids_pl,
+           'next_element': model.batch_generator.next_element
+           }
+
+    if optimizer is not None:
+        # training additionally needs the train op, the merged summaries and the global step
+        ops['train_op'] = optimizer.optimize
+        ops['merged'] = merged
+        ops['step'] = optimizer.global_step
+
+ # Init variables
+ init = tf.global_variables_initializer()
+ sess.run(init, {model.is_training_pl: isTrain})
+
+ # Add ops to save and restore all the variables.
+ saver = tf.train.Saver()
+
+ epoch_number = 0
+
+ if model_path is not None:
+ # resume training
+ latest_checkpoint_path = tf.train.latest_checkpoint(model_path)
+ # extract latest training epoch number
+ epoch_number = int(latest_checkpoint_path.split('/')[-1].split('_')[0])
+
+ saver.restore(sess, latest_checkpoint_path)
+
+ return sess, ops, train_writer, saver, epoch_number
+
+
+def train_one_epoch(sess, ops, train_writer, model, epoch, max_epoch):
+ model.batch_generator.shuffle()
+
+ for _ in tqdm(range(model.batch_generator.num_train_batches),
+ desc=f"Running training epoch {epoch+1:03d} / {max_epoch:03d}"):
+
+ a = np.reshape(np.array(avg_class_acc_per_epoch[-1]), [1, 1])
+ b = np.reshape(np.array(avg_iou_per_epoch[-1]), [1, 1])
+ c = np.reshape(np.array(avg_loss_per_epoch[-1]), [1, 1])
+ eval_per_epoch = np.concatenate((a, b, c))
+
+ handle_train = sess.run(ops['iterator_train'].string_handle())
+ feed_dict = {ops['is_training_pl']: True,
+ ops['eval_per_epoch_pl']: eval_per_epoch,
+ ops['handle_pl']: handle_train}
+
+ start_time = time.time()
+
+ summary, step, _, loss_val, pc_val, pred_val, labels_val, correct_val = sess.run(
+ [ops['merged'], ops['step'], ops['train_op'], ops['loss'],
+ ops['pointclouds_pl'],
+ ops['pred'], ops['labels'], ops['correct']],
+ feed_dict=feed_dict)
+
+ elapsed_time = time.time() - start_time
+ summary2 = tf.Summary()
+ summary2.value.add(tag='secs_per_iter', simple_value=elapsed_time)
+ train_writer.add_summary(summary2, step)
+ train_writer.add_summary(summary, step)
+
+
+def eval_one_epoch(sess, ops, model, dataset, epoch, max_epoch):
+ total_correct = 0
+ total_seen = 0
+ loss_sum = 0
+
+ # Compute avg IoU over classes
+ total_seen_class = [0 for _ in range(dataset.num_classes)] # true_pos + false_neg i.e. all points from this class
+ total_correct_class = [0 for _ in range(dataset.num_classes)] # true_pos
+ total_pred_class = [0 for _ in range(dataset.num_classes)] # true_pos + false_pos i.e. num pred classes
+
+ overall_acc = []
+
+ for _ in tqdm(range(model.batch_generator.num_test_batches),
+                  desc=f"Running evaluation epoch {epoch+1:03d} / {max_epoch:03d}"):
+
+ a = np.reshape(np.array(avg_class_acc_per_epoch[-1]), [1, 1])
+ b = np.reshape(np.array(avg_iou_per_epoch[-1]), [1, 1])
+ c = np.reshape(np.array(avg_loss_per_epoch[-1]), [1, 1])
+ eval_per_epoch = np.concatenate((a, b, c))
+
+ handle_test = sess.run(ops['iterator_test'].string_handle())
+ feed_dict = {ops['is_training_pl']: False,
+ ops['eval_per_epoch_pl']: eval_per_epoch,
+ ops['handle_pl']: handle_test}
+
+ _, step, loss_val, pred_val, correct_val, labels_val, batch_mask, batch_cloud_ids, batch_point_ids = sess.run(
+ [ops['merged'], ops['step'], ops['loss'],
+ ops['pred_sm'], ops['correct'], ops['labels'],
+ ops['mask_pl'], ops['cloud_ids_pl'], ops['point_ids_pl']], feed_dict=feed_dict)
+
+ total_correct += np.sum(correct_val) # shape: scalar
+ total_seen += pred_val.shape[0] * pred_val.shape[1]
+
+ overall_acc.append(total_correct / total_seen)
+
+ loss_sum += loss_val
+
+ pred_val = np.argmax(pred_val, 2) # shape: (BS*B' x N)
+
+ for i in range(labels_val.shape[0]): # iterate over blocks
+ for j in range(labels_val.shape[1]): # iterate over points in block
+ lbl_gt = int(labels_val[i, j])
+ lbl_pred = int(pred_val[i, j])
+ total_seen_class[lbl_gt] += 1
+ total_correct_class[lbl_gt] += (lbl_pred == lbl_gt)
+ total_pred_class[lbl_pred] += 1
+
+ iou_per_class = np.zeros(dataset.num_classes)
+ iou_per_class_mask = np.zeros(dataset.num_classes, dtype=np.int8)
+ for i in range(dataset.num_classes):
+ denominator = float(total_seen_class[i] + total_pred_class[i] - total_correct_class[i])
+
+ if denominator != 0:
+ iou_per_class[i] = total_correct_class[i] / denominator
+ else:
+ iou_per_class_mask[i] = 1
+
+ iou_per_class_masked = np.ma.array(iou_per_class, mask=iou_per_class_mask)
+
+ total_seen_class_mask = [1 if seen == 0 else 0 for seen in total_seen_class]
+
+ class_acc = np.array(total_correct_class) / np.array(total_seen_class, dtype=np.float)
+
+ class_acc_masked = np.ma.array(class_acc, mask=total_seen_class_mask)
+
+ avg_iou = iou_per_class_masked.mean()
+ avg_loss = loss_sum / float(total_seen / model.batch_generator.num_points)
+ avg_class_acc = class_acc_masked.mean()
+ avg_class_acc_per_epoch.append(avg_class_acc)
+ avg_iou_per_epoch.append(avg_iou)
+ avg_loss_per_epoch.append(avg_loss)
+
+ logging.info(f"[Epoch {epoch+1:03d}] avg class acc: {avg_class_acc}")
+ logging.info(f"[Epoch {epoch+1:03d}] avg iou: {avg_iou}")
+ logging.info(f"[Epoch {epoch+1:03d}] avg overall acc: {np.mean(overall_acc)}")
+
+
+def predict_on_test_set(sess, ops, model, dataset: GeneralDataset, log_dir: str):
+ is_training = False
+
+ cumulated_result = {}
+
+ for _ in tqdm(range(model.batch_generator.num_test_batches)):
+ handle_test = sess.run(ops['iterator_test'].string_handle())
+ feed_dict = {ops['is_training_pl']: is_training,
+                     ops['eval_per_epoch_pl']: np.zeros((3, 1)),
+ ops['handle_pl']: handle_test}
+
+ loss_val, pred_val, correct_val, labels_val, batch_mask, batch_cloud_ids, batch_point_ids = sess.run(
+ [ops['loss'], ops['pred_sm'], ops['correct'], ops['labels'],
+ ops['mask_pl'], ops['cloud_ids_pl'], ops['point_ids_pl']], feed_dict=feed_dict)
+
+ num_classes = pred_val.shape[2]
+ num_batches = pred_val.shape[0]
+
+ batch_mask = np.array(batch_mask, dtype=bool) # shape: (BS, B) - convert mask to bool
+ batch_point_ids = batch_point_ids[batch_mask] # shape: (B, N)
+ batch_cloud_ids = batch_cloud_ids[batch_mask] # shape: (B)
+
+ for batch_id in range(num_batches):
+ pc_id = batch_cloud_ids[batch_id]
+ pc_name = dataset.file_names[pc_id]
+
+ for point_in_batch, point_id in enumerate(batch_point_ids[batch_id, :]):
+ num_fs_properties = dataset.data[pc_id].shape[1]
+ if pc_name not in cumulated_result:
+                    # if there is no information about this point cloud so far, initialize it
+                    # a label of -1 means that no label has been assigned yet
+                    # predictions for the same point are accumulated
+ cumulated_result[pc_name] = np.zeros((dataset.data[pc_id].shape[0],
+ num_fs_properties + num_classes + 1))
+ cumulated_result[pc_name][:, :num_fs_properties] = dataset.data[pc_id]
+ cumulated_result[pc_name][:, -1] = -1
+
+ cumulated_result[pc_name][point_id, num_fs_properties:-1] += pred_val[batch_id, point_in_batch]
+ cumulated_result[pc_name][point_id, -1] = np.argmax(cumulated_result[pc_name][point_id,
+ num_fs_properties:-1])
+
+ for key in tqdm(cumulated_result.keys(), desc='knn interpolation for full sized point cloud'):
+ cumulated_result[key] = evaluation.knn_interpolation(cumulated_result[key], dataset.full_sized_data[key])
+
+ class_acc, class_iou, overall_acc = evaluation.calculate_scores(cumulated_result, dataset.num_classes)
+
+ logging.info(f" overall accuracy: {overall_acc}")
+ logging.info(f"mean class accuracy: {np.nanmean(class_acc)}")
+ logging.info(f" mean iou: {np.nanmean(class_iou)}")
+
+ for i in range(dataset.num_classes):
+ logging.info(f"accuracy for class {i}: {class_acc[i]}")
+ logging.info(f" iou for class {i}: {class_iou[i]}")
+
+ evaluation.save_npy_results(cumulated_result, log_dir)
+ evaluation.save_pc_as_obj(cumulated_result, dataset.label_colors(), log_dir)
+
+
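+# usage sketch (illustrative invocation; experiment definitions live under experiments/):
+#   python run.py --config experiments/iccvw_paper_2017/s3dis_pointnet/s3dis_pointnet_area_1.yaml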
+if __name__ == '__main__':
+ log_dir = setup_logger()
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--config", help="experiment definition file", metavar="FILE", required=True)
+    params = parser.parse_args()
+
+ with open(params.config, 'r') as stream:
+ try:
+            config = yaml.safe_load(stream)
+ # backup config file
+ shutil.copy(params.config, log_dir)
+
+            isTrain = config['modus'] == 'TRAIN_VAL'
+
+ main(config, log_dir, isTrain)
+ except yaml.YAMLError as exc:
+        logging.error(f'Configuration file could not be read: {exc}')
+ exit(1)
diff --git a/tools/__init__.py b/tools/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/tools/downsample.py b/tools/downsample.py
new file mode 100644
index 0000000..8a845c0
--- /dev/null
+++ b/tools/downsample.py
@@ -0,0 +1,88 @@
+"""
+Downsample full-sized point clouds in order to speed up batch generation and to obtain better block representations.
+Resulting point clouds will be saved at the appropriate positions in the file system
+(For further information consult the wiki)
+"""
+
+import numpy as np
+import argparse
+import tools
+from termcolor import colored
+import os
+from tqdm import tqdm
+
+
+def blockwise_uniform_downsample(data_labels, cell_size):
+ data_dim = data_labels.shape[1] - 1
+
+ number_classes = int(data_labels[:, -1].max()) + 1 # counting starts with label 0
+
+ d = {}
+ for i in tqdm(range(data_labels.shape[0]), desc='downsampling of points'):
+ # create block boundaries
+ x = int(data_labels[i, 0] / cell_size)
+ y = int(data_labels[i, 1] / cell_size)
+ z = int(data_labels[i, 2] / cell_size)
+
+        # add space for one-hot encoding (used for easier class occurrence counting)
+ # and counter value at the end of the row
+ new_tuple = np.zeros([data_labels.shape[1] + number_classes], dtype=float)
+ new_tuple[0:data_dim] = data_labels[i, 0:-1]
+ new_tuple[data_dim+int(data_labels[i, -1])] = 1 # set label to one for the corresponding class
+ new_tuple[-1] = 1 # count elements in the block
+
+        # note: elementwise numpy addition accumulates features, one-hot label counts and the point counter
+        try:
+            d[(x, y, z)] += new_tuple
+        except KeyError:
+            d[(x, y, z)] = new_tuple
+
+ data = []
+ labels = []
+
+ for _, v in d.items():
+ N = v[-1] # number of points in voxel
+
+        # aggregate all points in the block into their mean point
+        data.append([v[i] / N for i in range(data_dim)])
+
+        # majority vote: pick the label with the highest one-hot count (the trailing counter is excluded)
+        labels.append(np.argmax(v[data_dim:-1]))
+
+ data = np.stack(data, axis=0)
+ labels = np.stack(labels, axis=0)
+
+ data_labels_new = np.hstack([data, np.expand_dims(labels, 1)]).astype(np.float32)
+ return data_labels_new
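+
+# Minimal usage sketch (hypothetical file name; columns assumed to be XYZRGB + label):
+#   data_labels = np.load('Area_1_office_1.npy')                      # shape (N, 7)
+#   down = blockwise_uniform_downsample(data_labels, cell_size=0.03)
+# Every 3 cm cell is collapsed to the mean of its points, and the cell label is
+# the majority vote over the one-hot counts accumulated per cell.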
+
+
+def main(params):
+ for dirpath, dirnames, filenames in os.walk(params.data_dir):
+ if os.path.basename(dirpath) == 'full_size':
+ for filename in [f for f in filenames if f.endswith(".npy")]:
+ print(f"downsampling {filename} in progress ...")
+ data_labels = np.load(os.path.join(dirpath, filename))
+ sampled_data_labels = blockwise_uniform_downsample(data_labels, params.cell_size)
+
+ out_folder = os.path.join(os.path.dirname(dirpath), f"sample_{params.cell_size}")
+
+ if not os.path.exists(out_folder):
+ os.makedirs(out_folder)
+
+ np.save(os.path.join(out_folder, filename), sampled_data_labels)
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description='Convert the original data set to a uniformly downsampled numpy '
+                                                 'version to speed up batch generation '
+                                                 'and to obtain better block representations')
+
+ parser.add_argument('--data_dir', required=True, help='root directory of original data')
+ parser.add_argument('--cell_size', type=float, default=0.03, help='width/length of downsampling cell')
+ params = parser.parse_args()
+
+ tools.pretty_print_arguments(params)
+
+ main(params)
+
+ print(colored('Finished successfully', 'green'))
diff --git a/tools/evaluation.py b/tools/evaluation.py
new file mode 100644
index 0000000..ea72b67
--- /dev/null
+++ b/tools/evaluation.py
@@ -0,0 +1,111 @@
+"""
+Contains methods for evaluating and exporting the result of the network
+"""
+
+import os
+import numpy as np
+from typing import Dict
+from tqdm import tqdm
+from sklearn.neighbors import BallTree
+
+
+def knn_interpolation(cumulated_pc: np.ndarray, full_sized_data: np.ndarray, k=5):
+    """
+    Use k-nn interpolation to infer labels for all points of the full-sized point cloud
+    :param cumulated_pc: cumulated point cloud results after running the network
+    :param full_sized_data: full-sized point cloud
+    :param k: k for k-nearest-neighbour interpolation
+    :return: point cloud with predicted labels in the last column and ground truth labels in the second-to-last column
+    """
+
+ labeled = cumulated_pc[cumulated_pc[:, -1] != -1]
+ to_be_predicted = full_sized_data.copy()
+
+ ball_tree = BallTree(labeled[:, :3], metric='euclidean')
+
+ knn_classes = labeled[ball_tree.query(to_be_predicted[:, :3], k=k)[1]][:, :, -1].astype(int)
+
+    # majority vote among the k nearest labeled neighbours
+    interpolated = np.zeros(knn_classes.shape[0])
+
+    for i in range(knn_classes.shape[0]):
+        interpolated[i] = np.bincount(knn_classes[i]).argmax()
+
+ output = np.zeros((to_be_predicted.shape[0], to_be_predicted.shape[1]+1))
+ output[:, :-1] = to_be_predicted
+
+ output[:, -1] = interpolated
+
+ return output
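+
+# Usage sketch (hypothetical arrays): propagate sparse predictions to the full cloud:
+#   sparse = cumulated_result['Area_5_office_1']    # last column = predicted label (-1 = none)
+#   full = np.load('Area_5_office_1_full.npy')      # last column = ground truth label
+#   result = knn_interpolation(sparse, full, k=5)   # adds predicted labels as a new last column
+# Each full-resolution point receives the majority label of its k nearest labeled points.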
+
+
+def calculate_scores(cumulated_result: Dict[str, np.ndarray], num_classes: int):
+ """
+ calculate evaluation metrics of the prediction
+    :param cumulated_result: last column = predicted label; last but one column = ground truth
+ :param num_classes: number of distinct classes of the dataset
+ :return: class_acc, class_iou, overall_acc
+ """
+ total_seen_from_class = [0 for _ in range(num_classes)]
+ total_pred_from_class = [0 for _ in range(num_classes)]
+ total_correct_from_class = [0 for _ in range(num_classes)]
+
+ for key in cumulated_result.keys():
+ for class_id in range(num_classes):
+ total_seen_from_class[class_id] += (cumulated_result[key][:, -2] == class_id).sum()
+ total_pred_from_class[class_id] += (cumulated_result[key][:, -1] == class_id).sum()
+
+ total_correct_from_class[class_id] += \
+ np.logical_and((cumulated_result[key][:, -2] == class_id),
+ (cumulated_result[key][:, -1] == class_id)).sum()
+
+ class_acc = [total_correct_from_class[i] / total_seen_from_class[i] for i in range(num_classes)]
+
+ class_iou = [total_correct_from_class[i] /
+ (total_seen_from_class[i] + total_pred_from_class[i] - total_correct_from_class[i])
+ for i in range(num_classes)]
+
+ overall_acc = sum(total_correct_from_class) / sum(total_seen_from_class)
+
+ return class_acc, class_iou, overall_acc
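+
+# For reference, with TP/FP/FN counted per class c, the scores above are:
+#   class_acc[c] = TP_c / (TP_c + FN_c)          (total_correct / total_seen)
+#   class_iou[c] = TP_c / (TP_c + FN_c + FP_c)   (correct / (seen + predicted - correct))
+# and overall_acc = sum(TP) / sum(total_seen) over all classes.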
+
+
+def save_pc_as_obj(cumulated_result: Dict[str, np.ndarray], label_colors: np.ndarray, save_dir: str):
+ """
+ save pointclouds as obj files for later inspection with meshlab
+    :param cumulated_result: last column = predicted label; last but one column = ground truth
+ :param label_colors: npy array containing the color information for each label class
+ :param save_dir: directory to save obj point clouds
+ :return: None
+ """
+    pointclouds_path = os.path.join(save_dir, 'pointclouds')
+
+    if not os.path.exists(pointclouds_path):
+        os.makedirs(pointclouds_path)
+
+    for key in tqdm(cumulated_result.keys(), desc='Save obj point clouds to disk'):
+        # Save predicted point clouds as obj files for later inspection using meshlab
+        pointcloud = cumulated_result[key]
+        with open(f"{pointclouds_path}/{key}_pred.obj", 'w') as fout:
+            for j in range(pointcloud.shape[0]):
+                color = label_colors[pointcloud[j, -1].astype(int)]
+                # the decimal commas are kept from the original export
+                # (apparently a workaround for locale-sensitive meshlab builds)
+                fout.write(f"v {str(pointcloud[j, 0]).replace('.', ',')}"
+                           f" {str(pointcloud[j, 1]).replace('.', ',')}"
+                           f" {str(pointcloud[j, 2]).replace('.', ',')}"
+                           f" {color[0]} {color[1]} {color[2]}\n")
+
+
+def save_npy_results(cumulated_result: Dict[str, np.ndarray], save_dir: str):
+ """
+ save cumulated results to disk
+ :param cumulated_result: last column = predicted label; last but one column = ground truth
+ :param save_dir: directory to save npy arrays
+ :return: None
+ """
+    results_npy_path = os.path.join(save_dir, 'results_npy')
+
+    if not os.path.exists(results_npy_path):
+        os.makedirs(results_npy_path)
+
+    for key in tqdm(cumulated_result.keys(), desc='Save npy results to disk'):
+        np.save(f"{results_npy_path}/{key}", cumulated_result[key])
diff --git a/tools/lazy_decorator.py b/tools/lazy_decorator.py
new file mode 100644
index 0000000..ee64bb4
--- /dev/null
+++ b/tools/lazy_decorator.py
@@ -0,0 +1,42 @@
+"""
+Useful tool in order to access cached version of properties and functions which have to be executed just once
+(especially useful for building up the tensorflow computation graph)
+adapted from: https://stevenloria.com/lazy-properties/ and
+https://danijar.com/structuring-your-tensorflow-models/
+"""
+import functools
+
+
+def lazy_property(function):
+ """
+ caches the output of the property and just returns the value for next calls
+ :param function: property to be cached
+ :return: cached output of property
+ """
+ attribute = '_cache_' + function.__name__
+
+ @property
+ @functools.wraps(function)
+ def decorator(self):
+ if not hasattr(self, attribute):
+ setattr(self, attribute, function(self))
+ return getattr(self, attribute)
+
+ return decorator
+
+
+def lazy_function(function):
+ """
+ caches the output of the function and just returns the value for next calls
+ :param function: function to be cached
+ :return: cached output of function
+ """
+ attribute = '_cache_' + function.__name__
+
+ @functools.wraps(function)
+ def decorator(self):
+ if not hasattr(self, attribute):
+ setattr(self, attribute, function(self))
+ return getattr(self, attribute)
+
+ return decorator
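+
+# Usage sketch (hypothetical model class):
+#   class Model:
+#       def __init__(self, data):
+#           self.data = data
+#
+#       @lazy_property
+#       def prediction(self):
+#           return build_expensive_graph(self.data)  # executed only on first access
+# Later accesses return the cached attribute '_cache_prediction' instead of
+# rebuilding the graph; lazy_function does the same for no-argument methods.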
diff --git a/tools/meta/class_names.txt b/tools/meta/class_names.txt
new file mode 100644
index 0000000..ca1d178
--- /dev/null
+++ b/tools/meta/class_names.txt
@@ -0,0 +1,13 @@
+ceiling
+floor
+wall
+beam
+column
+window
+door
+table
+chair
+sofa
+bookcase
+board
+clutter
diff --git a/tools/prepare_s3dis.py b/tools/prepare_s3dis.py
new file mode 100644
index 0000000..dc576d3
--- /dev/null
+++ b/tools/prepare_s3dis.py
@@ -0,0 +1,75 @@
+"""
+adapted from https://github.com/charlesq34/pointnet
+"""
+
+import os
+import numpy as np
+import glob
+import argparse
+import tools
+from pathlib import Path
+from tqdm import tqdm
+
+
+def collect_point_label(anno_path, out_filename):
+    """ Convert original dataset files to a data_label file (each line is XYZRGBL).
+    Aggregates all points from each instance in the room.
+
+    Args:
+        anno_path: path to annotations. e.g. Area_1/office_2/Annotations/
+        out_filename: path to save collected points and labels (each line is XYZRGBL)
+    Returns:
+        None
+    Note:
+        the points are shifted before saving; the most negative point ends up at the origin.
+    """
+
+ g_classes = [x.rstrip() for x in open('meta/class_names.txt')]
+ g_class2label = {cls: i for i, cls in enumerate(g_classes)}
+
+ points_list = []
+
+ for f in glob.glob(os.path.join(anno_path, '*.txt')):
+ cls = os.path.basename(f).split('_')[0]
+        if cls not in g_classes:  # note: some rooms contain a misspelled 'staris' class; map unknown classes to clutter
+            cls = 'clutter'
+ points = np.loadtxt(f)
+ labels = np.ones((points.shape[0], 1)) * g_class2label[cls]
+ points_list.append(np.concatenate([points, labels], 1)) # Nx7
+
+ data_label = np.concatenate(points_list, 0)
+ xyz_min = np.amin(data_label, axis=0)[0:3]
+ data_label[:, 0:3] -= xyz_min
+ data_label = data_label.astype(dtype=np.float32)
+
+ output_folder = Path(os.path.dirname(out_filename))
+ output_folder.mkdir(parents=True, exist_ok=True)
+
+ np.save(out_filename, data_label)
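+
+# The saved npy holds one float32 row per point, [x, y, z, r, g, b, label],
+# where the label indexes into meta/class_names.txt (13 classes, clutter last).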
+
+
+def main(params):
+ anno_paths = [x[0] for x in os.walk(params.input_dir) if x[0].endswith('Annotations')]
+
+ # Note: there is an extra character in the v1.2 data in Area_5/hallway_6. It's fixed manually.
+ for anno_path in tqdm(anno_paths):
+ elements = anno_path.split('/')
+ out_filename = elements[-3] + '_' + elements[-2] + '.npy' # Area_1_hallway_1.npy
+ try:
+ collect_point_label(anno_path, os.path.join(params.output_dir, elements[-3], 'full_size', out_filename))
+ except Exception as e:
+ print(str(e))
+ print(out_filename)
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description='Convert original S3DIS dataset to the npy based file format used '
+                                                 'by our framework')
+
+ parser.add_argument('--input_dir', required=True, help='root directory of original data')
+ parser.add_argument('--output_dir', required=True, help='root directory of output npys')
+ params = parser.parse_args()
+
+ tools.pretty_print_arguments(params)
+
+ main(params)
diff --git a/tools/tf_util.py b/tools/tf_util.py
new file mode 100644
index 0000000..57935ba
--- /dev/null
+++ b/tools/tf_util.py
@@ -0,0 +1,625 @@
+""" Wrapper functions for TensorFlow layers.
+
+Author: Charles R. Qi
+Date: November 2016
+"""
+
+import tensorflow as tf
+
+
+def gru_seq_g(inputs, n_units, dropout, scope):
+    """
+    Run a GRU over the neighbourhood sequence of every point.
+    :param inputs: (BS*N, K, F) features of the K neighbours of each point
+    :param n_units: (F) size of GRU cell
+    :param dropout: bool, whether to apply output dropout to the cell
+    :param scope: variable scope name
+    :return: stacked GRU outputs, shape (BS*N, K, F)
+    """
+    with tf.variable_scope(scope):
+
+        # create gru cell
+        gru = tf.nn.rnn_cell.GRUCell(n_units, reuse=tf.AUTO_REUSE)
+        if dropout:
+            gru = tf.nn.rnn_cell.DropoutWrapper(gru, output_keep_prob=0.5)
+
+        # initialize the state with the point itself (the 0th neighbour)
+        input_ = inputs[:, 0, :]
+        state = input_
+        output, state = gru(input_, state)
+
+        outputs = []
+        for i in range(0, inputs.shape[1]):  # iterate over all neighbours
+            input_ = inputs[:, i, :]
+            output, state = gru(input_, state)  # shapes: (BS*N, F), (BS*N, F)
+            outputs.append(output)
+
+        outputs = tf.stack(outputs, axis=1)  # shape: (BS*N, K, F)
+        return outputs
+
+
+def _variable_on_cpu(name, shape, initializer, use_fp16=False):
+ """Helper to create a Variable stored on CPU memory.
+ Args:
+ name: name of the variable
+ shape: list of ints
+ initializer: initializer for Variable
+ Returns:
+ Variable Tensor
+ """
+ with tf.device('/cpu:0'):
+ dtype = tf.float16 if use_fp16 else tf.float32
+ var = tf.get_variable(name, shape, initializer=initializer, dtype=dtype)
+ return var
+
+
+def _variable_with_weight_decay(name, shape, stddev, wd, use_xavier=True):
+ """Helper to create an initialized Variable with weight decay.
+
+ Note that the Variable is initialized with a truncated normal distribution.
+ A weight decay is added only if one is specified.
+
+ Args:
+ name: name of the variable
+ shape: list of ints
+ stddev: standard deviation of a truncated Gaussian
+ wd: add L2Loss weight decay multiplied by this float. If None, weight
+ decay is not added for this Variable.
+ use_xavier: bool, whether to use xavier initializer
+
+ Returns:
+ Variable Tensor
+ """
+ if use_xavier:
+ initializer = tf.contrib.layers.xavier_initializer()
+ else:
+ initializer = tf.truncated_normal_initializer(stddev=stddev)
+ var = _variable_on_cpu(name, shape, initializer)
+ if wd is not None:
+ weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss')
+ tf.add_to_collection('losses', weight_decay)
+ return var
+
+
+def consolidation_unit(inputs,
+                       size,
+                       scope,
+                       bn=False,
+                       bn_decay=None,
+                       is_training=None):
+    """ Consolidation unit: a 1x1 convolution per point, a max-pool over the point
+    dimension to obtain a block-global feature, and a concatenation of the tiled
+    global feature back onto every point (output has 2*size channels).
+    """
+    with tf.variable_scope(scope):
+ net = conv2d(inputs, size, [1, 1], padding='VALID', stride=[1, 1], bn=bn, is_training=is_training,
+ scope=scope + '/conv', bn_decay=bn_decay)
+
+ net_pooled = tf.reduce_max(net, axis=1, keep_dims=True, name=scope + '/global_feature')
+ net_repeated = tf.tile(net_pooled, [1, tf.shape(net)[1], 1, 1], name='repeat')
+ net = tf.concat(values=[net, net_repeated], axis=3) # put net and global features next to each other
+
+ return net
+
+
+def conv2d(inputs,
+ num_output_channels,
+ kernel_size,
+ scope,
+ stride=[1, 1],
+ padding='SAME',
+ use_xavier=True,
+ stddev=1e-3,
+ weight_decay=0.0,
+ activation_fn=tf.nn.relu,
+ bn=False,
+ bn_decay=None,
+ is_training=None):
+ """ 2D convolution with non-linear operation.
+
+ Args:
+ inputs: 4-D tensor variable BxHxWxC
+ num_output_channels: int
+ kernel_size: a list of 2 ints
+ scope: string
+ stride: a list of 2 ints
+ padding: 'SAME' or 'VALID'
+ use_xavier: bool, use xavier_initializer if true
+ stddev: float, stddev for truncated_normal init
+ weight_decay: float
+ activation_fn: function
+ bn: bool, whether to use batch norm
+ bn_decay: float or float tensor variable in [0,1]
+ is_training: bool Tensor variable
+
+ Returns:
+ Variable tensor
+ """
+ with tf.variable_scope(scope) as sc:
+ kernel_h, kernel_w = kernel_size
+ num_in_channels = inputs.get_shape()[-1].value
+ kernel_shape = [kernel_h, kernel_w,
+ num_in_channels, num_output_channels]
+ kernel = _variable_with_weight_decay('weights',
+ shape=kernel_shape,
+ use_xavier=use_xavier,
+ stddev=stddev,
+ wd=weight_decay)
+ stride_h, stride_w = stride
+ outputs = tf.nn.conv2d(inputs, kernel,
+ [1, stride_h, stride_w, 1],
+ padding=padding)
+ biases = _variable_on_cpu('biases', [num_output_channels],
+ tf.constant_initializer(0.0))
+ outputs = tf.nn.bias_add(outputs, biases)
+
+ if bn:
+ outputs = batch_norm_for_conv2d(outputs, is_training,
+ bn_decay=bn_decay, scope='bn')
+
+ if activation_fn is not None:
+ outputs = activation_fn(outputs)
+ return outputs
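+
+# Usage sketch (hypothetical tensors): a 1x1 "pointwise" convolution,
+# as used by consolidation_unit above:
+#   net = conv2d(point_features, 64, [1, 1], padding='VALID', stride=[1, 1],
+#                bn=True, is_training=is_training_pl, scope='conv1', bn_decay=bn_decay)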
+
+
+def conv2d_transpose(inputs,
+ num_output_channels,
+ kernel_size,
+ scope,
+ stride=[1, 1],
+ padding='SAME',
+ use_xavier=True,
+ stddev=1e-3,
+ weight_decay=0.0,
+ activation_fn=tf.nn.relu,
+ bn=False,
+ bn_decay=None,
+ is_training=None):
+ """ 2D convolution transpose with non-linear operation.
+
+ Args:
+ inputs: 4-D tensor variable BxHxWxC
+ num_output_channels: int
+ kernel_size: a list of 2 ints
+ scope: string
+ stride: a list of 2 ints
+ padding: 'SAME' or 'VALID'
+ use_xavier: bool, use xavier_initializer if true
+ stddev: float, stddev for truncated_normal init
+ weight_decay: float
+ activation_fn: function
+ bn: bool, whether to use batch norm
+ bn_decay: float or float tensor variable in [0,1]
+ is_training: bool Tensor variable
+
+ Returns:
+ Variable tensor
+
+    Note: conv2d(conv2d_transpose(a, num_out, ksize, stride), a.shape[-1], ksize, stride) has the same shape as a
+ """
+ with tf.variable_scope(scope) as sc:
+ kernel_h, kernel_w = kernel_size
+ num_in_channels = inputs.get_shape()[-1].value
+ kernel_shape = [kernel_h, kernel_w,
+                        num_output_channels, num_in_channels]  # reversed compared to conv2d
+ kernel = _variable_with_weight_decay('weights',
+ shape=kernel_shape,
+ use_xavier=use_xavier,
+ stddev=stddev,
+ wd=weight_decay)
+ stride_h, stride_w = stride
+
+ # from slim.convolution2d_transpose
+ def get_deconv_dim(dim_size, stride_size, kernel_size, padding):
+ dim_size *= stride_size
+
+ if padding == 'VALID' and dim_size is not None:
+ dim_size += max(kernel_size - stride_size, 0)
+ return dim_size
+
+        # calculate output shape
+ batch_size = inputs.get_shape()[0].value
+ height = inputs.get_shape()[1].value
+ width = inputs.get_shape()[2].value
+ out_height = get_deconv_dim(height, stride_h, kernel_h, padding)
+ out_width = get_deconv_dim(width, stride_w, kernel_w, padding)
+ output_shape = [batch_size, out_height, out_width, num_output_channels]
+
+ outputs = tf.nn.conv2d_transpose(inputs, kernel, output_shape,
+ [1, stride_h, stride_w, 1],
+ padding=padding)
+ biases = _variable_on_cpu('biases', [num_output_channels],
+ tf.constant_initializer(0.0))
+ outputs = tf.nn.bias_add(outputs, biases)
+
+ if bn:
+ outputs = batch_norm_for_conv2d(outputs, is_training,
+ bn_decay=bn_decay, scope='bn')
+
+ if activation_fn is not None:
+ outputs = activation_fn(outputs)
+ return outputs
+
+
+def gru_seq(inputs, n_units, batch_size, TIME, dropout, scope):
+    with tf.variable_scope(scope):
+
+        gru = tf.contrib.rnn.GRUCell(n_units)
+        if dropout:
+            gru = tf.nn.rnn_cell.DropoutWrapper(gru, output_keep_prob=0.5)
+
+        # first pass over the sequence warms up the cell state
+        input_ = inputs[:, 0, :]
+        state = input_
+        output, state = gru(input_, state)
+
+        for i in range(1, inputs.shape[1]):
+            input_ = inputs[:, i, :]
+            output, state = gru(input_, state)
+
+        # second pass collects one output per time step
+        output, state = gru(inputs[:, 0, :], state)
+        outputs = output
+        for i in range(1, inputs.shape[1]):
+            output, state = gru(inputs[:, i, :], state)
+            outputs = tf.concat([outputs, output], axis=1)
+
+        output = tf.reshape(outputs, (batch_size, TIME, n_units))
+
+        return output
+
+
+def gru_noseq(inputs,
+              n_units,
+              num_layers,
+              batch_size, TIME, dropout, scope):
+    with tf.variable_scope(scope):
+
+        gru = tf.contrib.rnn.GRUCell(n_units)
+        if dropout:
+            gru = tf.nn.rnn_cell.DropoutWrapper(gru, output_keep_prob=0.5)
+
+        # run the GRU over the whole sequence, keeping only the final output
+        input_ = inputs[:, 0, :]
+        state = input_
+        output, state = gru(input_, state)
+        for i in range(1, inputs.shape[1]):
+            input_ = inputs[:, i, :]
+            output, state = gru(input_, state)
+
+        # "no sequence" variant: repeat the final output once per time step
+        outputs = output
+        for i in range(1, inputs.shape[1]):
+            outputs = tf.concat([outputs, output], axis=1)
+
+        # note: integer division so the reshape target is a valid static shape
+        output = tf.reshape(outputs, (batch_size // TIME, TIME, n_units))
+
+        return output
+
+
+def conv3d(inputs,
+ num_output_channels,
+ kernel_size,
+ scope,
+ stride=[1, 1, 1],
+ padding='SAME',
+ use_xavier=True,
+ stddev=1e-3,
+ weight_decay=0.0,
+ activation_fn=tf.nn.relu,
+ bn=False,
+ bn_decay=None,
+ is_training=None):
+ """ 3D convolution with non-linear operation.
+
+ Args:
+ inputs: 5-D tensor variable BxDxHxWxC
+ num_output_channels: int
+ kernel_size: a list of 3 ints
+ scope: string
+ stride: a list of 3 ints
+ padding: 'SAME' or 'VALID'
+ use_xavier: bool, use xavier_initializer if true
+ stddev: float, stddev for truncated_normal init
+ weight_decay: float
+ activation_fn: function
+ bn: bool, whether to use batch norm
+ bn_decay: float or float tensor variable in [0,1]
+ is_training: bool Tensor variable
+
+ Returns:
+ Variable tensor
+ """
+ with tf.variable_scope(scope) as sc:
+ kernel_d, kernel_h, kernel_w = kernel_size
+ num_in_channels = inputs.get_shape()[-1].value
+ kernel_shape = [kernel_d, kernel_h, kernel_w,
+ num_in_channels, num_output_channels]
+ kernel = _variable_with_weight_decay('weights',
+ shape=kernel_shape,
+ use_xavier=use_xavier,
+ stddev=stddev,
+ wd=weight_decay)
+ stride_d, stride_h, stride_w = stride
+ outputs = tf.nn.conv3d(inputs, kernel,
+ [1, stride_d, stride_h, stride_w, 1],
+ padding=padding)
+ biases = _variable_on_cpu('biases', [num_output_channels],
+ tf.constant_initializer(0.0))
+ outputs = tf.nn.bias_add(outputs, biases)
+
+ if bn:
+ outputs = batch_norm_for_conv3d(outputs, is_training,
+ bn_decay=bn_decay, scope='bn')
+
+ if activation_fn is not None:
+ outputs = activation_fn(outputs)
+ return outputs
+
+
+def fully_connected(inputs,
+ num_outputs,
+ scope,
+ use_xavier=True,
+ stddev=1e-3,
+ weight_decay=0.0,
+ activation_fn=tf.nn.relu,
+ bn=False,
+ bn_decay=None,
+ is_training=None):
+ """ Fully connected layer with non-linear operation.
+
+ Args:
+ inputs: 2-D tensor BxN
+ num_outputs: int
+
+ Returns:
+ Variable tensor of size B x num_outputs.
+ """
+ with tf.variable_scope(scope) as sc:
+ num_input_units = inputs.get_shape()[-1].value
+ weights = _variable_with_weight_decay('weights',
+ shape=[num_input_units, num_outputs],
+ use_xavier=use_xavier,
+ stddev=stddev,
+ wd=weight_decay)
+ outputs = tf.matmul(inputs, weights)
+ biases = _variable_on_cpu('biases', [num_outputs],
+ tf.constant_initializer(0.0))
+ outputs = tf.nn.bias_add(outputs, biases)
+
+ if bn:
+ outputs = batch_norm_for_fc(outputs, is_training, bn_decay, 'bn')
+
+ if activation_fn is not None:
+ outputs = activation_fn(outputs)
+ return outputs
+
+
+def max_pool2d(inputs,
+ kernel_size,
+ scope,
+ stride=[2, 2],
+ padding='VALID'):
+ """ 2D max pooling.
+
+ Args:
+ inputs: 4-D tensor BxHxWxC
+ kernel_size: a list of 2 ints
+ stride: a list of 2 ints
+
+ Returns:
+ Variable tensor
+ """
+ with tf.variable_scope(scope) as sc:
+ kernel_h, kernel_w = kernel_size
+ stride_h, stride_w = stride
+ outputs = tf.nn.max_pool(inputs,
+ ksize=[1, kernel_h, kernel_w, 1],
+ strides=[1, stride_h, stride_w, 1],
+ padding=padding,
+ name=sc.name)
+ return outputs
+
+def avg_pool2d(inputs,
+ kernel_size,
+ scope,
+ stride=[2, 2],
+ padding='VALID'):
+ """ 2D avg pooling.
+
+ Args:
+ inputs: 4-D tensor BxHxWxC
+ kernel_size: a list of 2 ints
+ stride: a list of 2 ints
+
+ Returns:
+ Variable tensor
+ """
+ with tf.variable_scope(scope) as sc:
+ kernel_h, kernel_w = kernel_size
+ stride_h, stride_w = stride
+ outputs = tf.nn.avg_pool(inputs,
+ ksize=[1, kernel_h, kernel_w, 1],
+ strides=[1, stride_h, stride_w, 1],
+ padding=padding,
+ name=sc.name)
+ return outputs
+
+
+def max_pool3d(inputs,
+ kernel_size,
+ scope,
+ stride=[2, 2, 2],
+ padding='VALID'):
+ """ 3D max pooling.
+
+ Args:
+ inputs: 5-D tensor BxDxHxWxC
+ kernel_size: a list of 3 ints
+ stride: a list of 3 ints
+
+ Returns:
+ Variable tensor
+ """
+ with tf.variable_scope(scope) as sc:
+ kernel_d, kernel_h, kernel_w = kernel_size
+ stride_d, stride_h, stride_w = stride
+ outputs = tf.nn.max_pool3d(inputs,
+ ksize=[1, kernel_d, kernel_h, kernel_w, 1],
+ strides=[1, stride_d, stride_h, stride_w, 1],
+ padding=padding,
+ name=sc.name)
+ return outputs
+
+def avg_pool3d(inputs,
+ kernel_size,
+ scope,
+ stride=[2, 2, 2],
+ padding='VALID'):
+ """ 3D avg pooling.
+
+ Args:
+ inputs: 5-D tensor BxDxHxWxC
+ kernel_size: a list of 3 ints
+ stride: a list of 3 ints
+
+ Returns:
+ Variable tensor
+ """
+ with tf.variable_scope(scope) as sc:
+ kernel_d, kernel_h, kernel_w = kernel_size
+ stride_d, stride_h, stride_w = stride
+ outputs = tf.nn.avg_pool3d(inputs,
+ ksize=[1, kernel_d, kernel_h, kernel_w, 1],
+ strides=[1, stride_d, stride_h, stride_w, 1],
+ padding=padding,
+ name=sc.name)
+ return outputs
+
+
+def batch_norm_template(inputs, is_training, scope, moments_dims, bn_decay):
+ """ Batch normalization on convolutional maps and beyond...
+ Ref.: http://stackoverflow.com/questions/33949786/how-could-i-use-batch-normalization-in-tensorflow
+
+ Args:
+ inputs: Tensor, k-D input ... x C could be BC or BHWC or BDHWC
+        is_training: boolean tf.Variable, true indicates training phase
+        scope: string, variable scope
+        moments_dims: a list of ints, indicating dimensions for moments calculation
+        bn_decay: float or float tensor variable, controlling moving average weight
+ Return:
+ normed: batch-normalized maps
+ """
+ with tf.variable_scope(scope) as sc:
+ num_channels = inputs.get_shape()[-1].value
+ beta = tf.Variable(tf.constant(0.0, shape=[num_channels]),
+ name='beta', trainable=True)
+ gamma = tf.Variable(tf.constant(1.0, shape=[num_channels]),
+ name='gamma', trainable=True)
+ batch_mean, batch_var = tf.nn.moments(inputs, moments_dims, name='moments')
+ decay = bn_decay if bn_decay is not None else 0.9
+ ema = tf.train.ExponentialMovingAverage(decay=decay)
+ # Operator that maintains moving averages of variables.
+ ema_apply_op = tf.cond(is_training,
+ lambda: ema.apply([batch_mean, batch_var]),
+ lambda: tf.no_op())
+
+ # Update moving average and return current batch's avg and var.
+ def mean_var_with_update():
+ with tf.control_dependencies([ema_apply_op]):
+ return tf.identity(batch_mean), tf.identity(batch_var)
+
+ # ema.average returns the Variable holding the average of var.
+ mean, var = tf.cond(is_training,
+ mean_var_with_update,
+ lambda: (ema.average(batch_mean), ema.average(batch_var)))
+ normed = tf.nn.batch_normalization(inputs, mean, var, beta, gamma, 1e-3)
+ return normed
+
+
+def batch_norm_for_fc(inputs, is_training, bn_decay, scope):
+ """ Batch normalization on FC data.
+
+ Args:
+ inputs: Tensor, 2D BxC input
+        is_training: boolean tf.Variable, true indicates training phase
+        bn_decay: float or float tensor variable, controlling moving average weight
+ scope: string, variable scope
+ Return:
+ normed: batch-normalized maps
+ """
+ return batch_norm_template(inputs, is_training, scope, [0,], bn_decay)
+
+
+def batch_norm_for_conv1d(inputs, is_training, bn_decay, scope):
+ """ Batch normalization on 1D convolutional maps.
+
+ Args:
+ inputs: Tensor, 3D BLC input maps
+        is_training: boolean tf.Variable, true indicates training phase
+        bn_decay: float or float tensor variable, controlling moving average weight
+ scope: string, variable scope
+ Return:
+ normed: batch-normalized maps
+ """
+ return batch_norm_template(inputs, is_training, scope, [0,1], bn_decay)
+
+
+def batch_norm_for_conv2d(inputs, is_training, bn_decay, scope):
+ """ Batch normalization on 2D convolutional maps.
+
+ Args:
+ inputs: Tensor, 4D BHWC input maps
+        is_training: boolean tf.Variable, true indicates training phase
+        bn_decay: float or float tensor variable, controlling moving average weight
+ scope: string, variable scope
+ Return:
+ normed: batch-normalized maps
+ """
+ return batch_norm_template(inputs, is_training, scope, [0,1,2], bn_decay)
+
+
+def batch_norm_for_conv3d(inputs, is_training, bn_decay, scope):
+ """ Batch normalization on 3D convolutional maps.
+
+ Args:
+ inputs: Tensor, 5D BDHWC input maps
+        is_training: boolean tf.Variable, true indicates training phase
+        bn_decay: float or float tensor variable, controlling moving average weight
+ scope: string, variable scope
+ Return:
+ normed: batch-normalized maps
+ """
+ return batch_norm_template(inputs, is_training, scope, [0,1,2,3], bn_decay)
+
+
+def dropout(inputs,
+ is_training,
+ scope,
+ keep_prob=0.5,
+ noise_shape=None):
+ """ Dropout layer.
+
+ Args:
+ inputs: tensor
+ is_training: boolean tf.Variable
+ scope: string
+ keep_prob: float in [0,1]
+ noise_shape: list of ints
+
+ Returns:
+ tensor variable
+ """
+ with tf.variable_scope(scope) as sc:
+ outputs = tf.cond(is_training,
+ lambda: tf.nn.dropout(inputs, keep_prob, noise_shape),
+ lambda: inputs)
+ return outputs
diff --git a/tools/tools.py b/tools/tools.py
new file mode 100644
index 0000000..512b867
--- /dev/null
+++ b/tools/tools.py
@@ -0,0 +1,127 @@
+"""
+various helper functions to make life easier
+"""
+
+import time
+import subprocess
+import os
+from pathlib import Path
+import sys
+from datetime import datetime
+import logging
+import re
+import string
+import random
+
+
+def id_generator(size=6, chars=string.ascii_uppercase + string.digits):
+ return ''.join(random.choice(chars) for _ in range(size))
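+
+# e.g. id_generator() could return a string like 'R7K2QX' (six random characters from A-Z and 0-9)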
+
+
+def us2mc(x):
+ """
+ underscore to mixed-case notation
+ from https://www.safaribooksonline.com/library/view/python-cookbook/0596001673/ch03s16.html
+ """
+ return re.sub(r'_([a-z])', lambda m: (m.group(1).upper()), x)
+
+
+def us2cw(x):
+ """
+ underscore to capwords notation
+ from https://www.safaribooksonline.com/library/view/python-cookbook/0596001673/ch03s16.html
+ """
+ s = us2mc(x)
+ return s[0].upper()+s[1:]
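+
+# Examples: us2mc('point_cloud') -> 'pointCloud', us2cw('point_cloud') -> 'PointCloud'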
+
+
+def import_class(package_path, class_name):
+ """
+ dynamic import of a class from a given package
+ :param package_path: path to the package
+ :param class_name: class to be dynamically loaded
+ :return: dynamically loaded class
+ """
+ try:
+ logging.info(f"Loading {package_path}.{class_name} ...")
+ module = __import__(f"{package_path}.{class_name}", fromlist=[class_name])
+ return getattr(module, us2cw(class_name))
+ except ModuleNotFoundError as exc:
+ logging.error(f"{package_path}.{class_name} could not be found")
+ exit(1)
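+
+# Usage sketch (hypothetical module layout): a file dataset/general_dataset.py
+# defining a class GeneralDataset could be loaded via:
+#   dataset_class = import_class('dataset', 'general_dataset')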
+
+
+def setup_logger():
+ """
+    set up the logging mechanism: log messages are written both to a time-stamped log file and to the terminal
+ :return: directory path in which logs are saved
+ """
+ directory_path = f"logs/{datetime.now():%Y-%m-%d@%H:%M:%S}_{id_generator()}"
+
+ path = Path(directory_path)
+ path.mkdir(parents=True, exist_ok=True)
+
+ log_format = logging.Formatter("%(asctime)s [%(threadName)-12.12s] [%(levelname)-5.5s] [%(pathname)s:%(lineno)04d] %(message)s")
+ logger = logging.getLogger()
+ logger.setLevel(logging.DEBUG)
+
+ file_handler = logging.FileHandler(f"{directory_path}/{datetime.now():%Y-%m-%d@%H:%M:%S}_{id_generator()}.log")
+ file_handler.setFormatter(log_format)
+ logger.addHandler(file_handler)
+
+    stream_handler = logging.StreamHandler(sys.stdout)
+    stream_handler.setFormatter(log_format)
+    logger.addHandler(stream_handler)
+
+ logging.info('START LOGGING')
+ logging.info(f"Current Git Version: {git_version()}")
+
+ return directory_path
+
+
+def pretty_print_arguments(args):
+ """
+    print a nicely formatted list of passed arguments
+ :param args: arguments passed to the program via terminal
+ :return: None
+ """
+ longest_key = max([len(key) for key in vars(args)])
+
+ print('Program was launched with the following arguments:')
+
+ for key, item in vars(args).items():
+ print("~ {0:{s}} \t {1}".format(key, item, s=longest_key))
+
+ print('')
+ # Wait a bit until program execution continues
+ time.sleep(0.1)
+
+
+def git_version():
+ """
+    return the git revision so that it can also be logged to keep track of results
+ :return: git revision hash
+ """
+ def _minimal_ext_cmd(cmd):
+ # construct minimal environment
+ env = {}
+ for k in ['SYSTEMROOT', 'PATH']:
+ v = os.environ.get(k)
+ if v is not None:
+ env[k] = v
+ # LANGUAGE is used on win32
+ env['LANGUAGE'] = 'C'
+ env['LANG'] = 'C'
+ env['LC_ALL'] = 'C'
+ out = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env).communicate()[0]
+ return out
+
+ try:
+ out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD'])
+ git_revision = out.strip().decode('ascii')
+ except OSError:
+ git_revision = "Unknown"
+
+ return git_revision
diff --git a/tools/viz.py b/tools/viz.py
new file mode 100644
index 0000000..38c7e15
--- /dev/null
+++ b/tools/viz.py
@@ -0,0 +1,215 @@
+import vtk
+import numpy as np
+import random
+
+print('Using', vtk.vtkVersion.GetVTKSourceVersion())
+
+
+class MyInteractorStyle(vtk.vtkInteractorStyleTrackballCamera):
+ def __init__(self, parent, pointcloud):
+ self.parent = parent
+ self.pointcloud = pointcloud
+ self.AddObserver("KeyPressEvent", self.keyPressEvent)
+
+ def keyPressEvent(self, obj, event):
+ key = self.parent.GetKeySym()
+ if key == '+':
+ point_size = self.pointcloud.vtkActor.GetProperty().GetPointSize()
+ self.pointcloud.vtkActor.GetProperty().SetPointSize(point_size + 1)
+ print(str(point_size) + " " + key)
+ return
+
+
+class VtkPointCloud:
+
+    def __init__(self, point_size=18, maxNumPoints=int(1e8)):
+ self.maxNumPoints = maxNumPoints
+ self.vtkPolyData = vtk.vtkPolyData()
+ self.clear_points()
+
+ self.colors = vtk.vtkUnsignedCharArray()
+ self.colors.SetNumberOfComponents(3)
+ self.colors.SetName("Colors")
+
+ mapper = vtk.vtkPolyDataMapper()
+ mapper.SetInputData(self.vtkPolyData)
+
+ self.vtkActor = vtk.vtkActor()
+ self.vtkActor.SetMapper(mapper)
+ self.vtkActor.GetProperty().SetPointSize(point_size)
+
+ def add_point(self, point, color):
+ if self.vtkPoints.GetNumberOfPoints() < self.maxNumPoints:
+ pointId = self.vtkPoints.InsertNextPoint(point[:])
+ self.colors.InsertNextTuple(color)
+ self.vtkDepth.InsertNextValue(point[2])
+ self.vtkCells.InsertNextCell(1)
+ self.vtkCells.InsertCellPoint(pointId)
+        else:
+            print("VIZ: Reached max number of points!")
+            # overwrite a random existing point instead of growing the cloud further
+            r = random.randint(0, self.maxNumPoints - 1)
+            self.vtkPoints.SetPoint(r, point[:])
+ self.vtkPolyData.GetPointData().SetScalars(self.colors)
+ self.vtkCells.Modified()
+ self.vtkPoints.Modified()
+ self.vtkDepth.Modified()
+
+ def clear_points(self):
+ self.vtkPoints = vtk.vtkPoints()
+ self.vtkCells = vtk.vtkCellArray()
+ self.vtkDepth = vtk.vtkDoubleArray()
+ self.vtkDepth.SetName('DepthArray')
+ self.vtkPolyData.SetPoints(self.vtkPoints)
+ self.vtkPolyData.SetVerts(self.vtkCells)
+ self.vtkPolyData.GetPointData().SetScalars(self.vtkDepth)
+ self.vtkPolyData.GetPointData().SetActiveScalars('DepthArray')
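+
+# Usage sketch (hypothetical renderer): build a cloud point by point and display its actor:
+#   pc = VtkPointCloud(point_size=5)
+#   pc.add_point((0.0, 0.0, 0.0), (255, 0, 0))
+#   renderer.AddActor(pc.vtkActor)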
+
+
+def getActorCircle(radius_inner=100, radius_outer=99, color=(1, 0, 0)):
+    """Create a ring-shaped disk actor, rotated by 90 degrees about the x-axis."""
+ # create source
+ source = vtk.vtkDiskSource()
+ source.SetInnerRadius(radius_inner)
+ source.SetOuterRadius(radius_outer)
+ source.SetRadialResolution(100)
+ source.SetCircumferentialResolution(100)
+
+ # Transformer
+ transform = vtk.vtkTransform()
+ transform.RotateWXYZ(90, 1, 0, 0)
+ transformFilter = vtk.vtkTransformPolyDataFilter()
+ transformFilter.SetTransform(transform)
+ transformFilter.SetInputConnection(source.GetOutputPort())
+ transformFilter.Update()
+
+ # mapper
+ mapper = vtk.vtkPolyDataMapper()
+ mapper.SetInputConnection(transformFilter.GetOutputPort())
+
+ # actor
+ actor = vtk.vtkActor()
+ actor.GetProperty().SetColor(color)
+ actor.SetMapper(mapper)
+
+ return actor
+
+
+def show_pointclouds(points, colors, text=None, title="Default", png_path="", interactive=True):
+    """
+    Show multiple point clouds specified as lists. First clouds at the bottom.
+    :param points: list of pointclouds, item: numpy (N x 3) XYZ
+    :param colors: list of corresponding colors, item: numpy (N x 3) RGB [0..255]
+    :param title: window title
+    :param text: text per point cloud
+    :param png_path: where to save png image
+    :param interactive: whether to display the window; useful if you only want to take a screenshot
+    :return: nothing
+    """
+
+    # avoid the shared mutable default argument
+    text = text if text is not None else []
+
+    # make sure pointclouds is a list
+    assert isinstance(points, list), \
+        "Pointclouds argument must be a list"
+
+    # make sure colors is a list
+    assert isinstance(colors, list), \
+        "Colors argument must be a list"
+
+    # make sure number of pointclouds and colors are the same
+    assert len(points) == len(colors), \
+        "Number of pointclouds (%d) is different from number of colors (%d)" % (len(points), len(colors))
+
+ while len(text)