This package provides classes to work with Keras (as included in TensorFlow) that generate batches of frames from video files.
It is useful for feeding video input (sequences of frames) to a TimeDistributed layer, typically followed by a GRU or LSTM.
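For context, here is a minimal sketch of the kind of model such frame sequences feed into; the frame count, image size, layer widths, and number of classes below are placeholder assumptions, not values defined by this package.

```python
from tensorflow.keras import layers, models

# Each sample is a sequence of frames shaped
# (nb_frames, height, width, channels); the numbers below are assumptions.
NB_FRAMES, HEIGHT, WIDTH, CHANNELS = 5, 112, 112, 3
NB_CLASSES = 3  # hypothetical number of video classes

model = models.Sequential([
    # TimeDistributed applies the same 2D convolution to every frame
    layers.TimeDistributed(
        layers.Conv2D(32, (3, 3), activation='relu'),
        input_shape=(NB_FRAMES, HEIGHT, WIDTH, CHANNELS)),
    layers.TimeDistributed(layers.GlobalAveragePooling2D()),
    # A recurrent layer (GRU or LSTM) then models the temporal dimension
    layers.GRU(64),
    layers.Dense(NB_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
```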
An example can be seen in the [integrated notebook](./Example of usage.ipynb). To see the example in a proper viewer, you can use that URL.
Requirements are:
- Python 3 (Python 2 will never be supported)
- OpenCV
- numpy
- Keras
- TensorFlow (other backends have not been tested)
If you want to build the package yourself, you also need:
- Sphinx to build the documentation
- setuptools
You can install the package via pip:

```bash
pip install keras-video-generators
```
If you want to build from sources, clone the repository, then:

```bash
python setup.py build
```
The package contains three generators that implement the `Sequence` interface, so they may be used with `model.fit_generator()` (see the sketch after this list):

- `VideoFrameGenerator` that takes the chosen number of frames from the entire video
- `SlidingFrameGenerator` that takes frames with a decay over the entire video or within a sequence time
- `OpticalFlowGenerator` that produces optical flow sequences from frames using different methods
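For instance, a minimal training sketch, assuming the package imports as `keras_video`, that the videos live under `videos/<classname>/*.avi` (a hypothetical layout), and that `model` is a compiled Keras model expecting sequences of 5 frames of 112x112 RGB images:

```python
from keras_video import VideoFrameGenerator

# One batch yields an array of shape
# (batch_size, nb_frames, height, width, channels) plus one-hot labels.
train_gen = VideoFrameGenerator(
    glob_pattern='videos/{classname}/*.avi',  # hypothetical directory layout
    nb_frames=5,
    batch_size=8,
    target_shape=(112, 112))

# Because the generator implements the Sequence interface,
# it can be passed straight to fit_generator (or model.fit on recent Keras).
model.fit_generator(train_gen, epochs=10)
```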
Each of these generators accepts parameters:

- `glob_pattern`, which must contain `{classname}`, e.g. `'./videos/{classname}/*.png'`; the `{classname}` part of the path is used to detect the classes
- `nb_frames`, the number of frames in the sequence
- `batch_size`
- `transformation`, which can be `None` or an `ImageDataGenerator` to apply data augmentation
- `use_frame_cache`, to use with caution: if set to `True`, the class keeps frames in memory (without augmentation), which requires a lot of memory
- and many more, see the class documentation
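As an illustration of how these parameters fit together (the paths and values here are placeholders, and the keyword names follow the class documentation), a sketch using frame-level augmentation:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from keras_video import VideoFrameGenerator

# Optional data augmentation used when `transformation` is set
data_aug = ImageDataGenerator(
    zoom_range=0.1,
    horizontal_flip=True,
    rotation_range=8)

gen = VideoFrameGenerator(
    glob_pattern='./videos/{classname}/*.png',  # {classname} gives the label
    nb_frames=5,
    batch_size=4,
    transformation=data_aug,   # or None to disable augmentation
    use_frame_cache=False)     # True keeps raw frames in RAM (memory hungry)
```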
Still to do:

- Complete the documentation
- Use Optical Flow on SlidingFrameGenerator