* Clearer requirements for each platform
* Refactoring of old plugins (Model_Original + Extract_Align) + Cleanups
* Adding GAN128
* Update GAN to v2
* Create instance_normalization.py
* Fix decoder output
* Revert "Fix decoder output"
This reverts commit 3a8ecb8957.
* Fix convert
* Enable all options except perceptual_loss by default
* Disable instance norm
* Update Model.py
* Update Trainer.py
* Match GAN128 to shaoanlu's latest v2
* Add first_order to GAN128
* Disable `use_perceptual_loss`
* Fix call to `self.first_order`
* Switch to average loss in output
* Constrain average to last 100 iterations
* Fix math, constrain average to intervals of 100
* Fix the averaging math again
* Remove the extra math and simplify the loss averaging (see the sketch below)
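The end state of those averaging commits is essentially a rolling mean over the last 100 loss values. A minimal sketch of that idea, assuming a plain training loop (the `print_loss` helper and variable names are hypothetical, not the trainer's actual API):

```python
from collections import deque

# Rolling window of the most recent 100 loss values per decoder.
# deque(maxlen=100) discards the oldest value automatically, which
# replaces the hand-rolled interval arithmetic mentioned above.
loss_a_window = deque(maxlen=100)
loss_b_window = deque(maxlen=100)

def print_loss(iteration, loss_a, loss_b):
    loss_a_window.append(loss_a)
    loss_b_window.append(loss_b)
    avg_a = sum(loss_a_window) / len(loss_a_window)
    avg_b = sum(loss_b_window) / len(loss_b_window)
    print("[{0}] loss_A: {1:.5f}, loss_B: {2:.5f}".format(iteration, avg_a, avg_b))
```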
* Add gan128 conversion
* Update convert.py
* Use non-warped images in masked preview
* Add K.set_learning_phase(1) to gan64
* Add K.set_learning_phase(1) to gan128
* Add missing keras import
* Use non-warped images in masked preview for gan128
* Exclude deleted faces from conversion
* --input-aligned-dir defaults to "{input_dir}/aligned"
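For illustration, that default boils down to a couple of lines; the helper name below is hypothetical:

```python
import os

# If --input-aligned-dir is not given, fall back to an "aligned" folder
# inside the input directory, as described above.
def resolve_aligned_dir(input_dir, input_aligned_dir=None):
    return input_aligned_dir or os.path.join(input_dir, "aligned")
```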
* Simplify map operation
* Port 'face_alignment' from PyTorch to Keras. It runs 2x faster, but initialization takes ~20 seconds.
2DFAN-4.h5 and mmod_human_face_detector.dat are included in lib\FaceLandmarksExtractor.
Fixed the dlib vs. TensorFlow conflict: dlib must run an op first, then the Keras model is loaded; otherwise a CUDA OOM error occurs.
If the CNN does not find a face location, it falls back to HOG (see the sketch after this item).
Removed the following check, because DetectedFace always has landmarks:
- if face.landmarks == None:
- print("Warning! landmarks not found. Switching to crop!")
- return cv2.resize(face.image, (size, size))
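The CNN-first, HOG-fallback detection described above roughly corresponds to the following sketch using dlib's public API (the file path and the `detect_faces` wrapper are illustrative, not the extractor's exact code):

```python
import dlib

# MMOD (CNN) detector: more accurate, tried first.
cnn_detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")
# Classic HOG detector: used as a fallback when the CNN finds nothing.
hog_detector = dlib.get_frontal_face_detector()

def detect_faces(image):
    faces = [detection.rect for detection in cnn_detector(image, 1)]
    if not faces:
        faces = list(hog_detector(image, 1))
    return faces
```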
* Enabled masked converter for GAN models
* Histogram matching, CLI option for perceptual loss
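Histogram matching here means adjusting the converted face's per-channel histogram to that of the target frame. A self-contained NumPy sketch of the standard CDF-matching technique (not necessarily the converter's exact implementation):

```python
import numpy as np

def match_histogram(source, template):
    """Return `source` with each channel's histogram matched to `template`."""
    matched = np.empty(source.shape, dtype=np.float64)
    for channel in range(source.shape[2]):
        src = source[..., channel].ravel()
        tmpl = template[..., channel].ravel()
        src_values, src_indices, src_counts = np.unique(
            src, return_inverse=True, return_counts=True)
        tmpl_values, tmpl_counts = np.unique(tmpl, return_counts=True)
        # Map source quantiles onto template quantiles (CDF matching).
        src_cdf = np.cumsum(src_counts) / src.size
        tmpl_cdf = np.cumsum(tmpl_counts) / tmpl.size
        mapped = np.interp(src_cdf, tmpl_cdf, tmpl_values)
        matched[..., channel] = mapped[src_indices].reshape(source.shape[:2])
    return matched.astype(source.dtype)
```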
* Fix init() positional args error
* Add backwards compatibility for aligned filenames
* Fix masked converter
* Remove GAN converters
* PyTorch and face-alignment
* Skip processed frames when extracting faces.
* Reset to master version
* Reset to master
* Added the --skip-existing argument to the Extract script. The default is to NOT skip already processed frames.
Added logic to write_alignments to append new alignments (preserving existing ones)
to the existing alignments file when the --skip-existing option is used.
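A sketch of that merge behavior, assuming the alignments file is a JSON mapping from frame name to face data (the on-disk format and function signature here are assumptions, not the script's exact implementation):

```python
import json
import os

def write_alignments(alignments_path, new_alignments, skip_existing=False):
    alignments = {}
    if skip_existing and os.path.exists(alignments_path):
        # Preserve previously extracted frames and only append new ones.
        with open(alignments_path, "r") as handle:
            alignments = json.load(handle)
    alignments.update(new_alignments)
    with open(alignments_path, "w") as handle:
        json.dump(alignments, handle)
```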
* Fixed exception for --skip-existing when using the convert script
* Sync with upstream
* Fixed error when using Convert script.
* Bug fix
* Merges alignments only if --skip-existing is used.
* Creates output dir when not found, even when using --skip-existing.
* Preparing GAN plugin
* Adding multithreading for extract
* Adding support for mmod human face detector
* Adding face filter argument
* Added an argument for the number of processes to the multiprocessing extractor.
Fixed the progress bar for multiprocessing.
* Added TIFF as an image type (compression artefacts hurt my feelings).
* Cleanup
* Making models available as plugins
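Loading models as plugins typically comes down to importing a module by name at runtime. A minimal sketch, assuming a `plugins` package layout (the names here are illustrative, not the repo's exact loader):

```python
import importlib

def load_model_plugin(name):
    # e.g. name == "Model_Original" imports plugins.Model_Original
    # and returns its Model class.
    module = importlib.import_module("plugins.{0}".format(name))
    return module.Model
```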
* Do not reload model on each image #39 + Adding FaceFilter #53
* Adding @lukaville's PR for #43 and #44 (possibly)
* Training done in a separate thread
* Better log for plugin load
* Adding a prefetch to train.py #49
(Note that we prefetch two batches of images due to the queue behavior; see the sketch after this item)
+ More compact logging with verbose info included
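A minimal sketch of that prefetch, assuming a callable that produces one training batch (the `load_batch` name is hypothetical). With `maxsize=1`, one batch sits in the queue while the producer holds a second, already-loaded batch blocked on `put()`, which gives the two-batch behavior noted above:

```python
import threading
from queue import Queue

def start_prefetch(load_batch):
    queue = Queue(maxsize=1)

    def producer():
        while True:
            # Blocks once the queue is full, keeping one extra batch in hand.
            queue.put(load_batch())

    threading.Thread(target=producer, daemon=True).start()
    return queue

# In the training loop:
#     batch = prefetch_queue.get()
```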
* Correction of the DirectoryProcessor signature
* Adding missing import
* Convert with parallel preprocessing of files
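One way to overlap CPU-bound preprocessing with the conversion itself is a worker pool feeding the main process, as in this sketch (`load_frame` and `convert_frame` are hypothetical stand-ins for the real functions):

```python
from multiprocessing import Pool

def convert_all(frame_paths, load_frame, convert_frame, processes=4):
    with Pool(processes=processes) as pool:
        # imap preserves frame order and overlaps loading with conversion.
        for frame in pool.imap(load_frame, frame_paths):
            convert_frame(frame)
```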
* Added a coverage variable for the trainer
Added the variable with a comment; feel free to expose it as an argument
* Corrections
* Modifying the preview and image normalization + corrections
* Cleanup