* Image rotator for extract and convert ready for testing
* Revert "Image rotator for extract and convert ready for testing"
This reverts commit bbeb19ef26.
Error in extract code
* Add image rotation support to detect more faces
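A rough sketch of how such a rotation retry can look; the angle list and the `detect_faces()` helper are illustrative assumptions, not the project's actual extractor code:

```python
# Illustrative sketch only: retry face detection on rotated copies of a frame.
import cv2


def rotate_image(image, angle):
    """Rotate an image about its centre, keeping the original canvas size."""
    height, width = image.shape[:2]
    matrix = cv2.getRotationMatrix2D((width / 2, height / 2), angle, 1.0)
    return cv2.warpAffine(image, matrix, (width, height))


def detect_with_rotation(image, detect_faces, angles=(0, 90, 180, 270)):
    """Try the original frame first, then rotated copies until faces appear."""
    for angle in angles:
        candidate = image if angle == 0 else rotate_image(image, angle)
        faces = detect_faces(candidate)
        if faces:
            return faces, angle  # the caller can map landmarks back by -angle
    return [], 0
```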
* Update convert.py
Amended to do a single check for rotation rather than checking twice. Performance gain is likely to be marginal to non-existent, but can't hurt.
* Update convert.py
remove type
* cli.py: Only output the message in verbose mode. convert.py: Only check the rotation amount once
* Changed command line flag to take arguments to ease future development
* Realigning with upstream/master
* Minor fix
* Add negative filters for face detection
When detecting faces that look very similar, the face recognition can
produce positive results for similar-looking people. This commit allows
the user to add multiple positive and negative reference images. The
face detection then calculates the distance to each reference image
and guesses which person is more likely using the k-nearest method.
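A minimal sketch of that k-nearest vote, assuming face encodings are numpy vectors compared by Euclidean distance; the names below are illustrative, not the project's actual FaceFilter API:

```python
# Illustrative sketch: classify a candidate face against positive and
# negative reference encodings with a simple k-nearest-neighbours vote.
import numpy as np


def is_wanted_face(encoding, positive_refs, negative_refs, k=3):
    labelled = [(np.linalg.norm(encoding - ref), True) for ref in positive_refs]
    labelled += [(np.linalg.norm(encoding - ref), False) for ref in negative_refs]
    labelled.sort(key=lambda pair: pair[0])          # nearest references first
    nearest = labelled[:k]
    positive_votes = sum(1 for _, is_positive in nearest if is_positive)
    return positive_votes > len(nearest) / 2
```

With no negative references the vote in this sketch is always positive, which lines up with the next commit skipping the k-nearest step in that case.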
* Do not calculate knn if no negative images are given
* Clean up output
* Clearer requirements for each platform
* Refactoring of old plugins (Model_Original + Extract_Align) + Cleanups
* Adding GAN128
* Update GAN to v2
* Create instance_normalization.py
* Fix decoder output
* Revert "Fix decoder output"
This reverts commit 3a8ecb8957.
* Fix convert
* Enable all options except perceptual_loss by default
* Disable instance norm
* Update Model.py
* Update Trainer.py
* Match GAN128 to shaoanlu's latest v2
* Add first_order to GAN128
* Disable `use_perceptual_loss`
* Fix call to `self.first_order`
* Switch to average loss in output
* Constrain average to last 100 iterations
* Fix math, constrain average to intervals of 100
* Fix math averaging again
* Remove math and simplify this damn averaging
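The end result amounts to a rolling window over recent losses; a sketch assuming one loss value per iteration, with illustrative names:

```python
# Illustrative sketch: keep only the last 100 loss values and report their mean.
from collections import deque

recent_losses = deque(maxlen=100)  # older values fall off automatically


def record_loss(value):
    recent_losses.append(value)
    return sum(recent_losses) / len(recent_losses)  # running average to print
```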
* Add gan128 conversion
* Update convert.py
* Use non-warped images in masked preview
* Add K.set_learning_phase(1) to gan64
* Add K.set_learning_phase(1) to gan128
* Add missing keras import
* Use non-warped images in masked preview for gan128
* Exclude deleted faces from conversion
* --input-aligned-dir defaults to "{input_dir}/aligned"
* Simplify map operation
* Port 'face_alignment' from PyTorch to Keras. It runs 2x faster, but initialization takes ~20 seconds.
2DFAN-4.h5 and mmod_human_face_detector.dat are included in lib\FaceLandmarksExtractor.
Fixed the dlib vs tensorflow conflict: dlib must run its op first, then the Keras model is loaded, otherwise a CUDA OOM error occurs.
If the face location is not found by the CNN, it tries to find it with HOG.
removed this:
- if face.landmarks == None:
- print("Warning! landmarks not found. Switching to crop!")
- return cv2.resize(face.image, (size, size))
because DetectedFace always has landmarks
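The CNN-first, HOG-fallback detection order can be sketched with dlib as below; the model path follows the commit message and the upsampling argument is an assumption, not the project's exact code:

```python
# Illustrative sketch: prefer the CNN (MMOD) detector, fall back to HOG.
# Per the note above, dlib should run before the Keras model is loaded to
# avoid the CUDA OOM error.
import dlib

cnn_detector = dlib.cnn_face_detection_model_v1(
    "lib/FaceLandmarksExtractor/mmod_human_face_detector.dat")
hog_detector = dlib.get_frontal_face_detector()


def detect_face_locations(image, upsample=1):
    detections = cnn_detector(image, upsample)
    if detections:
        return [d.rect for d in detections]  # MMOD detections wrap a rect
    return hog_detector(image, upsample)
```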
* Enabled masked converter for GAN models
* Histogram matching, cli option for perceptual loss
* Fix init() positional args error
* Add backwards compatibility for aligned filenames
* Fix masked converter
* Remove GAN converters
* PyTorch and face-alignment
* Skip processed frames when extracting faces.
* Reset to master version
* Reset to master
* Added --skip-existing argument to the Extract script. Default is to NOT skip already processed frames.
Added logic to write_alignments to append new alignments (and preserve existing ones)
to the existing alignments file when the --skip-existing option is used.
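A hypothetical sketch of that merge, assuming the alignments file is JSON keyed by frame name (the project's actual serializer may differ):

```python
# Illustrative sketch: preserve existing alignments and append the new ones.
import json
import os


def write_alignments(path, new_alignments, skip_existing=False):
    alignments = {}
    if skip_existing and os.path.exists(path):
        with open(path, "r") as handle:
            alignments = json.load(handle)   # keep already-processed frames
    alignments.update(new_alignments)        # add the frames from this run
    with open(path, "w") as handle:
        json.dump(alignments, handle)
```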
* Fixed exception for --skip-existing when using the convert script
* Sync with upstream
* Fixed error when using Convert script.
* Bug fix
* Merges alignments only if --skip-existing is used.
* Creates output dir when not found, even when using --skip-existing.
* Preparing GAN plugin
* Adding multithreading for extract
* Adding support for mmod human face detector
* Adding face filter argument
* Added process number argument to multiprocessing extractor.
Fixed progressbar for multiprocessing.
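One way to pair a worker-count option with a working progress bar; `tqdm` and the per-frame `extract_faces()` worker here are illustrative, not the extractor's actual code:

```python
# Illustrative sketch: fan frames out to a process pool and advance the
# progress bar as each result comes back.
from multiprocessing import Pool

from tqdm import tqdm


def run_extraction(frames, extract_faces, processes=4):
    # extract_faces must be a module-level function so it can be pickled.
    results = []
    with Pool(processes=processes) as pool:
        for result in tqdm(pool.imap(extract_faces, frames), total=len(frames)):
            results.append(result)
    return results
```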
* Added TIFF as an image type.
Compression artefacts hurt my feelings.
* Cleanup
* Making Models as plugins
* Do not reload model on each image #39 + Adding FaceFilter #53
* Adding @lukaville PR for #43 and #44 (possibly)
* Training done in a separate thread
* Better log for plugin load
* Adding a prefetch to train.py #49
(Note that we prefetch 2 batches of images, due to the queue behavior)
+ More compact logging with verbose info included
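The prefetch can be pictured as a one-slot queue: one batch waits in the queue while the worker thread builds the next, so two batches are effectively in flight. A sketch with illustrative names, not the trainer's actual code:

```python
# Illustrative sketch: prefetch training batches on a background thread.
import queue
import threading


def prefetch(batch_generator, maxsize=1):
    buffer = queue.Queue(maxsize=maxsize)

    def worker():
        for batch in batch_generator:
            buffer.put(batch)   # blocks once the single slot is full
        buffer.put(None)        # sentinel: no more batches

    threading.Thread(target=worker, daemon=True).start()
    while True:
        batch = buffer.get()
        if batch is None:
            break
        yield batch
```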
* Correction of the DirectoryProcessor signature
* Adding missing import
* Convert with parallel preprocessing of files
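A hedged sketch of the idea: decode frames in a thread pool while the main loop runs the converter; `load_frame()` and `convert()` stand in for the real functions:

```python
# Illustrative sketch: overlap image decoding with conversion.
from concurrent.futures import ThreadPoolExecutor

import cv2


def load_frame(path):
    return path, cv2.imread(path)      # decoding happens off the main thread


def convert_all(paths, convert, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as executor:
        for path, image in executor.map(load_frame, paths):
            convert(path, image)       # conversion stays on the main thread
```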
* Added coverage var for trainer
Added a var with a comment. Feel free to add it as an argument.
* Corrections
* Modifying preview and image normalization + correction
* Cleanup
* Improved the usability of the train action.
* Prints the current iteration number in verbose mode.
* The number of iterations before saving the data can be changed by a command-line option.
* Added an option to write the training result to a file even when in preview mode.
* Prints time elapsed for each iteration in verbose mode when training.
* Created a single script to call the other ones.
Usage is ./faceswap.py {train|extract|convert}
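A minimal sketch of such a dispatcher, with the subcommand names taken from the usage line above and the actual dispatch left as a stub:

```python
#!/usr/bin/env python3
# Illustrative sketch of a faceswap.py-style entry point.
import argparse


def main():
    parser = argparse.ArgumentParser(prog="faceswap.py")
    subparsers = parser.add_subparsers(dest="command")
    subparsers.add_parser("extract", help="Extract faces from images or video")
    subparsers.add_parser("train", help="Train a model on extracted faces")
    subparsers.add_parser("convert", help="Swap faces in the input frames")
    args = parser.parse_args()
    if not args.command:
        parser.print_help()             # no arguments: show the help text
        return
    print("Would dispatch to the '%s' script here" % args.command)


if __name__ == "__main__":   # guard so nothing runs when imported
    main()
```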
* Improved the help text for the commands.
* Added forgotten faceswap.py file.
* Changed gitignore to add the scripts.
* Updates gitignore.
* Added guarding not to execute code when imported.
* Removed useless script. Display help when no arguments are provided.
* Update README