* Sort images properly (to make the frame range parameter meaningful)
* Add backup mechanism for model weights.
* Replace 'print' calls with 'tqdm.write' calls.
* Add support for user-specified rotation angle in extract
* Added rotation-angle-list option to enumerate a list of angles to rotate through
* Adjust rotation matrix translation coords to avoid cropping
* Merged rotation-angle and rotation-angle-list options into rotate_images option
* Backwards compatibility
* Updated check whether to run image rotator
* Switched rotation convention to use positive angle = clockwise rotation, for backwards compatibility
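For reference, the no-crop rotation adjustment boils down to enlarging the output canvas and shifting the rotation matrix's translation column. This is only a minimal sketch using OpenCV; the helper name is illustrative and not the project's exact code:

```python
import cv2

def rotate_image_no_crop(image, angle):
    """Rotate `image` by `angle` degrees, expanding the canvas so nothing is cropped.
    Note: OpenCV treats positive angles as counter-clockwise, so callers using the
    positive-angle-equals-clockwise convention above would negate the angle."""
    height, width = image.shape[:2]
    center = (width / 2, height / 2)
    matrix = cv2.getRotationMatrix2D(center, angle, 1.0)

    # Bounding box that fits the rotated image
    cos, sin = abs(matrix[0, 0]), abs(matrix[0, 1])
    new_width = int(height * sin + width * cos)
    new_height = int(height * cos + width * sin)

    # Shift the translation component so the rotated image stays centred
    matrix[0, 2] += new_width / 2 - center[0]
    matrix[1, 2] += new_height / 2 - center[1]

    return cv2.warpAffine(image, matrix, (new_width, new_height))
```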
* Updated to support Output Sharpening arguments
Two new output sharpening methods have been added: one based on a box blur and one based on a Gaussian blur (a rough sketch of both follows below).
The box blur method is selected with the argument '-sh bsharpen'. It is not adaptive and can sharpen your images quite strongly; sometimes it yields great results and sometimes entirely the opposite.
The Gaussian blur method is selected with the argument '-sh gsharpen'. It adapts to your data set, so while the sharpening effect may not be as strong as bsharpen, it is bound to produce a more natural-looking sharpened image.
By default the parameter is set to 'none', which runs no sharpening on your output.
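A minimal sketch of what the two modes amount to, assuming an unsharp-mask style implementation with OpenCV (kernel sizes and weights here are placeholders, not faceswap's actual values):

```python
import cv2

def sharpen(image, method="none"):
    if method == "bsharpen":
        # Fixed box-blur unsharp mask: strong, non-adaptive sharpening
        blurred = cv2.blur(image, (3, 3))
        return cv2.addWeighted(image, 1.5, blurred, -0.5, 0)
    if method == "gsharpen":
        # Gaussian unsharp mask: softer and generally more natural looking
        blurred = cv2.GaussianBlur(image, (0, 0), 3)
        return cv2.addWeighted(image, 1.5, blurred, -0.5, 0)
    return image  # 'none': leave the output untouched
```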
* Output Sharpening added
Two ways of sharpening your output have been added
-sh bsharpen
-sh gsharpen
* Update convert.py
* Add Improved AutoEncoder model.
* Refactoring Model_IAE to match the new model folder structure
* Add Model_IAE in plugins
* Add Multi-GPU support
I added multi-GPU support to the new model layout. Original is currently untested (due to OOM on my 2x 4GB 970s). LowMem is not tested with this commit because the new plugin loader does not include it yet.
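In Keras 2 the usual way to get data-parallel training is `multi_gpu_model`; whether the plugin wires it up exactly like this is not shown here, so treat the snippet as an assumption-laden sketch:

```python
from keras.utils import multi_gpu_model

def maybe_parallelise(model, gpus=2):
    """Replicate `model` across `gpus` devices; each batch is split between them.
    The wrapped model still has to be compiled before training."""
    if gpus <= 1:
        return model  # fall back to the single-GPU model
    return multi_gpu_model(model, gpus=gpus)
```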
* Image rotator for extract and convert ready for testing
* Revert "Image rotator for extract and convert ready for testing"
This reverts commit bbeb19ef26.
Error in extract code
* add image rotation support to detect more faces
* Update convert.py
Amended to do a single check for rotation rather than checking twice. The performance gain is likely to be marginal to non-existent, but it can't hurt.
* Update convert.py
remove type
* cli.py: Only output message on verbose. Convert.py: Only check for rotation amount once
* Changed command line flag to take arguments to ease future development
* Realigning for upstream/Master
* Minor fix
* Add negative filters for face detection
When detecting faces that are very similar, the face recognition can
produce positive results for similar-looking people. This commit allows
the user to add multiple positive and negative reference images. The
face detection then calculates the distance to each reference image
and guesses which person is more likely using a k-nearest-neighbours approach.
* Do not calculate knn if no negative images are given
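Combining the two entries above: the filter compares embedding distances to the reference images, and only runs the kNN vote when negatives exist. The function, threshold, and variable names below are illustrative, not faceswap's exact implementation:

```python
import numpy as np

def matches_filter(face_encoding, positives, negatives, k=3, threshold=0.6):
    """`positives`/`negatives` are embedding vectors (numpy arrays) of the
    reference images from whichever face-recognition backend is in use."""
    if not negatives:
        # No negative references: skip kNN, use a plain distance threshold
        distances = np.linalg.norm(np.stack(positives) - face_encoding, axis=1)
        return distances.min() <= threshold

    # Label every reference, then vote among the k nearest to the candidate face
    references = [(e, True) for e in positives] + [(e, False) for e in negatives]
    nearest = sorted((np.linalg.norm(e - face_encoding), wanted)
                     for e, wanted in references)[:k]
    return sum(wanted for _, wanted in nearest) > k / 2
```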
* Clean up outputting
* Clearer requirements for each platform
* Refactoring of old plugins (Model_Original + Extract_Align) + Cleanups
* Adding GAN128
* Update GAN to v2
* Create instance_normalization.py
* Fix decoder output
* Revert "Fix decoder output"
This reverts commit 3a8ecb8957.
* Fix convert
* Enable all options except perceptual_loss by default
* Disable instance norm
* Update Model.py
* Update Trainer.py
* Match GAN128 to shaoanlu's latest v2
* Add first_order to GAN128
* Disable `use_perceptual_loss`
* Fix call to `self.first_order`
* Switch to average loss in output
* Constrain average to last 100 iterations
* Fix math, constrain average to intervals of 100
* Fix math averaging again
* Remove the math and simplify this damn averaging
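The chain of averaging fixes above essentially converges on a rolling mean over the last 100 loss values; a deque makes that trivial (a sketch, not the actual Trainer code):

```python
from collections import deque

class RollingLoss:
    def __init__(self, window=100):
        self.history = deque(maxlen=window)  # only keeps the last `window` values

    def update(self, loss_value):
        self.history.append(loss_value)
        return sum(self.history) / len(self.history)
```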
* Add gan128 conversion
* Update convert.py
* Use non-warped images in masked preview
* Add K.set_learning_phase(1) to gan64
* Add K.set_learning_phase(1) to gan128
* Add missing keras import
* Use non-warped images in masked preview for gan128
* Exclude deleted faces from conversion
* --input-aligned-dir defaults to "{input_dir}/aligned"
* Simplify map operation
* Port 'face_alignment' from PyTorch to Keras. It runs about 2x faster, but initialization takes ~20 seconds.
2DFAN-4.h5 and mmod_human_face_detector.dat are included in lib\FaceLandmarksExtractor.
Fixed the dlib vs TensorFlow conflict: dlib must run an op first, and only then is the Keras model loaded, otherwise a CUDA OOM error occurs.
If the CNN does not find a face location, it falls back to HOG.
Removed this:
- if face.landmarks == None:
- print("Warning! landmarks not found. Switching to crop!")
- return cv2.resize(face.image, (size, size))
because DetectedFace always has landmarks.
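The detection order described above (dlib loaded first to dodge the CUDA conflict, CNN with a HOG fallback) could be wired up roughly like this; the exact integration in lib\FaceLandmarksExtractor may differ:

```python
import dlib

# Load dlib's detectors before any Keras/TensorFlow model to avoid the CUDA OOM conflict
cnn_detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")
hog_detector = dlib.get_frontal_face_detector()

def detect_faces(image):
    rects = [d.rect for d in cnn_detector(image, 1)]  # MMOD CNN detector first
    if not rects:
        rects = list(hog_detector(image, 1))          # fall back to HOG
    return rects
```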
* Enabled masked converter for GAN models
* Histogram matching, cli option for perceptual loss
* Fix init() positional args error
* Add backwards compatibility for aligned filenames
* Fix masked converter
* Remove GAN converters
* Allow a negative erosion kernel for the -e argument. If the value is negative, it becomes a dilation kernel, which lets facehullandrect cover more space. This can help cover double eyebrows, and could also be useful with the masked converter for GAN that oatsss is working on.
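A hedged sketch of how the sign of -e can flip between erosion and dilation (the helper name and kernel shape are illustrative):

```python
import cv2
import numpy as np

def erode_or_dilate(mask, kernel_size):
    if kernel_size == 0:
        return mask
    kernel = np.ones((abs(kernel_size), abs(kernel_size)), np.uint8)
    if kernel_size > 0:
        return cv2.erode(mask, kernel, iterations=1)   # shrink the mask
    return cv2.dilate(mask, kernel, iterations=1)      # negative value: grow the mask
```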
* Update convert.py
Modified argument help to clarify the effects of erosion and dilation as parameters
* Preparing GAN plugin
* Adding multithreading for extract
* Adding support for mmod human face detector
* Adding face filter argument
* Added process number argument to multiprocessing extractor.
Fixed the progress bar for multiprocessing.
* Added tiff as an image type.
Compression artefacts hurt my feelings.
* Cleanup
* Extended arguments for convert, re https://github.com/deepfakes/faceswap/issues/85
* Forgot to change the help text for the extended arguments.
* Added the -fr / --frame-range argument to convert
It accepts a list of frame ranges like `-fr 40-50 90-100` and
still writes out frames that haven't been converted.
* Added the --discard-frames argument
--discard-frames discards frames not included in --frame-range instead
of writing them out unchanged (see the sketch below).
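Parsing and applying the ranges can look roughly like this (helper names are illustrative, not the actual convert code):

```python
def parse_frame_ranges(ranges):
    """Turn ['40-50', '90-100'] (as passed to -fr) into (start, end) tuples."""
    return [tuple(int(part) for part in frame_range.split("-")) for frame_range in ranges]

def in_frame_range(frame_number, ranges):
    return any(start <= frame_number <= end for start, end in ranges)

# Without --discard-frames, out-of-range frames are written out unchanged;
# with it, they are skipped entirely.
```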
* Made training message slightly clearer
* Revert "Made training message slightly clearer"
This reverts commit 25a9744aea.
* Training status now uses '\r' rather than newlines.
Maybe it's good, maybe it's bad.
I like it.
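For context, this amounts to printing with a carriage return instead of a newline so the status line is overwritten in place, e.g.:

```python
import sys

def print_status(iteration, loss_a, loss_b):
    # '\r' returns to the start of the line so the next status overwrites this one
    sys.stdout.write(f"\r[{iteration:06d}] loss_A: {loss_a:.5f} loss_B: {loss_b:.5f}")
    sys.stdout.flush()
```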
* fixing https://github.com/deepfakes/faceswap/pull/90#issuecomment-362309166
* Fixing issues with frame-range if no frame-range is specified.
* fixes for check_skip
* Making Models as plugins
* Do not reload model on each image #39 + Adding FaceFilter #53
* Adding @lukaville PR for #43 and #44 (possibly)
* Training done in a separate thread
* Better log for plugin load
* Adding a prefetch to train.py #49
(Note that we prefetch 2 batches of images, due to the queue behavior)
+ More compact logging with verbose info included
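The prefetch can be a small producer thread feeding a bounded queue; a sketch under the assumption that batches come from a generator (not the exact train.py code):

```python
import queue
import threading

def prefetch(batch_generator, depth=2):
    """Yield batches from `batch_generator`, keeping up to `depth` of them buffered
    in a background thread (hence the note about two prefetched batches above)."""
    buffer = queue.Queue(maxsize=depth)
    sentinel = object()

    def producer():
        for batch in batch_generator:
            buffer.put(batch)
        buffer.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        batch = buffer.get()
        if batch is sentinel:
            return
        yield batch
```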
* correction of DirectoryProcessor signature
* adding missing import
* Convert with parallel preprocessing of files
* Added a coverage var for the trainer
Added the var with a comment. Feel free to add it as an argument.
* corrections
* Modifying preview and normalization of image + correction
* Cleanup
* Created a single script to call the other ones.
Usage is ./faceswap.py {train|extract|convert}
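The dispatcher boils down to argparse sub-commands; the real faceswap.py wires the sub-parsers differently, so this is only an illustrative skeleton:

```python
#!/usr/bin/env python3
import argparse

def main():
    parser = argparse.ArgumentParser(prog="faceswap.py")
    subparsers = parser.add_subparsers(dest="command", required=True)
    for command in ("train", "extract", "convert"):
        subparsers.add_parser(command, help=f"Run the {command} step")
    arguments = parser.parse_args()
    print(f"Would dispatch to the '{arguments.command}' script here")

if __name__ == "__main__":  # guard so nothing runs when the module is imported
    main()
```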
* Improved the help for the commands.
* Added forgotten faceswap.py file.
* Changed gitignore to add the scripts.
* Updated gitignore.
* Added guarding not to execute code when imported.
* Removed useless script. Display help when no arguments are provided.
* Update README