* Add ability to extract from and convert from a video file
* Update cli helptext. Add filebrowser button for GUI input
* Add video support to Alignments Tool
* Convert main scripts to use face hashes
* Alignment tool: Use hashes, add logging, add face rename function
* More logging. Update Manual tool to work with hashing
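Face hashing like this is typically a digest of the saved face image's bytes; a minimal sketch, assuming SHA-1 (the helper names and the hashing scheme are illustrative, not the project's actual API):

```python
import hashlib

def face_hash(image_bytes):
    # Stable fingerprint of an extracted face's encoded bytes; the
    # helper name is hypothetical, not the project's real function.
    return hashlib.sha1(image_bytes).hexdigest()

def build_hash_index(faces):
    # Map hash -> output filename so renamed or moved face files can
    # still be matched back to their alignments entry.
    return {face_hash(data): name for name, data in faces}
```

Because the hash depends only on the image contents, a renamed face file still resolves to the same alignments entry.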
* DLib scaling amends (#461)
* Scaling area fix and rework of frames and bounding boxes
* Update DLib scaling and initialization
* Extract - Detector reinitialization bugfix
* Slight OriginalHighRes Model re-factoring (#464)
* Model re-factoring, might fix red screen of death on multi-gpu configurations
* Limited trainer support for various INPUT_SIZE
* Auto Extraction multiprocess bugfix
* Sort tool bugfixes and linting
* Add preview functionality to effmpeg. (#435)
* Add preview functionality to effmpeg.
effmpeg tool:
Preview is now available for actions that have a video output.
Preview does not work when muxing audio.
* Model json unicode fix1 (#443)
* fixed Windows 10 path error while loading weights
* - fixed TypeError: the JSON object must be str, not 'bytes' with OriginalHighRes Model
* MTCNN Extractor and Extraction refactor (#453)
* implement mtcnn extractor
* mtcnn refactor and vram management changes
* cli arguments update for mtcnn/dlib split
* Add mtcnn models to gitignore
* Change multiprocessing on extract
* GUI changes to handle nargs defaults
* Early exit bugfix (#455)
* Fix extract early termination bug
* Fix extract early exit bug
* Multi face detection bugfix (#456)
* Multi face extraction fix
* Original high res cleanup 1 (#457)
* slight model re-factoring
- removed excess threading code
- added random kernel initialization to dense layer
* Slight OriginalHighRes re-factoring and code cleanup
* Pre push commit.
Add filetypes support to gui through new classes in lib/cli.py
Add various new functions to tools/effmpeg.py
* Finish developing basic effmpeg functionality.
Ready for public alpha test.
* Add ffmpy to requirements.
Fix gen-vid to allow specifying a new file in GUI.
Fix extract throwing an error when supplied with a valid directory.
Add two new GUI pop-up interactions: save (allows you to create new
files/directories) and nothing (disables the prompt button when it's not
needed).
Improve logic and argument processing in effmpeg.
* Fix post merge bugs.
Reformat tools.py to match the new style of faceswap.py
Fix some whitespace issues.
* Fix matplotlib.use() being called after pyplot was imported.
* Fix various effmpeg bugs and add the ability to terminate nested
subprocesses to the GUI.
effmpeg changes:
Fix get-fps not printing to terminal.
Fix mux-audio not working.
Add verbosity option. If verbose is not specified, ffmpeg output is
reduced with the -hide_banner flag.
scripts/gui.py changes:
Add ability to terminate nested subprocesses, i.e. the following type of
process tree should now be terminated safely:
gui -> command -> command-subprocess
-> command-subprocess -> command-sub-subprocess
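The nested-termination behaviour above can be sketched with process groups, a minimal stdlib-only approach (no psutil) assuming a POSIX platform; the real scripts/gui.py may differ:

```python
import os
import signal
import subprocess

def launch(cmd):
    # start_new_session=True puts the child in its own process group,
    # so any sub-subprocesses it spawns share that group id.
    return subprocess.Popen(cmd, start_new_session=True)

def terminate_tree(proc):
    # One signal to the group reaches every level of the tree shown
    # above, with no manual tree walking.
    os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
```

Signalling the group is why gui -> command -> command-subprocess chains die together even when the middle process never forwards signals.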
* Add functionality to tools/effmpeg.py, fix some docstring and print statement issues in some files.
tools/effmpeg.py:
Transpose choices now display detailed name in GUI, while in cli they can
still be entered as a number or the full command name.
Add quiet option to effmpeg that only shows critical ffmpeg errors.
Improve user input handling.
lib/cli.py; scripts/convert.py; scripts/extract.py; scripts/train.py:
Fix some line length issues and typos in docstrings, help text and print statements.
Fix some whitespace issues.
lib/cli.py:
Add filetypes to '--alignments' argument.
Change argument action to DirFullPaths where appropriate.
* Bug fixes and improvements to tools/effmpeg.py
Fix bug where duration would not be used even when end time was not set.
Add option to specify output filetype for extraction.
Enhance gen-vid to generate a video from images that were zero-padded to any arbitrary width, not just 5.
Enhance gen-vid to use any of the image formats that a video can be extracted into.
Improve gen-vid output video quality.
Minor code quality improvements and ffmpeg argument formatting improvements.
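Supporting arbitrary zero padding in gen-vid amounts to deriving the ffmpeg image-sequence pattern from the filenames instead of hard-coding %05d; an illustrative sketch (function name hypothetical):

```python
import re

def image_pattern(filenames):
    # Derive an ffmpeg input pattern such as 'frame_%05d.png' from the
    # first numbered image; the padding width is read from the name
    # rather than assumed to be 5.
    match = re.match(r"(.*?)(\d+)\.(\w+)$", sorted(filenames)[0])
    if not match:
        raise ValueError("no numbered images found")
    prefix, digits, ext = match.groups()
    return "{}%0{}d.{}".format(prefix, len(digits), ext)
```

The resulting pattern can be fed straight to ffmpeg's image2 demuxer as the input filename.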
* Remove dependency on psutil in scripts/gui.py and various small improvements.
lib/utils.py:
Add _image_extensions and _video_extensions as global variables to make them easily portable across all of faceswap.
Fix lack of new lines between function and class declarations to conform to PEP 8.
Fix some typos and line length issues in docstrings and comments.
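Shared extension lists like those might look as follows; the exact contents here are an assumption, not the real lib/utils.py values:

```python
import os

# Hypothetical contents; the real lists in lib/utils.py may differ.
_image_extensions = [".bmp", ".jpeg", ".jpg", ".png", ".tif", ".tiff"]
_video_extensions = [".avi", ".flv", ".mkv", ".mov", ".mp4", ".webm"]

def is_video(filename):
    # Case-insensitive check against the shared extension list.
    return os.path.splitext(filename)[1].lower() in _video_extensions
```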
scripts/convert.py:
Make tqdm print to stdout.
scripts/extract.py:
Make tqdm print to stdout.
Apply workaround for occasional TqdmSynchronisationWarning being thrown.
Fix some typos and line length issues in docstrings and comments.
scripts/fsmedia.py:
Completed the TODO in Faces.load_extractor(): pass extractor_name as an argument.
Fix lack of new lines between function and class declarations to conform to PEP 8.
Fix some typos and line length issues in docstrings and comments.
Change 2 print statements to use format() for string formatting instead of the old '%' operator.
scripts/gui.py:
Refactor subprocess generation and termination to remove dependency on psutil.
Fix some typos and line length issues in comments.
tools/effmpeg.py:
Refactor DataItem class to use new lib/utils.py global media file extensions.
Improve ffmpeg subprocess termination handling.
* Refactor for PEP 8 and split process function
* Remove backwards compatibility for skip frames
* Split optional functions into own class. Make functions more modular
* Conform scripts folder to PEP 8
* train.py - Fix write image bug. Make more modular
* extract.py - Make more modular, Put optional actions into own class
* cli.py - start PEP 8
* cli.py - PEP 8. Refactor and make modular. Bugfixes
* 1st round refactor. Completely untested and probably broken.
* convert.py: Extract alignments from frames if they don't exist
* BugFix: SkipExisting broken since face name refactor
* Extract.py tested
* Minor formatting
* convert.py + train.py amended not tested
* train.py - Semi-fix for hang on reaching target iteration. Now quits on preview mode
Make tensorflow / system warning less verbose
* 2nd pass refactor. Semi tested
bugfixes
* Remove obsolete code. imread/write to Utils
* rename inout.py to fsmedia.py
* Final bugfixes
* Add support for user-specified rotation angle in extract
* Added rotation-angle-list option to enumerate a list of angles to rotate through
* Adjust rotation matrix translation coords to avoid cropping
* Merged rotation-angle and rotation-angle-list options into rotate_images option
* Backwards compatibility
* Updated check whether to run image rotator
* Switched rotation convention to use positive angle = clockwise rotation, for backwards compatibility
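The translation adjustment above can be sketched in pure Python; it mirrors what cv2.getRotationMatrix2D plus a canvas-size fix computes. Sign conventions for clockwise rotation vary, so treat the matrix signs as illustrative:

```python
import math

def rotation_matrix(width, height, angle_deg):
    # 2x3 affine matrix rotating an image about its centre, shifted so
    # that no corner falls outside the enlarged output canvas.
    angle = math.radians(angle_deg)
    cos, sin = abs(math.cos(angle)), abs(math.sin(angle))
    # Canvas big enough to hold the rotated image.
    new_w = round(height * sin + width * cos)
    new_h = round(height * cos + width * sin)
    matrix = [[math.cos(angle), math.sin(angle), 0.0],
              [-math.sin(angle), math.cos(angle), 0.0]]
    # Move the old image centre onto the centre of the new canvas.
    matrix[0][2] = new_w / 2 - (width / 2 * matrix[0][0] + height / 2 * matrix[0][1])
    matrix[1][2] = new_h / 2 - (width / 2 * matrix[1][0] + height / 2 * matrix[1][1])
    return matrix, (new_w, new_h)
```

Without the translation terms, rotating by anything other than a multiple of 180 degrees crops the corners, which is the bug the commit above fixes.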
* Improve extraction performance. Two main changes to the most recent modifications to extract:
1st: FaceLandmarkExtractor would try CNN first, then fall back to HOG. The problem was that this cut speed by a factor of 4 on images where the CNN found nothing, and most of the time HOG would not find anything either, or would produce a bad extract. For me it wasn't worth it. You can now specify with -D whether to use hog, cnn, or all: 'all' tries CNN, then HOG, as FaceLandmarkExtractor did; cnn or hog uses just one detection method.
2nd: An overhaul of the verbose parameter. Warnings when a face is not detected are now only shown if -v or --verbose is given. This restores the verbose function to what it once was.
With these changes I was able to process 1,000 images every 4 minutes, regardless of whether faces were detected. The performance improvement only applies to images with no detected face, but my sets normally contain lots of images without clear faces, so I figured it would benefit others too. The introduction of 'all' also makes it easier to try other detector combinations in the future.
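The -D selection logic might look like the following minimal sketch; the detector callables are stand-ins for the real dlib-backed CNN and HOG detectors:

```python
def detect_faces(image, method, cnn_detect, hog_detect):
    # 'cnn' or 'hog' run a single backend; 'all' keeps the old
    # FaceLandmarkExtractor behaviour: CNN first, HOG as fallback.
    if method == "cnn":
        return cnn_detect(image)
    if method == "hog":
        return hog_detect(image)
    faces = cnn_detect(image)
    return faces if faces else hog_detect(image)
```

The speed win comes from the single-backend modes: a frame where the CNN finds nothing no longer pays for a second HOG pass unless 'all' was requested.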
* Update faces_detect.py
* Update extract.py
* Update FaceLandmarksExtractor.py
* spacing fix
* Image rotator for extract and convert ready for testing
* Revert "Image rotator for extract and convert ready for testing"
This reverts commit bbeb19ef26.
Error in extract code
* add image rotation support to detect more faces
* Update convert.py
Amended to do a single check for rotation rather than checking twice. Performance gain is likely to be marginal to non-existent, but can't hurt.
* Update convert.py
remove type
* cli.py: Only output message on verbose. Convert.py: Only check for rotation amount once
* Changed command line flag to take arguments to ease future development
* Realigning for upstream/Master
* Minor fix
* Add negative filters for face detection
When detecting faces that are very similar, face recognition can
produce positive results for similar-looking people. This commit allows
the user to add multiple positive and negative reference images. The
face detection then calculates the distance to each reference image
and guesses which person is more likely using the k-nearest method.
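A minimal sketch of that k-nearest decision, assuming the reference images have already been reduced to numeric encodings; the encodings, threshold, and helper name are illustrative, not the project's actual API:

```python
def is_wanted_face(candidate, positives, negatives, k=3, distance=None):
    # Vote among the k reference encodings nearest to the candidate.
    if distance is None:
        distance = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    if not negatives:
        # No negative references: fall back to a plain threshold check,
        # matching the "do not calculate knn" commit below.
        return min(distance(candidate, ref) for ref in positives) < 0.6
    scored = sorted(
        [(distance(candidate, ref), "pos") for ref in positives] +
        [(distance(candidate, ref), "neg") for ref in negatives])
    labels = [label for _, label in scored[:k]]
    return labels.count("pos") > labels.count("neg")
```

Adding negative references of the look-alike person shifts the vote, which is what filters out false positives a bare distance threshold would accept.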
* Do not calculate knn if no negative images are given
* Clean up outputting
* Clearer requirements for each platform
* Refactoring of old plugins (Model_Original + Extract_Align) + Cleanups
* Adding GAN128
* Update GAN to v2
* Create instance_normalization.py
* Fix decoder output
* Revert "Fix decoder output"
This reverts commit 3a8ecb8957.
* Fix convert
* Enable all options except perceptual_loss by default
* Disable instance norm
* Update Model.py
* Update Trainer.py
* Match GAN128 to shaoanlu's latest v2
* Add first_order to GAN128
* Disable `use_perceptual_loss`
* Fix call to `self.first_order`
* Switch to average loss in output
* Constrain average to last 100 iterations
* Fix math, constrain average to intervals of 100
* Fix math averaging again
* Remove math and simplify this damn averaging
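The constrained average the commits above converged on is simplest with a fixed-length window; a sketch (class name hypothetical):

```python
from collections import deque

class LossAverager:
    # Running average of the most recent losses; deque(maxlen=...)
    # drops the oldest value automatically, avoiding interval math.
    def __init__(self, window=100):
        self.history = deque(maxlen=window)

    def update(self, loss):
        self.history.append(loss)
        return sum(self.history) / len(self.history)
```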
* Add gan128 conversion
* Update convert.py
* Use non-warped images in masked preview
* Add K.set_learning_phase(1) to gan64
* Add K.set_learning_phase(1) to gan128
* Add missing keras import
* Use non-warped images in masked preview for gan128
* Exclude deleted faces from conversion
* --input-aligned-dir defaults to "{input_dir}/aligned"
* Simplify map operation
* port 'face_alignment' from PyTorch to Keras. It runs 2x faster, but initialization takes 20 secs.
2DFAN-4.h5 and mmod_human_face_detector.dat included in lib\FaceLandmarksExtractor
fixed dlib vs tensorflow conflict: dlib must run an op first, then the keras model is loaded, otherwise a CUDA OOM error occurs
if the face location is not found by CNN, it tries to find it with HOG.
removed this:
- if face.landmarks == None:
- print("Warning! landmarks not found. Switching to crop!")
- return cv2.resize(face.image, (size, size))
because DetectedFace always has landmarks
* Enabled masked converter for GAN models
* Histogram matching, cli option for perceptual loss
* Fix init() positional args error
* Add backwards compatibility for aligned filenames
* Fix masked converter
* Remove GAN converters
* Pytorch and face-alignment
* Skip processed frames when extracting faces.
* Reset to master version
* Reset to master
* Added --skip-existing argument to Extract script. Default is to NOT skip already processed frames.
Added logic to write_alignments to append new alignments (and preserve existing ones)
to existing alignments file when the skip-existing option is used.
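The append-and-preserve merge described above can be sketched as a dictionary merge in which frames already present keep their existing faces; function and structure names are illustrative:

```python
def merge_alignments(existing, new):
    # With --skip-existing, frames already in the alignments file were
    # not re-extracted, so their entries must survive; newly processed
    # frames are appended alongside them.
    merged = dict(existing)
    for frame, faces in new.items():
        merged.setdefault(frame, faces)
    return merged
```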
* Fixed exception for --skip-existing when using the convert script
* Sync with upstream
* Fixed error when using Convert script.
* Bug fix
* Merges alignments only if --skip-existing is used.
* Creates output dir when not found, even when using --skip-existing.
Refactored extract so handleImage takes an image instead of a filename. This will be useful for writing future extract extensions where you may not have a file on disk but do have an image in memory, for example extracting faces from a video.
* Preparing GAN plugin
* Adding multithreading for extract
* Adding support for mmod human face detector
* Adding face filter argument
* Added process number argument to multiprocessing extractor.
Fixed progressbar for multiprocessing.
* Added tiff as an image type.
Compression artefacts hurt my feelings.
* Cleanup
* Making Models as plugins
* Do not reload model on each image #39 + Adding FaceFilter #53
* Adding @lukaville PR for #43 and #44 (possibly)
* Training done in a separate thread
* Better log for plugin load
* Adding a prefetch to train.py #49
(Note that we prefetch 2 batches of images, due to the queue behavior)
+ More compact logging with verbose info included
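The two-batch prefetch noted above can be sketched with a bounded queue and a daemon thread (helper name hypothetical):

```python
import threading
from queue import Queue

def prefetch_batches(generate_batch, maxsize=2):
    # Load batches on a background thread so training never waits on
    # I/O; maxsize=2 matches the note above, since the queue's
    # behaviour means at most two batches are held ahead of the trainer.
    queue = Queue(maxsize=maxsize)

    def worker():
        while True:
            queue.put(generate_batch())

    threading.Thread(target=worker, daemon=True).start()
    return queue
```

The trainer then calls queue.get() each iteration instead of generating the batch inline.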
* correction of DirectoryProcessor signature
* adding missing import
* Convert with parallel preprocessing of files
* Added coverage var for trainer
Added a var with a comment. Feel free to add it as an argument
* corrections
* Modifying preview and normalization of image + correction
* Cleanup
* Created a single script to call the other ones.
Usage is ./faceswap.py {train|extract|convert}
* Improved the help text for the commands.
* Added forgotten faceswap.py file.
* Changed gitignore to add the scripts.
* Updates gitignore.
* Added guarding not to execute code when imported.
* Removed useless script. Display help when no arguments are provided.
* Update README