* Add preview functionality to effmpeg. (#435)
* Add preview functionality to effmpeg.
effmpeg tool:
A preview is now available for actions that have a video output.
The preview does not work when muxing audio.
* Model json unicode fix1 (#443)
* fixed Windows 10 path error while loading weights
* - fixed TypeError: the JSON object must be str, not 'bytes' with OriginalHighRes Model
* MTCNN Extractor and Extraction refactor (#453)
* implement mtcnn extractor
* mtcnn refactor and vram management changes
* cli arguments update for mtcnn/dlib split
* Add mtcnn models to gitignore
* Change multiprocessing on extract
* GUI changes to handle nargs defaults
* Early exit bugfix (#455)
* Fix extract early termination bug
* Fix extract early exit bug
* Multi face detection bugfix (#456)
* Multi face extraction fix
* Original high res cleanup 1 (#457)
* slight model re-factoring
- removed excess threading code
- added random kernel initialization to dense layer
* Slight OriginalHighRes re-factoring and code cleanup
* Improving performance of extraction. Two main changes to the most recent extract modifications:
1st: FaceLandmarkExtractor would try cnn first, then fall back to hog. The problem was that this made extraction roughly 4x slower for images where cnn found nothing, and most of the time hog wouldn't find anything either, or would produce a bad extract. For me it wasn't worth it. Now you can specify with -D whether to use hog, cnn, or all. 'all' tries cnn, then hog, as FaceLandmarkExtractor was doing; cnn or hog uses just that one detection method.
2nd: a rehaul of the verbose parameter. Warnings when a face is not detected are now shown only if -v or --verbose is given. This restores the verbose function to what it once was.
With these changes I was able to process 1,000 images every 4 minutes regardless of whether faces were detected. The performance improvement only applies to images with no detected faces, but I normally have lots of images without clear faces in my sets, so I figured it would help others too. The introduction of 'all' also makes it easier to try combining other models in the future.
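The detector-selection behaviour described above (one method for 'cnn' or 'hog', cnn-then-hog fallback for 'all', warnings only on verbose) can be sketched as follows. This is a minimal illustration, not the project's code: `detect_cnn` and `detect_hog` are hypothetical stand-ins for the real dlib detector calls.

```python
def detect_cnn(image):
    # Stand-in for dlib's CNN (mmod) detector: slower, more accurate.
    return image.get("cnn_faces", [])

def detect_hog(image):
    # Stand-in for dlib's HOG frontal-face detector: faster, less accurate.
    return image.get("hog_faces", [])

def detect_faces(image, detector="all", verbose=False):
    """Run the detector(s) selected by -D: 'cnn', 'hog', or 'all'."""
    faces = []
    if detector in ("cnn", "all"):
        faces = detect_cnn(image)
    if not faces and detector in ("hog", "all"):
        # 'all' falls back to hog only when cnn found nothing.
        faces = detect_hog(image)
    if not faces and verbose:
        # Warning shown only with -v/--verbose, per the change above.
        print("Warning: no faces detected in image")
    return faces
```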
* Update faces_detect.py
* Update extract.py
* Update FaceLandmarksExtractor.py
* spacing fix
* Image rotator for extract and convert ready for testing
* Revert "Image rotator for extract and convert ready for testing"
This reverts commit bbeb19ef26.
Error in extract code
* add image rotation support to detect more faces
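The rotation idea above — re-run detection on rotated copies of the frame until a face turns up — can be sketched like this. It is a dependency-light illustration using 90-degree steps via `np.rot90`; `detect` is any hypothetical detector callable (the real code rotates by arbitrary angles).

```python
import numpy as np

def detect_with_rotation(image, detect, rotations=(0, 90, 180, 270)):
    """Try face detection at each rotation until faces are found.

    Returns (faces, angle_used). Sketch only: 90-degree increments
    keep it simple; `detect` maps an image array to a face list.
    """
    for angle in rotations:
        rotated = np.rot90(image, k=angle // 90) if angle else image
        faces = detect(rotated)
        if faces:
            return faces, angle
    return [], 0
```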
* Update convert.py
Amended to do a single check for rotation rather than checking twice. Performance gain is likely to be marginal to non-existent, but can't hurt.
* Update convert.py
remove type
* cli.py: Only output message on verbose. Convert.py: Only check for rotation amount once
* Changed command line flag to take arguments to ease future development
* Realigning for upstream/Master
* Minor fix
* Change default rotation value from None to 0
* Port 'face_alignment' from PyTorch to Keras. It runs about 2x faster, but initialization takes ~20 seconds.
2DFAN-4.h5 and mmod_human_face_detector.dat included in lib\FaceLandmarksExtractor
Fixed the dlib vs tensorflow conflict: dlib must do an op first, then the keras model can be loaded, otherwise a CUDA OOM error occurs.
If the face location is not found by the CNN, it tries to find it with HOG.
removed this:
- if face.landmarks == None:
- print("Warning! landmarks not found. Switching to crop!")
- return cv2.resize(face.image, (size, size))
because DetectedFace always has landmarks
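The initialization-order workaround mentioned above can be sketched as below. This is a hypothetical structure, not the project's code: the two callables stand in for the real dlib detector call and the Keras load of 2DFAN-4.h5, so the ordering is shown without the real libraries.

```python
def init_extractor(run_dlib_op, load_keras_model):
    """Init-order workaround sketch (hypothetical names).

    dlib must perform an operation first so it claims its GPU memory;
    only then is the Keras model loaded. Loading Keras first caused a
    CUDA OOM inside dlib, per the note above.
    """
    run_dlib_op()              # 1. dlib op first: dlib reserves its VRAM
    return load_keras_model()  # 2. then the Keras model loads safely
```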
* removing DetectedFace.landmarks
* Preparing GAN plugin
* Adding multithreading for extract
* Adding support for mmod human face detector
* Adding face filter argument
* Added process number argument to multiprocessing extractor.
Fixed progressbar for multiprocessing.
* Added tiff as image type.
Compression artefacts hurt my feelings.
* Cleanup
* Making Models as plugins
* Do not reload model on each image #39 + Adding FaceFilter #53
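The #39 fix above — load the model once and reuse it for every image — is essentially a cache. A minimal sketch, assuming a hypothetical zero-arg `loader` callable that builds the model:

```python
_model_cache = {}

def get_model(name, loader):
    """Return a cached model, loading it only on the first request.

    Hypothetical sketch of the "do not reload model on each image"
    fix: the first call runs `loader`; later calls hit the cache.
    """
    if name not in _model_cache:
        _model_cache[name] = loader()
    return _model_cache[name]
```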
* Adding @lukaville PR for #43 and #44 (possibly)
* Training done in a separate thread
* Better log for plugin load
* Adding a prefetch to train.py #49
(Note that we prefetch 2 batches of images, due to the queue behavior)
+ More compact logging with verbose info included
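The prefetch described above can be sketched with a bounded queue fed by a background thread: with a queue depth of 2, up to two batches sit ready while training consumes them. `make_batch` is a hypothetical zero-arg callable producing one training batch; this is an illustration of the pattern, not the project's train.py.

```python
import queue
import threading

def batch_prefetcher(make_batch, depth=2):
    """Produce batches on a background thread, buffered in a queue.

    The worker blocks on q.put() once `depth` batches are waiting,
    which is the queue behaviour noted above.
    """
    q = queue.Queue(maxsize=depth)

    def worker():
        while True:
            q.put(make_batch())  # blocks while the queue is full

    threading.Thread(target=worker, daemon=True).start()
    return q
```

The training loop then just calls `q.get()` for each step, overlapping batch preparation with the forward/backward pass.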
* correction of DirectoryProcessor signature
* adding missing import
* Convert with parallel preprocessing of files
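Parallel preprocessing during convert can be sketched with a thread pool: files are preprocessed concurrently while the main thread consumes the results in order. `preprocess` and `convert` are hypothetical callables standing in for the real per-file steps.

```python
from concurrent.futures import ThreadPoolExecutor

def convert_all(paths, preprocess, convert, workers=4):
    """Preprocess files in a pool, converting results in input order.

    Sketch only: pool.map yields results in the same order as
    `paths`, so output ordering is preserved while I/O-bound
    preprocessing overlaps with conversion.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for result in pool.map(preprocess, paths):
            convert(result)
```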
* Added coverage var for trainer
Added a var with comment. Feel free to add it as argument
* corrections
* Modifying preview and normalization of image + correction
* Cleanup
Added face_recognition from
https://github.com/ageitgey/face_recognition/
It is based on dlib, but does not have the glitches I had previously!
Also reworked Dockerfile (sticking to Python2 for now)