mirror of https://github.com/deepfakes/faceswap synced 2025-06-08 11:53:26 -04:00
Commit graph

25 commits

Author SHA1 Message Date
torzdf
ab7fa48b8b Remove dlib.rectangles. Replace with BoundingBox class 2019-05-07 10:01:00 +00:00
torzdf
27a685383e
Convert refactor (#703)
* Separate predict and implement pool

* Add check and raise error to multithreading

Box functions to config. Add crop box option.

* All masks to mask module. Refactor convert masks

Predicted mask passed from model. Cli update

* Intersect box with mask and fixes

* Use raw NN output for convert

Use raw mask for face adjustments. Split adjustments to pre and post warp

* Separate out adjustments. Add unmask sharpen

* Set sensible defaults. Pre PR Testing

* Fix queue sizes. Move masked.py to lib

* Fix Skip Frames. Fix GUI Config popup

* Sensible queue limits. Add a less resource intensive single processing mode

* Fix predicted mask. Amend smooth box defaults

* Deterministic ordering for video output

* Video to Video convert implemented

* Fixups

- Remove defaults from folders across all stages
- Move match-hist and aca into color adjustments selectable
- Handle crashes properly for pooled processes
- Fix output directory does not exist error when creating a new output folder
- Force output to frames if input is not a video

* Add Color Transfer adjustment method

Wrap info text in GUI plugin configure popup

* Refactor image adjustments. Easier to create plugins

Start implementing config options for video encoding

* Add video encoding config options

Allow video encoding for frames input (must pass in a reference video)
Move video and image output writers to plugins

* Image writers config options

Move scaling to cli
Move draw_transparent to images config
Add config options for cv2 writer
Add Pillow image writer

* Add gif filetype to Pillow. Fix draw transparent for Pillow

* Add Animated GIF writer

standardize opencv/pillow defaults

* [speedup] Pre-encode supported writers in the convert pool (opencv, pillow)

Move scaling to convert pool
Remove dfaker mask
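
A minimal sketch of the pre-encoding split from the [speedup] item above, assuming an OpenCV PNG writer and hypothetical convert_worker / writer functions (not the repository's actual code): the expensive encode runs inside the pooled worker process, and the single writer only streams bytes to disk.

import cv2

def convert_worker(swapped_frame, filename):
    # Heavy encode happens inside the pooled worker process.
    success, encoded = cv2.imencode(".png", swapped_frame)
    if not success:
        raise RuntimeError("Failed to encode {}".format(filename))
    return filename, encoded.tobytes()

def writer(results):
    # The single writer only streams the already-encoded bytes to disk.
    for filename, data in results:
        with open(filename, "wb") as out_file:
            out_file.write(data)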

* Fix default writer

* Bugfixes

* Better custom argparse formatting
2019-04-21 19:19:06 +00:00
torzdf
0d567f1696 Fix on-the-fly conversion 2019-02-25 12:51:50 +00:00
torzdf
cd00859c40
model_refactor (#571) (#572)
* model_refactor (#571)

* original model to new structure

* IAE model to new structure

* OriginalHiRes to new structure

* Fix trainer for different resolutions

* Initial config implementation

* Configparse library added

* improved training data loader

* dfaker model working

* Add logging to training functions

* Non blocking input for cli training

* Add error handling to threads. Add non-mp queues to queue_handler

* Improved Model Building and NNMeta

* refactor lib/models

* training refactor. DFL H128 model Implementation

* Dfaker - use hashes

* Move timelapse. Remove perceptual loss arg

* Update INSTALL.md. Add logger formatting. Update Dfaker training

* DFL h128 partially ported

* Add mask to dfaker (#573)

* Remove old models. Add mask to dfaker

* dfl mask. Make masks selectable in config (#575)

* DFL H128 Mask. Mask type selectable in config.

* remove gan_v2_2

* Creating Input Size config for models

Creating Input Size config for models

Will be used downstream in converters.

Also name change of image_shape to input_shape to clarify (for future models with potentially different output_shapes)

* Add mask loss options to config

* MTCNN options to config.ini. Remove GAN config. Update USAGE.md

* Add sliders for numerical values in GUI

* Add config plugins menu to gui. Validate config

* Only backup model if loss has dropped. Get training working again

* bugfixes

* Standardise loss printing

* GUI idle cpu fixes. Graph loss fix.

* multi-gpu logging bugfix

* Merge branch 'staging' into train_refactor

* backup state file

* Crash protection: Only backup if both total losses have dropped

* Port OriginalHiRes_RC4 to train_refactor (OriginalHiRes)

* Load and save model structure with weights

* Slight code update

* Improve config loader. Add subpixel opt to all models. Config to state

* Show samples... wrong input

* Remove AE topology. Add input/output shapes to State

* Port original_villain (birb/VillainGuy) model to faceswap

* Add plugin info to GUI config pages

* Load input shape from state. IAE Config options.

* Fix transform_kwargs.
Coverage to ratio.
Bugfix mask detection

* Suppress keras userwarnings.
Automate zoom.
Coverage_ratio to model def.

* Consolidation of converters & refactor (#574)

* Consolidation of converters & refactor

Initial Upload of alpha

Items
- consolidate convert_masked & convert_adjust into one converter
- add average color adjust to convert_masked
- allow mask transition blur size to be either a fixed integer of pixels or a fraction of the facial mask size
- allow erosion/dilation size to be either a fixed integer of pixels or a fraction of the facial mask size
- eliminate redundant type conversions to avoid multiple round-off errors
- refactor loops for vectorization/speed
- reorganize for clarity & style changes

TODO
- bug/issues with warping the new face onto a transparent old image...use a cleanup mask for now
- issues with mask border giving black ring at zero erosion .. investigate
- remove GAN ??
- test enlargement factors of umeyama standard face .. match to coverage factor
- make enlargement factor a model parameter
- remove convert_adjusted and referencing code when finished

* Update Convert_Masked.py

default blur size of 2 to match original...
description of enlargement tests
break out matrix scaling into a def

* Enlargement scale as a cli parameter

* Update cli.py

* dynamic interpolation algorithm

Compute x & y scale factors from the affine matrix on the fly by QR decomp.
Choose the interpolation algorithm for the affine warp based on whether each image is upsampled or downsampled
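
A minimal sketch of the dynamic interpolation selection just described, assuming a 2x3 OpenCV affine matrix and a hypothetical helper name (not the repository's actual implementation): QR-decompose the 2x2 linear part, read the scales off the diagonal of R, and pick cubic interpolation for upsampling or area interpolation for downsampling.

import cv2
import numpy as np

def get_scale_and_interpolation(matrix):
    # The absolute diagonal of R gives the x and y scale factors.
    _, r_mat = np.linalg.qr(matrix[:2, :2])
    scale_x, scale_y = abs(r_mat[0, 0]), abs(r_mat[1, 1])
    # Upsample -> cubic, downsample -> area, chosen per image.
    interpolator = cv2.INTER_CUBIC if scale_x * scale_y > 1.0 else cv2.INTER_AREA
    return (scale_x, scale_y), interpolator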

* input size
input size from config

* fix issues with <1.0 erosion

* Update convert.py

* Update Convert_Adjust.py

more work on the way to merging

* Clean up help note on sharpen

* cleanup seamless

* Delete Convert_Adjust.py

* Update umeyama.py

* Update training_data.py

* swapping

* segmentation stub

* changes to convert.str

* Update masked.py

* Backwards compatibility fix for models
Get converter running

* Convert:
Move masks to class.
bugfix blur_size
some linting

* mask fix

* convert fixes

- missing facehull_rect re-added
- coverage to %
- corrected coverage logic
- cleanup of gui option ordering

* Update cli.py

* default for blur

* Update masked.py

* added preliminary low_mem version of OriginalHighRes model plugin

* Code cleanup, minor fixes

* Update masked.py

* Update masked.py

* Add dfl mask to convert

* histogram fix & seamless location

* update

* revert

* bugfix: Load actual configuration in gui

* Standardize nn_blocks

* Update cli.py

* Minor code amends

* Fix Original HiRes model

* Add masks to preview output for mask trainers
refactor trainer.__base.py

* Masked trainers converter support

* convert bugfix

* Bugfix: Converter for masked (dfl/dfaker) trainers

* Additional Losses (#592)

* initial upload

* Delete blur.py

* default initializer = He instead of Glorot (#588)

* Allow kernel_initializer to be overridable

* Add ICNR Initializer option for upscale on all models.

* Hopefully fixes RSoDs with original-highres model plugin

* remove debug line

* Original-HighRes model plugin Red Screen of Death fix, take #2

* Move global options to _base. Rename Villain model

* clipnorm and res block biases

* scale the end of res block

* res block

* dfaker pre-activation res

* OHRES pre-activation

* villain pre-activation

* tabs/space in nn_blocks

* fix for histogram with mask all set to zero

* fix to prevent two networks with same name

* GUI: Wider tooltips. Improve TQDM capture

* Fix regex bug

* Convert padding=48 to ratio of image size

* Add size option to alignments tool extract

* Pass through training image size to convert from model

* Convert: Pull training coverage from model

* convert: coverage, blur and erode to percent

* simplify matrix scaling

* ordering of sliders in train

* Add matrix scaling to utils. Use interpolation in lib.aligner transform

* masked.py Import get_matrix_scaling from utils

* fix circular import

* Update masked.py

* quick fix for matrix scaling

* testing thus for now

* tqdm regex capture bugfix

* Minor amends

* blur size cleanup

* Remove coverage option from convert (Now cascades from model)

* Implement convert for all model types

* Add mask option and coverage option to all existing models

* bugfix for model loading on convert

* debug print removal

* Bugfix for masks in dfl_h128 and iae

* Update preview display. Add preview scaling to cli

* mask notes

* Delete training_data_v2.py

errant file

* training data variables

* Fix timelapse function

* Add new config items to state file for legacy purposes

* Slight GUI tweak

* Raise exception if problem with loaded model

* Add Tensorboard support (Logs stored in model directory)

* ICNR fix

* loss bugfix

* convert bugfix

* Move ini files to config folder. Make TensorBoard optional

* Fix training data for unbalanced inputs/outputs

* Fix config "none" test

* Keep helptext in .ini files when saving config from GUI

* Remove frame_dims from alignments

* Add no-flip and warp-to-landmarks cli options

* Revert OHR to RC4_fix version

* Fix lowmem mode on OHR model

* padding to variable

* Save models in parallel threads

* Speed-up of res_block stability

* Automated Reflection Padding

* Reflect Padding as a training option

Includes auto-calculation of proper padding shapes, input_shapes, output_shapes

Flag included in config now
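
Keras had no built-in reflection padding layer at the time, so one common way to offer it as a training option is a small custom layer; a sketch assuming channels-last inputs (not necessarily how this PR implements it):

import tensorflow as tf
from keras.layers import Layer

class ReflectionPadding2D(Layer):
    # Pad height and width with reflected pixels instead of zeros.
    def __init__(self, padding=1, **kwargs):
        self.padding = padding
        super(ReflectionPadding2D, self).__init__(**kwargs)

    def compute_output_shape(self, input_shape):
        batch, height, width, channels = input_shape
        pad = 2 * self.padding
        return (batch,
                None if height is None else height + pad,
                None if width is None else width + pad,
                channels)

    def call(self, inputs):
        pad = self.padding
        return tf.pad(inputs,
                      [[0, 0], [pad, pad], [pad, pad], [0, 0]],
                      mode="REFLECT")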

* rest of reflect padding

* Move TB logging to cli. Session info to state file

* Add session iterations to state file

* Add recent files to menu. GUI code tidy up

* [GUI] Fix recent file list update issue

* Add correct loss names to TensorBoard logs

* Update live graph to use TensorBoard and remove animation

* Fix analysis tab. GUI optimizations

* Analysis Graph popup to Tensorboard Logs

* [GUI] Bug fix for graphing for models with hyphens in name

* [GUI] Correctly split loss to tabs during training

* [GUI] Add loss type selection to analysis graph

* Fix store command name in recent files. Switch to correct tab on open

* [GUI] Disable training graph when 'no-logs' is selected

* Fix graphing race condition

* rename original_hires model to unbalanced
2019-02-09 18:35:12 +00:00
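One item in this PR adds TensorBoard support with the logs written inside the model directory. A minimal sketch of driving a Keras TensorBoard callback manually around train_on_batch (the model, paths, and logged values below are placeholders, not the repository's exact wiring):

from keras.callbacks import TensorBoard
from keras.layers import Dense
from keras.models import Sequential

# Throwaway model purely to show the wiring; faceswap would use its autoencoder
# and point log_dir inside the model directory.
model = Sequential([Dense(4, input_shape=(8,))])
model.compile(optimizer="adam", loss="mse")

tensorboard = TensorBoard(log_dir="model_dir/logs", write_graph=True)
tensorboard.set_model(model)
tensorboard.on_epoch_end(0, {"loss": 0.1})  # write the loss for iteration 0
tensorboard.on_train_end(None)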
torzdf
3a41cfdcbc
Add Face Hashes to Alignments (#550)
* Convert main scripts to use face hashes

* Alignment tool: Use hashes, add logging, add face rename function

* More logging. Update Manual tool to work with hashing
2018-12-14 12:49:11 +00:00
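A sketch of what a face hash amounts to (exactly which bytes faceswap hashes — encoded file contents vs decoded pixels — is an assumption here): hashing the extracted face image lets the alignments file reference faces by content rather than by renameable filenames.

import hashlib
import cv2

def hash_face(face_image_path):
    # Hash the face image so alignments entries can be matched to extracted
    # faces even after the files are moved or renamed.
    image = cv2.imread(face_image_path)
    return hashlib.sha1(image.tobytes()).hexdigest()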
torzdf
7f53911453
Logging (#541)
* Convert prints to logger. Further logging improvements. Tidy up

* Fix system verbosity. Allow SystemExit

* Fix reload extract bug

* Child Traceback handling

* Safer shutdown procedure

* Add shutdown event to queue manager

* landmarks_as_xy > property. GUI notes + linting. Aligner bugfix

* fix FaceFilter. Enable nFilter when no Filter is supplied

* Fix blurry face filter

* Continue on IO error. Better error handling

* Explicitly print stack trace to crash log

* Windows Multiprocessing bugfix

* Add git info and conda version to crash log

* Windows/Anaconda mp bugfix

* Logging fixes for training
2018-12-04 13:31:49 +00:00
torzdf
3df1371070 Fix face filter 2018-11-25 17:58:08 +00:00
torzdf
2783c27e00
Fix convert-adjust (#531)
* Add dimensions to alignments + refactor

* Add frame_dims + funcs to DetectFaces. Add alignments lib

* Convert Adjust working

* Refactor and tidy up
2018-11-07 20:21:22 +00:00
torzdf
ca63242996
Extraction - Speed improvements (#522) (#523)
* Extraction - Speed improvements (#522)

* Initial Plugin restructure

* Detectors to plugins. Detector speed improvements

* Re-implement dlib aligner, remove models, FAN to TF. Parallel processing

* Update manual, update convert, implement parallel/serial switching

* linting + fix cuda check (setup.py). requirements update keras 2.2.4

* Add extract size option. Fix dlib hog init

* GUI: Increase tooltip width

* Update alignment tool to support new DetectedFace

* Add skip existing faces option

* Fix sort tool to new plugin structure

* remove old align plugin

* fix convert -skip faces bug

* Fix convert skipping no faces frames

* Convert - draw onto transparent layer

* Fix blur threshold bug

* fix skip_faces convert bug

* Fix training
2018-10-27 10:12:08 +01:00
torzdf
3c83a068ff Add option to change dlib-cnn buffer size 2018-09-06 11:16:55 +01:00
torzdf
e76ab16bfe
Effmpeg preview, MTCNN Extractor. HiRes model fixes (#458)
* Add preview functionality to effmpeg. (#435)

* Add preview functionality to effmpeg.

effmpeg tool:
Preview for actions that have a video output now available.
Preview does not work when muxing audio.

* Model json unicode fix1 (#443)

* fixed Windows 10 path error while loading weights

* - fixed TypeError: the JSON object must be str, not 'bytes' with OriginalHighRes Model

* MTCNN Extractor and Extraction refactor (#453)

* implement mtcnn extractor

* mtcnn refactor and vram management changes

* cli arguments update for mtcnn/dlib split

* Add mtcnn models to gitignore

* Change multiprocessing on extract

* GUI changes to handle nargs defaults

* Early exit bugfix (#455)

* Fix extract early termination bug

* Fix extract early exit bug

* Multi face detection bugfix (#456)

* Multi face extraction fix

* Original high res cleanup 1 (#457)

* slight model re-factoring
  - removed excess threading code
  - added random kernel initialization to dense layer

* Slight OriginalHighRes re-factoring an code cleanup
2018-07-03 10:42:58 +02:00
babilio
83c36b9489 Improving performance of extraction. Two main changes to improve the … (#259)
* Improving performance of extraction. Two main changes to the most recent modifications to extract:

1st: FaceLandmarkExtractor would try CNN first, then try HOG. The problem was that this reduced speed by a factor of four for images where the CNN didn't find anything, and most of the time HOG wouldn't find anything either, or it would be a bad extract, so for me it wasn't worth it. With this change you can specify with -D whether to use hog, cnn, or all. 'all' will try CNN then HOG, as FaceLandmarkExtractor was doing; cnn or hog will use just that one detection method.

2nd: A rehaul of the verbose parameter. Warnings when a face is not detected are now only shown if indicated by -v or --verbose, restoring the verbose function to what it once was.

With this change I was able to process 1,000 images every 4 minutes regardless of whether faces were detected. The performance improvement only applies to images with no detected faces, but sets commonly contain many images without clear faces, so it should help others too. The introduction of 'all' also makes it easier to try other detector combinations in the future.
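
A sketch of the -D selection logic described above (cnn_detect / hog_detect are hypothetical callables standing in for the two detectors):

def detect_faces(image, detector="all", cnn_detect=None, hog_detect=None):
    # 'cnn' or 'hog' run a single detector; 'all' tries the CNN first and
    # falls back to HOG only when the CNN finds nothing.
    if detector in ("cnn", "all"):
        faces = cnn_detect(image)
        if faces or detector == "cnn":
            return faces
    return hog_detect(image)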

* Update faces_detect.py

* Update extract.py

* Update FaceLandmarksExtractor.py

* spacing fix
2018-03-11 11:36:50 +01:00
torzdf
0ea743029d Fix for missing default rotation value (#269)
* Image rotator for extract and convert ready for testing

* Revert "Image rotator for extract and convert ready for testing"

This reverts commit bbeb19ef26.

Error in extract code

* add image rotation support to detect more faces

* Update convert.py

Amended to do a single check for rotation rather than checking twice. Performance gain is likely to be marginal to non-existent, but can't hurt.

* Update convert.py

remove type

* cli.py: Only output message on verbose. Convert.py: Only check for rotation amount once

* Changed command line flag to take arguments to ease future development

* Realigning for upstream/Master

* Minor fix

* Change default rotation value from None to 0
2018-03-10 19:46:08 +01:00
torzdf
ee6bc40224 Add image rotation for detecting more faces and dealing with awkward angles (#253)
* Image rotator for extract and convert ready for testing

* Revert "Image rotator for extract and convert ready for testing"

This reverts commit bbeb19ef26.

Error in extract code

* add image rotation support to detect more faces

* Update convert.py

Amended to do a single check for rotation rather than checking twice. Performance gain is likely to be marginal to non-existent, but can't hurt.

* Update convert.py

remove type

* cli.py: Only output message on verbose. Convert.py: Only check for rotation amount once

* Changed command line flag to take arguments to ease future development

* Realigning for upstream/Master

* Minor fix
2018-03-10 13:34:19 +01:00
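A sketch of rotation-assisted detection as described in these commits (detect is a hypothetical detector callable; a full implementation would also enlarge the canvas for 90/270 degree rotations of non-square frames and map the landmarks back by the negative angle):

import cv2

def detect_with_rotation(image, detect, rotations=(0, 90, 180, 270)):
    height, width = image.shape[:2]
    centre = (width / 2, height / 2)
    for angle in rotations:
        matrix = cv2.getRotationMatrix2D(centre, angle, 1.0)
        rotated = cv2.warpAffine(image, matrix, (width, height))
        faces = detect(rotated)
        if faces:
            return faces, angle  # caller rotates landmarks back by -angle
    return [], 0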
iperov
232d9313af port 'face_alignment' from PyTorch to Keras. (#228)
* port 'face_alignment' from PyTorch to Keras. It works 2x faster, but initialization takes 20 secs.

2DFAN-4.h5 and mmod_human_face_detector.dat included in lib\FaceLandmarksExtractor

fixed dlib vs tensorflow conflict: dlib must do op first, then load keras model, otherwise CUDA OOM error
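
A minimal sketch of that initialisation order (file names taken from the commit; in practice the 2DFAN-4 Keras model would need its custom objects registered before load_model succeeds):

import dlib
import numpy as np
from keras.models import load_model

# Let dlib claim its GPU memory first by running a dummy detection...
detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")
detector(np.zeros((64, 64, 3), dtype=np.uint8), 0)

# ...then load the Keras landmarks model; the reverse order is what the
# commit reports as triggering a CUDA OOM error.
aligner = load_model("2DFAN-4.h5")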

if the face location is not found by the CNN, it tries to find it with HOG.

removed this:
-        if face.landmarks == None:
-            print("Warning! landmarks not found. Switching to crop!")
-            return cv2.resize(face.image, (size, size))
because DetectedFace always has landmarks

* removing DetectedFace.landmarks
2018-03-01 11:32:11 +01:00
Gareth Dunstone
29eec42594 Added serializers.
This speeds up conversion 4x on my machine.
2018-02-09 02:44:31 +11:00
Clorr
b3ae6130ed
Misc updates on master before GAN. Added multithreading + mmod face detector (#109)
* Preparing GAN plugin

* Adding multithreading for extract

* Adding support for mmod human face detector

* Adding face filter argument

* Added process number argument to multiprocessing extractor.

Fixed progressbar for multiprocessing.

* Added tiff as image type.
compression artefacts hurt my feelings.

* Cleanup
2018-02-07 13:42:19 +01:00
Clorr
34945cfcd7
Adding models as plugins + Face filtering (#53) + #39 + #43 + #44 + #49 (#61)
* Making Models as plugins

* Do not reload model on each image #39 + Adding FaceFilter #53

* Adding @lukaville PR for #43 and #44 (possibly)

* Training done in a separate thread

* Better log for plugin load

* Adding a prefetch to train.py #49
(Note that we prefetch 2 batches of images, due to the queue behavior)
+ More compact logging with verbose info included

* correction of DirectoryProcessor signature

* adding missing import

* Convert with parallel preprocessing of files

* Added coverage var for trainer

Added a var with comment. Feel free to add it as argument

* corrections

* Modifying preview and normalization of image + correction

* Cleanup
2018-01-31 18:56:44 +01:00
Clorr
bb489f4f51 Adding new plugins (Extract_Align & Convert_Masked) 2018-01-03 10:33:42 +01:00
Clorr
3e2976ab03 Adding plugins 2018-01-03 10:33:39 +01:00
Hidde Jansen
7f7f110933
Command line options for convert.py (#13)
* Rewrite extract.py, make code re-usable for conversion too.

* Add convert.py, adding command line options
2017-12-22 01:52:47 +01:00
Clorr
f542c58a48 Adding face_recognition
Added face_recognition from
https://github.com/ageitgey/face_recognition/

It is based on dlib, but does not have the glitches I had previously!

Also reworked Dockerfile (sticking to Python2 for now)
2017-12-21 23:13:47 +01:00
Clorr
fb9f641ccd Adding a DetectedFace class to retain face data 2017-12-21 21:31:05 +01:00
Hidde Jansen
d0c02a2ba8
PEP8 / Codestyle (#11) 2017-12-20 21:27:14 +01:00
Colin LORRAIN
845a6ae0c6 Creating lib folder 2017-12-19 12:57:56 +01:00
Renamed from faces_detect.py