Documentation
- Update Usage.md, align.rst and image.rst
lib.image.py
- read_image - Remove hash return, add metadata return
- Remove read_image_hash functions
- Add read_image_meta functions
- Replace encode_image_with_hash with encode_image (to store metadata)
- Add png meta reading and writing functions
- Update Image Loaders/Savers to handle metadata rather than hashes
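A minimal sketch of the idea behind the png meta reading and writing functions above, assuming the metadata travels as JSON inside an iTXt chunk spliced in directly after the IHDR header (the chunk keyword and function signatures are assumptions, not the exact lib.image API):

```python
import json
import struct
import zlib

import cv2


def pack_itxt_chunk(metadata):
    """ Pack a metadata dict into a PNG iTXt chunk (keyword is an assumption) """
    keyword = b"faceswap"
    text = json.dumps(metadata).encode("utf-8")
    # iTXt layout: keyword, null, compression flag, compression method,
    # null-terminated language tag, null-terminated translated keyword, text
    data = keyword + b"\0\0\0\0\0" + text
    crc = zlib.crc32(b"iTXt" + data) & 0xFFFFFFFF
    return struct.pack(">I", len(data)) + b"iTXt" + data + struct.pack(">I", crc)


def encode_image(image, extension=".png", metadata=None):
    """ Encode an image with cv2 and splice the metadata into the PNG header """
    encoded = cv2.imencode(extension, image)[1].tobytes()
    if metadata and extension == ".png":
        # 8 byte PNG signature + 25 byte IHDR chunk = offset 33
        encoded = encoded[:33] + pack_itxt_chunk(metadata) + encoded[33:]
    return encoded


def read_image_meta(filename):
    """ Pull the JSON metadata back out of a saved PNG (simplistic scan) """
    with open(filename, "rb") as png:
        contents = png.read()
    marker = contents.find(b"iTXt")
    if marker < 4:
        return {}
    length = struct.unpack(">I", contents[marker - 4:marker])[0]
    _, remainder = contents[marker + 4:marker + 4 + length].split(b"\0", 1)
    return json.loads(remainder[4:].decode("utf-8"))
```

A production version would walk the chunk structure rather than scanning for the first iTXt marker, but the effect is the same: each extracted face carries its own alignment metadata, so faces no longer need to be matched back to the alignments file by hash.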
lib.training_data
- Naming updates to remove references to hashes
lib.align.Alignments
- Add versioning notes
- Increment alignments version to 2.1
- Deprecate hashing lookup functions
- Replace filter_hashes with filter_faces
lib.align.detected_face
- DetectedFace
- Remove hash property
- Add png header data serializing/deserializing functions
- Mask
- Add png header data serializing/deserializing functions
- Add update_legacy_png_header function to update png meta data
lib.cli.args - Deprecate alignments files for training
plugins.train.trainer
- Update alignments/mask code to read png header data
scripts.convert
- Aligned images folder - read data from png headers
scripts.extract
- Write png header information and no longer store hash of face
tools.alignments
- Remove leftover-faces, merge and update-hashes jobs
- Update jobs to use png meta data rather than hashes
tools.manual
- Update extract code to output png meta data and stop storing hashes
- Perform check on launch that tool is not pointing at a faces folder
tools.mask
- Update to use png meta data
tools.sort
- Update to use png meta data
* Extract
- Implement aligner re-feeding
- Add extract type to pipeline.ExtractMedia
- Add pose annotation to debug
* Convert
- Implement centering
- Remove usage of feed and reference face properties
- Remove distributed option from convert
- Force update of alignments file on legacy receive
* Train
- Resize preview image to model output size
- Force legacy centering if centering does not exist in model's state file
- Enable training on legacy face sets
* Alignments Tool
- Update draw to include head/pose
- Remove DFL drop + linting
- Remove remove-frames job
- Remove align-eyes option
- Update legacy masks to new extract type
- Exit if attempting to merge version 1.0 alignments files with version 2.0 alignments files
- Re-generate thumbnails on legacy upgrade
* Mask Tool
- Update for new extract + bugfix full frame
* Manual Tool
- Update to new extraction method
- Disable legacy alignments
- Extract box bugfix
- Extract faces - size to 512 and center on head
* Preview Tool
- Display based on model centering
* Sort Tool
- Use alignments for sort by face
* lib.aligner
- Add Pose Class
- Add AlignedFace Class
- center _MEAN_FACE on x
- Add meta information with versioning to alignments file
- lib.aligner.get_align_matrix to use landmarks not face
- Refactor aligned faces in lib.faces_detect
* lib.logger
- larger file log padding
* lib.config
- Fix global changeable_items
* lib.face_filter
- Use new extracted face images
* lib.image
- bump thumbnail default size to 96px
* Core Updates
- Remove lib.utils.keras_backend_quiet and replace with get_backend() where relevant
- Document lib.gpu_stats and lib.sys_info
- Remove call to GPUStats.is_plaidml from convert and replace with get_backend()
* lib.gui.menu - typo fix
* Update Dependencies
Bump Tensorflow Version Check
* Port extraction to tf2
* Add custom import finder for loading Keras or tf.keras depending on backend
* Add `tensorflow` to KerasFinder search path
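The finder lets the same `import keras` statements resolve to `tf.keras` when running on the Tensorflow backend. A rough sketch of how such a meta-path finder can be wired up (illustrative only; the project's KerasFinder differs in detail):

```python
import importlib
import sys
from importlib.abc import Loader, MetaPathFinder
from importlib.machinery import ModuleSpec


class KerasFinder(MetaPathFinder, Loader):
    """ Redirect ``import keras`` (and its submodules) to ``tensorflow.keras``. Sketch only """

    def find_spec(self, fullname, path=None, target=None):
        if fullname != "keras" and not fullname.startswith("keras."):
            return None  # not ours, let the normal finders handle it
        return ModuleSpec(fullname, self)

    def create_module(self, spec):
        # Import the tensorflow.keras equivalent and expose it under the requested name
        return importlib.import_module("tensorflow." + spec.name)

    def exec_module(self, module):
        pass  # already initialised by Tensorflow's own import machinery


sys.meta_path.insert(0, KerasFinder())
from keras.layers import Dense  # now resolves to tensorflow.keras.layers.Dense
```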
* Basic TF2 training running
* model.initializers - docstring fix
* Fix and pass tests for tf2
* Replace Keras backend tests with faceswap backend tests
* Initial optimizers update
* Monkey patch tf.keras optimizer
* Remove custom Adam Optimizers and Memory Saving Gradients
* Remove multi-gpu option. Add Distribution to cli
* plugins.train.model._base: Add Mirror, Central and Default distribution strategies
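Roughly, the three options map onto `tf.distribute` strategies along these lines (the option names and helper below are assumptions, not the plugin's exact code):

```python
import tensorflow as tf


def get_strategy(strategy="default"):
    """ Map a cli/config choice onto a tf.distribute strategy (sketch) """
    if strategy == "mirrored":
        # Replicate the model across all visible GPUs
        return tf.distribute.MirroredStrategy()
    if strategy == "central":
        # Keep variables on the CPU, compute on the GPUs
        return tf.distribute.experimental.CentralStorageStrategy()
    # Default: whatever strategy is already active (usually the no-op default)
    return tf.distribute.get_strategy()


with get_strategy("mirrored").scope():
    # Anything built inside the scope is distributed accordingly
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="adam", loss="mse")
```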
* Update tensorboard kwargs for tf2
* Penalized Loss - Fix for TF2 and AMD
* Fix syntax for tf2.1
* requirements typo fix
* Explicit None for clipnorm if using a distribution strategy
* Fix penalized loss for distribution strategies
* Update Dlight
* typo fix
* Pin to TF2.2
* setup.py - Install tensorflow from pip if not available in Conda
* Add reduction options and set default for mirrored distribution strategy
* Explicitly use default strategy rather than nullcontext
* lib.model.backup_restore documentation
* Remove mirrored strategy reduction method and default based on OS
* Initial restructure - training
* Remove PingPong
Start model.base refactor
* Model saving and resuming enabled
* More tidying up of model.base
* Enable backup and snapshotting
* Re-enable state file
Remove loss names from state file
Fix print loss function
Set snapshot iterations correctly
* Revert original model to Keras Model structure rather than custom layer
Output full model and sub model summary
Change NNBlocks to callables rather than custom keras layers
* Apply custom Conv2D layer
* Finalize NNBlock restructure
Update Dfaker blocks
* Fix reloading model under a different distribution strategy
* Pass command line arguments through to trainer
* Remove training_opts from model and reference params directly
* Tidy up model __init__
* Re-enable tensorboard logging
Suppress "Model Not Compiled" warning
* Fix timelapse
* lib.model.nnblocks - Bugfix residual block
Port dfaker
bugfix original
* dfl-h128 ported
* DFL SAE ported
* IAE Ported
* dlight ported
* port lightweight
* realface ported
* unbalanced ported
* villain ported
* lib.cli.args - Update Batchsize + move allow_growth to config
* Remove output shape definition
Get image sizes per side rather than globally
* Strip mask input from encoder
* Fix learn mask and output learned mask to preview
* Trigger Allow Growth prior to setting strategy
* Fix GUI Graphing
* GUI - Display batchsize correctly + fix training graphs
* Fix penalized loss
* Enable mixed precision training
* Update analysis displayed batch to match input
* Penalized Loss - Multi-GPU Fix
* Fix all losses for TF2
* Fix Reflect Padding
* Allow different input size for each side of the model
* Fix conv-aware initialization on reload
* Switch allow_growth order
* Move mixed_precision to cli
* Remove distribution strategies
* Compile penalized loss sub-function into LossContainer
* Bump default save interval to 250
Generate preview on first iteration but don't save
Fix iterations to start at 1 instead of 0
Remove training deprecation warnings
Bump some scripts.train loglevels
* Add ability to refresh preview on demand on pop-up window
* Enable refresh of training preview from GUI
* Fix Convert
Debug logging in Initializers
* Fix Preview Tool
* Update Legacy TF1 weights to TF2
Catch stats error on loading stats with missing logs
* lib.gui.popup_configure - Make more responsive + document
* Multiple Outputs supported in trainer
Original Model - Mask output bugfix
* Make universal inference model for convert
Remove scaling from penalized mask loss (now handled at input to y_true)
* Fix inference model to work properly with all models
* Fix multi-scale output for convert
* Fix clipnorm issue with distribution strategies
Edit error message on OOM
* Update plaidml losses
* Add missing file
* Disable gmsd loss for plaidml
* PlaidML - Basic training working
* clipnorm rewriting for mixed-precision
* Inference model creation bugfixes
* Remove debug code
* Bugfix: Default clipnorm to 1.0
* Remove all mask inputs from training code
* Remove mask inputs from convert
* GUI - Analysis Tab - Docstrings
* Fix rate in totals row
* lib.gui - Only update display pages if they have focus
* Save the model on first iteration
* plaidml - Fix SSIM loss with penalized loss
* tools.alignments - Remove manual and fix jobs
* GUI - Remove case formatting on help text
* gui MultiSelect custom widget - Set default values on init
* vgg_face2 - Move to plugins.extract.recognition and use plugins._base base class
cli - Add global GPU Exclude Option
tools.sort - Use global GPU Exclude option for backend
lib.model.session - Exclude all GPUs when running in CPU mode
lib.cli.launcher - Set backend to CPU mode when all GPUs excluded
* Cascade excluded devices to GPU Stats
* Explicit GPU selection for Train and Convert
* Reduce Tensorflow Min GPU Multiprocessor Count to 4
* remove compat.v1 code from extract
* Force TF to skip mixed precision compatibility check if GPUs have been filtered
* Add notes to config for non-working AMD losses
* Raise error if forcing extract to CPU mode
* Fix loading of legacy dfl-sae weights + dfl-sae typo fix
* Remove unused requirements
Update sphinx requirements
Fix broken rst file locations
* docs: lib.gui.display
* clipnorm AMD condition check
* documentation - gui.display_analysis
* Documentation - gui.popup_configure
* Documentation - lib.logger
* Documentation - lib.model.initializers
* Documentation - lib.model.layers
* Documentation - lib.model.losses
* Documentation - lib.model.nn_blocks
* Documentation - lib.model.normalization
* Documentation - lib.model.session
* Documentation - lib.plaidml_stats
* Documentation: lib.training_data
* Documentation: lib.utils
* Documentation: plugins.train.model._base
* GUI Stats: prevent stats from using GPU
* Documentation - Original Model
* Documentation: plugins.model.trainer._base
* linting
* unit tests: initializers + losses
* unit tests: nn_blocks
* Bugfix - Exclude GPU devices in train rather than include them
* Enable Exclude-Gpus in Extract
* Enable exclude gpus in tools
* Disallow multiple plugin types in a single model folder
* Automatically add exclude_gpus argument for CPU backends
* CPU backend fixes
* Relax optimizer test threshold
* Default Train settings - Set mask to Extended
* Update Extractor cli help text
Update to Python 3.8
* Fix FAN to run on CPU
* lib.plaidml_tools - typo fix
* Linux installer - check for curl
* Linux installer - typo fix
* Clarification of Phases
When multi-processing, the status bar description lists the phase as only Detect. This is misleading; there should likely be a reference to the multiple simultaneous phases being run.
* Simplify Communication
* 1st Round update for Python 3.7, TF1.15, Keras2.3
Move Tensorflow logging verbosity prior to first tensorflow import
Keras Optimizers and nn_block update
lib.logger - Change tf deprecation messages from WARNING to DEBUG
Raise Tensorflow Max version check to 1.15
Update requirements and conda check for python 3.7+
Update install scripts, travis and documentation to Python 3.7
* Revert Keras to 2.2.4
* Smart Masks - Training
- Reinstate smart mask training code
- Reinstate mask_type back to model.config
- Change 'replicate_input_mask' to 'learn_mask'
- Add learn mask option
- Add mask loading from alignments to plugins.train.trainer
- Add mask_blur and mask threshold options (see the sketch after this list)
- _base.py - Pass mask options through training_opts dict
- plugins.train.model - check for mask_type not None for learn_mask and penalized_mask_loss
- Limit alignments loading to just those faces that appear in the training folder
- Raise error if not all training images have an alignment, and alignment file is required
- lib.training_data - Mask generation code
- lib.faces_detect - cv2 dimension stripping bugfix
- Remove cv2 linting code
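A rough sketch of what the mask_blur and mask threshold options above do to a training mask before it reaches the loss (the kernel/threshold semantics are assumptions, not the exact config behaviour):

```python
import cv2
import numpy as np


def preprocess_mask(mask, blur_kernel=3, threshold=4):
    """ Soften the mask edge and clip near-binary noise (sketch).

    mask: uint8 array in the range 0-255
    """
    if blur_kernel > 1:
        kernel = blur_kernel | 1  # kernel size must be odd for GaussianBlur
        mask = cv2.GaussianBlur(mask, (kernel, kernel), 0)
    if threshold > 0:
        # Snap almost-black/almost-white pixels to pure black/white
        mask = np.where(mask < threshold, 0, mask)
        mask = np.where(mask > 255 - threshold, 255, mask)
    return mask.astype("uint8")
```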
* Update mask helptext in cli.py
* Fix Warp to Landmarks
Remove SHA1 hashing from training data
* Update mask training config
* Capture missing masks at training init
* lib.image.read_image_batch - Return filenames with batch for ordering
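Because the batch is read on a thread pool, results can complete out of order, so the filenames are returned alongside the images to keep the pairing explicit. A minimal sketch of the idea (not the exact lib.image signature):

```python
from concurrent import futures

import cv2
import numpy as np


def read_image_batch(filenames):
    """ Load a batch of same-sized images in parallel threads (sketch) """
    batch_files, batch_images = [], []
    with futures.ThreadPoolExecutor() as executor:
        jobs = {executor.submit(cv2.imread, fname): fname for fname in filenames}
        for job in futures.as_completed(jobs):
            # Completion order is arbitrary, so track the filename per result
            batch_files.append(jobs[job])
            batch_images.append(job.result())
    return batch_files, np.array(batch_images)
```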
* scripts.train - Documentation
* plugins.train.trainer - documentation
* Ensure backward compatibility.
Fix convert for new predicted masks
* Update removed masks to components for legacy models.
Limit queue sizes to reduce RAM usage
Rename lib.image.BackgroundIO to ImageIO
Create separate ImagesLoader and ImagesSaver classes
Load/Save images from centralized lib.image.ImageIO
scripts.extract documentation
- Add new serializers (npy + compressed)
- Remove Serializer option from cli
- Revert get_aligned call in scripts/extract
- Default alignments to compressed
- Size masks to 128px and compress
- Remove mask thresholding/blur from generation code
- Add Mask class to lib/faces_detect
- Revert debug landmarks to aligned face
- Revert non-extraction code to staging version
* Move image utils to lib.image
* Add .pylintrc file
* Remove some cv2 pylint ignores
* TrainingData: Load images from disk in batches
* TrainingData: get_landmarks to batch
* TrainingData: transform and flip to batches
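Moving the augmentation to whole batches allows single vectorised numpy operations; for example, a random horizontal flip over a batch could look like this (a sketch, not the project's exact code):

```python
import numpy as np


def random_flip(batch, flip_chance=0.5):
    """ Horizontally flip a random subset of a batch of images in place (sketch).

    batch: array of shape (batch_size, height, width, channels)
    """
    do_flip = np.random.rand(batch.shape[0]) < flip_chance
    batch[do_flip] = batch[do_flip, :, ::-1]
    return batch
```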
* TrainingData: Optimize color augmentation
* TrainingData: Optimize target and random_warp
* TrainingData - Convert _get_closest_match for batching
* TrainingData: Warp To Landmarks optimized
* Save models to threadpoolexecutor
* Move stack_images, Rename ImageManipulation. ImageAugmentation Docstrings
* Masks: Set dtype and threshold for lib.masks based on input face
* Docstrings and Documentation
* requirements.txt: - Pin opencv to 4.1.1 (fixes cv2-dnn error)
* lib.face_detect.DetectedFace: change LandmarksXY to landmarks_xy. Add left, right, top, bottom attributes
* lib.model.session: Session manager for loading models into different graphs (for Nvidia + CPU)
* plugins.extract._base: New parent class for all extract plugins
* plugins.extract.pipeline. Remove MultiProcessing. Dynamically limit batchsize for Nvidia cards. Remove loglevel input
* S3FD + FAN plugins. Standardise to Keras version for all backends
* Standardize all extract plugins to new threaded codebase
* Documentation. Start implementing Numpy style docstrings for Sphinx Documentation
* Remove s3fd_amd. Change convert OTF to expect DetectedFace object
* faces_detect - clean up and documentation
* Remove PoolProcess
* Migrate manual tool to new extract workflow
* Remove AMD specific extractor code from cli and plugins
* Sort tool to new extract workflow
* Remove multiprocessing from project
* Remove multiprocessing queues from QueueManager
* Remove multiprocessing support from logger
* Move face_filter to new extraction pipeline
* Alignments landmarksXY > landmarks_xy and legacy handling
* Intercept get_backend for sphinx doc build
* Add Sphinx documentation
Bugfix: Fully disable keypress monitor for GUI
Bugfix: Preview - Handle missing alignments file
Config changes:
- Separate plugin defaults into their own files
- Move mask_type to global training config
- Add ability to pass in custom config files
Bug in extract.py: because of missing parentheses in a modulo operation, a condition on save-interval was never met, so the alignments.json file was never generated when starting the extract script with the -si option.
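For illustration, the operator-precedence problem looks roughly like this (variable names are hypothetical):

```python
save_interval = 100
frames_skipped = 3

for frame_no in range(1, 301):
    # Buggy: '%' binds tighter than '-', so this reads as
    # frame_no - (frames_skipped % save_interval) == 0, which is only true
    # when frame_no == frames_skipped, so the periodic save almost never fires
    buggy = frame_no - frames_skipped % save_interval == 0
    # Fixed: parenthesise the subtraction before applying the modulo
    fixed = (frame_no - frames_skipped) % save_interval == 0
    if buggy or fixed:
        print(frame_no, "buggy" if buggy else "fixed")
```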
* Implement extraction pipeline
* Face filter to vgg_face. Resume partial model downloads
* On-the-fly conversion to extraction pipeline
* Move git model ids from get_model to model definition
* model_refactor (#571)
* original model to new structure
* IAE model to new structure
* OriginalHiRes to new structure
* Fix trainer for different resolutions
* Initial config implementation
* Configparse library added
* improved training data loader
* dfaker model working
* Add logging to training functions
* Non blocking input for cli training
* Add error handling to threads. Add non-mp queues to queue_handler
* Improved Model Building and NNMeta
* refactor lib/models
* training refactor. DFL H128 model Implementation
* Dfaker - use hashes
* Move timelapse. Remove perceptual loss arg
* Update INSTALL.md. Add logger formatting. Update Dfaker training
* DFL h128 partially ported
* Add mask to dfaker (#573)
* Remove old models. Add mask to dfaker
* dfl mask. Make masks selectable in config (#575)
* DFL H128 Mask. Mask type selectable in config.
* remove gan_v2_2
* Creating Input Size config for models
Will be used downstream in converters.
Also rename image_shape to input_shape for clarity (for future models with potentially different output_shapes)
* Add mask loss options to config
* MTCNN options to config.ini. Remove GAN config. Update USAGE.md
* Add sliders for numerical values in GUI
* Add config plugins menu to gui. Validate config
* Only backup model if loss has dropped. Get training working again
* bugfixes
* Standardise loss printing
* GUI idle cpu fixes. Graph loss fix.
* Multi-GPU logging bugfix
* Merge branch 'staging' into train_refactor
* backup state file
* Crash protection: Only backup if both total losses have dropped
* Port OriginalHiRes_RC4 to train_refactor (OriginalHiRes)
* Load and save model structure with weights
* Slight code update
* Improve config loader. Add subpixel opt to all models. Config to state
* Show samples... wrong input
* Remove AE topology. Add input/output shapes to State
* Port original_villain (birb/VillainGuy) model to faceswap
* Add plugin info to GUI config pages
* Load input shape from state. IAE Config options.
* Fix transform_kwargs.
Coverage to ratio.
Bugfix mask detection
* Suppress keras userwarnings.
Automate zoom.
Coverage_ratio to model def.
* Consolidation of converters & refactor (#574)
* Consolidation of converters & refactor
Initial Upload of alpha
Items
- consolidate convert_masked & convert_adjust into one converter
- add average color adjust to convert_masked
- allow mask transition blur size to be a fixed integer of pixels or a fraction of the facial mask size
- allow erosion/dilation size to be a fixed integer of pixels or a fraction of the facial mask size
- eliminate redundant type conversions to avoid multiple round-off errors
- refactor loops for vectorization/speed
- reorganize for clarity & style changes
TODO
- bug/issues with warping the new face onto a transparent old image...use a cleanup mask for now
- issues with mask border giving black ring at zero erosion .. investigate
- remove GAN ??
- test enlargement factors of umeyama standard face .. match to coverage factor
- make enlargement factor a model parameter
- remove convert_adjusted and referencing code when finished
* Update Convert_Masked.py
default blur size of 2 to match original...
description of enlargement tests
breakout matrix scaling into def
* Enlargement scale as a cli parameter
* Update cli.py
* Dynamic interpolation algorithm
Compute x & y scale factors from the affine matrix on the fly by QR decomp.
Choose the interpolation algorithm for the affine warp based on whether each image is upsampled or downsampled
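A sketch of the get_matrix_scaling helper this describes (the name later moves into utils, per the entries further down; treat the exact maths and signature as assumed):

```python
import cv2
import numpy as np


def get_matrix_scaling(matrix):
    """ Pick (warp, inverse-warp) interpolators from a 2x3 cv2 affine matrix (sketch) """
    # Scale factors of the linear 2x2 part, as recovered by a QR-style decomposition
    x_scale = np.sqrt(matrix[0, 0] ** 2 + matrix[0, 1] ** 2)
    y_scale = (matrix[0, 0] * matrix[1, 1] - matrix[0, 1] * matrix[1, 0]) / x_scale
    avg_scale = (x_scale + y_scale) * 0.5
    if avg_scale >= 1.0:
        # Upsampling: cubic going in, area coming back out
        return cv2.INTER_CUBIC, cv2.INTER_AREA
    # Downsampling: area going in, cubic coming back out
    return cv2.INTER_AREA, cv2.INTER_CUBIC
```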
* input size
input size from config
* fix issues with <1.0 erosion
* Update convert.py
* Update Convert_Adjust.py
more work on the way to merging
* Clean up help note on sharpen
* cleanup seamless
* Delete Convert_Adjust.py
* Update umeyama.py
* Update training_data.py
* swapping
* segmentation stub
* changes to convert.str
* Update masked.py
* Backwards compatibility fix for models
Get converter running
* Convert:
Move masks to class.
bugfix blur_size
some linting
* mask fix
* convert fixes
- missing facehull_rect re-added
- coverage to %
- corrected coverage logic
- cleanup of gui option ordering
* Update cli.py
* default for blur
* Update masked.py
* added preliminary low_mem version of OriginalHighRes model plugin
* Code cleanup, minor fixes
* Update masked.py
* Update masked.py
* Add dfl mask to convert
* histogram fix & seamless location
* update
* revert
* bugfix: Load actual configuration in gui
* Standardize nn_blocks
* Update cli.py
* Minor code amends
* Fix Original HiRes model
* Add masks to preview output for mask trainers
refactor trainer.__base.py
* Masked trainers converter support
* convert bugfix
* Bugfix: Converter for masked (dfl/dfaker) trainers
* Additional Losses (#592)
* initial upload
* Delete blur.py
* default initializer = He instead of Glorot (#588)
* Allow kernel_initializer to be overridable
* Add ICNR Initializer option for upscale on all models.
* Hopefully fixes RSoDs with original-highres model plugin
* remove debug line
* Original-HighRes model plugin Red Screen of Death fix, take #2
* Move global options to _base. Rename Villain model
* clipnorm and res block biases
* scale the end of res block
* res block
* dfaker pre-activation res
* OHRES pre-activation
* villain pre-activation
* tabs/space in nn_blocks
* fix for histogram with mask all set to zero
* fix to prevent two networks with same name
* GUI: Wider tooltips. Improve TQDM capture
* Fix regex bug
* Convert padding=48 to ratio of image size
* Add size option to alignments tool extract
* Pass through training image size to convert from model
* Convert: Pull training coverage from model
* convert: coverage, blur and erode to percent
* simplify matrix scaling
* ordering of sliders in train
* Add matrix scaling to utils. Use interpolation in lib.aligner transform
* masked.py Import get_matrix_scaling from utils
* fix circular import
* Update masked.py
* quick fix for matrix scaling
* testing this for now
* tqdm regex capture bugfix
* Minor amends
* blur size cleanup
* Remove coverage option from convert (Now cascades from model)
* Implement convert for all model types
* Add mask option and coverage option to all existing models
* bugfix for model loading on convert
* debug print removal
* Bugfix for masks in dfl_h128 and iae
* Update preview display. Add preview scaling to cli
* mask notes
* Delete training_data_v2.py
errant file
* training data variables
* Fix timelapse function
* Add new config items to state file for legacy purposes
* Slight GUI tweak
* Raise exception if problem with loaded model
* Add Tensorboard support (Logs stored in model directory)
* ICNR fix
* loss bugfix
* convert bugfix
* Move ini files to config folder. Make TensorBoard optional
* Fix training data for unbalanced inputs/outputs
* Fix config "none" test
* Keep helptext in .ini files when saving config from GUI
* Remove frame_dims from alignments
* Add no-flip and warp-to-landmarks cli options
* Revert OHR to RC4_fix version
* Fix lowmem mode on OHR model
* padding to variable
* Save models in parallel threads
* Speed-up of res_block stability
* Automated Reflection Padding
* Reflect Padding as a training option
Includes auto-calculation of proper padding shapes, input_shapes, output_shapes
Flag included in config now
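A minimal sketch of a reflection padding layer of the kind this option enables (written against tf.keras for brevity; the project's own layer and its automatic padding-shape calculation differ):

```python
import tensorflow as tf
from tensorflow.keras.layers import Layer


class ReflectionPadding2D(Layer):
    """ Pad height/width by reflecting border pixels instead of zero-padding (sketch) """

    def __init__(self, padding=1, **kwargs):
        self.padding = padding
        super().__init__(**kwargs)

    def call(self, inputs):
        pad = self.padding
        return tf.pad(inputs, [[0, 0], [pad, pad], [pad, pad], [0, 0]], mode="REFLECT")


padded = ReflectionPadding2D(padding=1)(tf.zeros((1, 64, 64, 3)))  # -> shape (1, 66, 66, 3)
```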
* rest of reflect padding
* Move TB logging to cli. Session info to state file
* Add session iterations to state file
* Add recent files to menu. GUI code tidy up
* [GUI] Fix recent file list update issue
* Add correct loss names to TensorBoard logs
* Update live graph to use TensorBoard and remove animation
* Fix analysis tab. GUI optimizations
* Analysis Graph popup to Tensorboard Logs
* [GUI] Bug fix for graphing for models with hyphens in name
* [GUI] Correctly split loss to tabs during training
* [GUI] Add loss type selection to analysis graph
* Fix store command name in recent files. Switch to correct tab on open
* [GUI] Disable training graph when 'no-logs' is selected
* Fix graphing race condition
* rename original_hires model to unbalanced
* Add ability to extract from and convert from a video file
* Update cli helptext. Add filebrowser button for GUI input
* Add video support to Alignments Tool