- PEP8 Fixes
- Remove config for non-NN masks
- Tidy up defaults helptext
- Fix typos in cli.py
- Remove unused imports and functions from _base.py
- Standardize input_size param
- Enable and update documentation
- Change references from `aligner` to `masker`
- Change input_size, output_size and coverage_ratio from kwargs to params
- Move load_aligned to batch input iterator
- Remove unnecessary self.input param
- Add softmax layer append function to KSession
- Remove references to KSession protected objects
- Standardize plugin output into finalize method
- Make masks full frame and add to lib.faces_detect
- Add masks to alignments.json (temporary zipped base64 solution)
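The temporary zipped-base64 storage mentioned above can be sketched with the standard library alone. The function names `encode_mask`/`decode_mask` are illustrative, not the project's actual API:

```python
import base64
import json
import zlib

def encode_mask(mask_bytes):
    # Compress the raw mask bytes, then base64-encode so the result
    # can sit as a plain string inside a JSON document
    return base64.b64encode(zlib.compress(mask_bytes)).decode("ascii")

def decode_mask(encoded):
    # Reverse of encode_mask: base64-decode, then decompress
    return zlib.decompress(base64.b64decode(encoded))

# A dummy single-channel mask round-tripped through a JSON document
mask = bytes([0, 255] * 8)
doc = json.dumps({"mask": encode_mask(mask)})
restored = decode_mask(json.loads(doc)["mask"])
```

Compression pays off here because masks are mostly runs of 0 and 255, which zlib shrinks well before the base64 expansion.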
* Standardize serialization
- Linting
- Standardize serializer use throughout code
- Extend serializer to load and save files
- Always load and save in utf-8
- Create documentation
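The "always load and save in utf-8" rule amounts to passing an explicit encoding at every file boundary. A minimal sketch (`save_json`/`load_json` are hypothetical helper names):

```python
import json
import tempfile

def save_json(path, data):
    # Explicit utf-8 keeps output identical across platforms,
    # regardless of the locale's default encoding
    with open(path, "w", encoding="utf-8") as handle:
        json.dump(data, handle, ensure_ascii=False)

def load_json(path):
    with open(path, "r", encoding="utf-8") as handle:
        return json.load(handle)

# Round-trip a document containing non-ASCII text
with tempfile.TemporaryDirectory() as tmp:
    target = tmp + "/alignments.json"
    save_json(target, {"face": "Ævar", "count": 3})
    restored = load_json(target)
```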
* Move image utils to lib.image
* Add .pylintrc file
* Remove some cv2 pylint ignores
* TrainingData: Load images from disk in batches
* TrainingData: get_landmarks to batch
* TrainingData: transform and flip to batches
* TrainingData: Optimize color augmentation
* TrainingData: Optimize target and random_warp
* TrainingData: Convert _get_closest_match for batching
* TrainingData: Warp To Landmarks optimized
* Save models to threadpoolexecutor
* Move stack_images. Rename ImageManipulation. Add ImageAugmentation docstrings
* Masks: Set dtype and threshold for lib.masks based on input face
* Docstrings and Documentation
* requirements.txt: Pin opencv to 4.1.1 (fixes cv2-dnn error)
* lib.face_detect.DetectedFace: change LandmarksXY to landmarks_xy. Add left, right, top, bottom attributes
* lib.model.session: Session manager for loading models into different graphs (for Nvidia + CPU)
* plugins.extract._base: New parent class for all extract plugins
* plugins.extract.pipeline: Remove multiprocessing. Dynamically limit batch size for Nvidia cards. Remove loglevel input
* S3FD + FAN plugins. Standardize to Keras version for all backends
* Standardize all extract plugins to new threaded codebase
* Documentation: Start implementing NumPy-style docstrings for Sphinx documentation
* Remove s3fd_amd. Change convert OTF to expect DetectedFace object
* faces_detect - clean up and documentation
* Remove PoolProcess
* Migrate manual tool to new extract workflow
* Remove AMD specific extractor code from cli and plugins
* Sort tool to new extract workflow
* Remove multiprocessing from project
* Remove multiprocessing queues from QueueManager
* Remove multiprocessing support from logger
* Move face_filter to new extraction pipeline
* Alignments: rename landmarksXY to landmarks_xy, with legacy handling
* Intercept get_backend for sphinx doc build
* Add Sphinx documentation
* Documentation, PEP8, style and clarity updates
* Update cli.py
* Update _config.py
Remove extra mask and coverage
Mask type as dropdown
* Update training_data.py
Move coverage / LR to global
Cut down on loss description
Style change
Losses working in PR
* Simpler logging
* Legacy update
- Fix filter/nfilter for convert
- Remove existing snapshot folder before creating a new one
- Don't initialize training session until after first save ("NoneType" session fix)
* Training: Add Convolutional Aware Initialization config option
* Centralize Conv2D layer for handling initializer
* Add 'is-output' to NNMeta to indicate that a network is an output of the Model
Tool: Add tool to restore models from backup
Snapshot: Create snapshot based on total iterations rather than session iterations
Models: Move backup/snapshot functions to lib/model
Training: Output average loss since last save at each save iteration
GUI: display_page.py: minor logging update
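The "average loss since last save" behaviour amounts to accumulating per-iteration losses and reporting their mean at each save point, rather than a single noisy per-batch value. A minimal sketch; `LossTracker` is a hypothetical name, not the project's class:

```python
class LossTracker:
    """Accumulate per-iteration losses and report the mean
    since the last save."""

    def __init__(self):
        self._losses = []

    def add(self, loss):
        self._losses.append(loss)

    def average_and_reset(self):
        # Mean over everything seen since the previous save,
        # then clear the buffer for the next interval
        mean = sum(self._losses) / len(self._losses)
        self._losses.clear()
        return mean

tracker = LossTracker()
for loss in (0.30, 0.20, 0.10):  # losses between two save iterations
    tracker.add(loss)
avg = tracker.average_and_reset()
```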
* Separate predict and implement pool
* Add check and raise error to multithreading
Box functions to config. Add crop box option.
* All masks to mask module. Refactor convert masks
Predicted mask passed from model. CLI update
* Intersect box with mask and fixes
* Use raw NN output for convert
Use raw mask for face adjustments. Split adjustments to pre and post warp
* Separate out adjustments. Add unmask sharpen
* Set sensible defaults. Pre PR Testing
* Fix queue sizes. Move masked.py to lib
* Fix Skip Frames. Fix GUI Config popup
* Sensible queue limits. Add a less resource intensive single processing mode
* Fix predicted mask. Amend smooth box defaults
* Deterministic ordering for video output
* Video to Video convert implemented
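Deterministic ordering from a pool of convert workers amounts to buffering out-of-order results until the next sequential frame is available. `write_in_order` below is an illustrative sketch, not the project's code:

```python
def write_in_order(results, write):
    """Re-order (index, frame) pairs arriving from parallel workers
    so frames are written strictly in sequence."""
    pending = {}
    next_index = 0
    for index, frame in results:
        pending[index] = frame
        # Flush every frame that is now contiguous with the output
        while next_index in pending:
            write(pending.pop(next_index))
            next_index += 1

out = []
# Simulate out-of-order completion from a worker pool
write_in_order([(2, "f2"), (0, "f0"), (1, "f1"), (3, "f3")], out.append)
```

Buffering only what has arrived early keeps memory bounded by how far ahead the fastest worker runs.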
* Fixups
- Remove defaults from folders across all stages
- Move match-hist and aca into selectable color adjustments
- Handle crashes properly for pooled processes
- Fix output directory does not exist error when creating a new output folder
- Force output to frames if input is not a video
* Add Color Transfer adjustment method
Wrap info text in GUI plugin configure popup
* Refactor image adjustments. Easier to create plugins
Start implementing config options for video encoding
* Add video encoding config options
Allow video encoding for frames input (must pass in a reference video)
Move video and image output writers to plugins
* Image writers config options
Move scaling to cli
Move draw_transparent to images config
Add config options for cv2 writer
Add Pillow image writer
* Add gif filetype to Pillow. Fix draw transparent for Pillow
* Add Animated GIF writer
Standardize opencv/pillow defaults
* [speedup] Pre-encode supported writers in the convert pool (opencv, pillow)
Move scaling to convert pool
Remove dfaker mask
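Pre-encoding in the pool moves the expensive encode step onto the workers so the writer only streams already-encoded bytes. In this sketch `zlib` stands in for a real image encoder (PNG/JPEG), which is an assumption for illustration:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def encode_frame(frame):
    # Stand-in for image encoding: CPU-heavy work that can run
    # in the pool, leaving the writer with cheap sequential I/O
    return zlib.compress(frame)

frames = [bytes([i]) * 1024 for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    # map() preserves input order, so the writer can consume
    # the pre-encoded frames sequentially
    encoded = list(pool.map(encode_frame, frames))
decoded = [zlib.decompress(blob) for blob in encoded]
```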
* Fix default writer
* Bugfixes
* Better custom argparse formatting
* Mask optimizations
- Remove repetitive np.array() of items that are already np.arrays
- Remove repetitive reshaping
- Perform dilation on single channel only
- Clarify dfl_full mask parts
- Simplify with for loop
- Add more complex mask based on facial components
- Prep for convert masks
* Line width clarity
* Docstrings
* Style consistency
* flake8 fixes
* Update _config.py
Descriptions