- Lint simple_tests.py
- Only reformat the alignments file if it exists, otherwise change the filename
- Update legacy alignments to new format at all stages
- faces_detect.Mask.from_dict - logging format fix
- convert.py - Fix OTF (on-the-fly conversion) for new pipeline
- cli.py - Add note that masks are not used. Revert convert masks
- faces_detect.py - Revert non-extract code
- Add .p and .pickle extensions for serializer
- plugins/extract revert some changes
- scripts/fsmedia - Revert code changes
- Pipeline - cleanup
- Consistent alpha channel stripping (fixes single-process)
- Store landmarks as numpy array
- Code attribution
- Normalize feed face and reference face to 0.0 - 1.0 in convert (see the sketch after this list)
- Lock in mask VRAM size
- Add documentation to plugin_loader
- Update alignments tool to work with new format
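A minimal illustration of the 0.0 - 1.0 normalization mentioned above, assuming 8-bit BGR face images; the helper name is hypothetical and not the actual convert code:

```python
import numpy as np


def normalize_face(face):
    """Scale an 8-bit face image into the 0.0 - 1.0 range (illustrative helper only)."""
    return face.astype("float32") / 255.0


# The feed face (model input) and the reference face (used for compositing)
# are normalized identically so they share one value range.
feed_face = normalize_face(np.zeros((256, 256, 3), dtype="uint8"))
reference_face = normalize_face(np.zeros((256, 256, 3), dtype="uint8"))
```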
* requirements.txt: Pin opencv to 4.1.1 (fixes cv2-dnn error)
* lib.faces_detect.DetectedFace: change LandmarksXY to landmarks_xy. Add left, right, top, bottom attributes
* lib.model.session: Session manager for loading models into different graphs (for Nvidia + CPU) - see the sketch after this list
* plugins.extract._base: New parent class for all extract plugins
* plugins.extract.pipeline: Remove multiprocessing. Dynamically limit batch size for Nvidia cards. Remove loglevel input
* S3FD + FAN plugins: Standardise to Keras version for all backends
* Standardize all extract plugins to new threaded codebase
* Documentation: Start implementing NumPy-style docstrings for Sphinx documentation
* Remove s3fd_amd. Change convert OTF to expect DetectedFace object
* faces_detect - clean up and documentation
* Remove PoolProcess
* Migrate manual tool to new extract workflow
* Remove AMD specific extractor code from cli and plugins
* Sort tool to new extract workflow
* Remove multiprocessing from project
* Remove multiprocessing queues from QueueManager
* Remove multiprocessing support from logger
* Move face_filter to new extraction pipeline
* Alignments landmarksXY > landmarks_xy and legacy handling
* Intercept get_backend for sphinx doc build
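A minimal sketch of the per-model-graph idea behind lib.model.session, assuming TensorFlow 1.x with standalone Keras; the class and method names below are illustrative, not the actual API. Each model is loaded into its own tf.Graph and tf.Session, so a CPU-pinned model and a GPU model never share graph state:

```python
import tensorflow as tf
from keras import backend as K
from keras.models import load_model


class IsolatedSession:
    """Illustrative only: load one Keras model into its own graph and session."""

    def __init__(self, cpu_only=False, allow_growth=True):
        self.graph = tf.Graph()
        config = tf.ConfigProto(device_count={"GPU": 0} if cpu_only else {})
        config.gpu_options.allow_growth = allow_growth
        self.session = tf.Session(graph=self.graph, config=config)
        self.model = None

    def load(self, model_path):
        # Everything Keras builds for this model stays inside self.graph
        with self.graph.as_default():
            K.set_session(self.session)
            self.model = load_model(model_path)
        return self.model

    def predict(self, feed):
        with self.graph.as_default():
            K.set_session(self.session)
            return self.model.predict(feed)
```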
Add Sphinx documentation
Bugfix: Fully disable keypress monitor for GUI
Bugfix: Preview - Handle missing alignments file
Config changes:
- Separate plugin defaults into their own files
- Move mask_type to global training config
- Add ability to pass in custom config files
sysinfo.py: cuDNN check fix for unsupported OSes
cli.py: More useful information
scripts/convert.py: Set processes to 1 if no images are passed in
cv2_dnn aligner: Fix bounding box for negative values
scripts/convert.py: Fix for incorrect frame ranges passed in
* Implement extraction pipeline
* Face filter to vgg_face. Resume partial model downloads
* On-the-fly conversion to extraction pipeline
* Move git model ids from get_model to model definition
Remove Dlib Aligner
Add CV2-DNN Aligner replacement for dlib
Remove aligner models from repo
Add download function to pull models from remote location
Minor FAN fixups
* Fixed several bugs that prevented sort algorithms from working in the staging branch.
face-cnn, face-cnn-dissim and face-yaw now working again
blur, face, hist, hist-dissim already worked before
face-dissim still has an error
* Applied requested changes from torzdf
- scale in the detector was used as a member variable and shared between multiple threads. However, scale depends heavily on the concrete image: if a thread switch occurs between the calculation of scale, the actual detection and the post-processing, then random behaviour and exceptions can occur. This is especially likely to cause problems with input images of varying sizes.
- Wrong scale factor calculation:
The scale factor was calculated as scale = target / source,
where target and source are the total number of pixels in the image (= width * height).
However, it was later applied to the individual dimensions (width and height) independently,
e.g.: dims = (int(width * scale), int(height * scale))
or: int(face.left() / scale)
The scaling factor was therefore often too small, and detection for very large input images was performed on very small intermediate images.
The calculation has been changed to:
scale = sqrt(target / source)
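A short sketch of the corrected behaviour (the function names and pixel budget below are illustrative, not the actual detector code): the scale is derived per image from the pixel-count ratio via a square root, so applying it to width and height separately hits the intended total-pixel budget, and it is kept as a local value rather than shared detector state:

```python
from math import sqrt


def compute_scale(width, height, target_pixels):
    """Per-image downscale factor (illustrative helper, not the real plugin code).

    target_pixels is the detection pixel budget (detect_width * detect_height).
    Taking the square root means (width * scale) * (height * scale) ~= target_pixels.
    """
    return sqrt(target_pixels / (width * height))


def scaled_dims(width, height, target_pixels):
    """Dimensions of the intermediate image that detection actually runs on."""
    scale = compute_scale(width, height, target_pixels)  # local value, never stored on the detector
    return int(width * scale), int(height * scale)


# Worked example: a 4000x3000 frame with a 2048x2048 (~4.19M pixel) budget.
# Old formula: scale = 4_194_304 / 12_000_000 ~= 0.35 -> 1398x1048 (~1.5M pixels, far too small)
# New formula: scale = sqrt(0.35) ~= 0.59            -> 2364x1773 (~4.2M pixels, as intended)
print(scaled_dims(4000, 3000, 2048 * 2048))  # (2364, 1773)
```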