* requirements.txt: Pin opencv to 4.1.1 (fixes cv2-dnn error)
* lib.face_detect.DetectedFace: change landmarksXY to landmarks_xy. Add left, right, top, bottom attributes (see the sketch after this list)
* lib.model.session: Session manager for loading models into different graphs (for Nvidia + CPU); a sketch follows the list
* plugins.extract._base: New parent class for all extract plugins
* plugins.extract.pipeline: Remove multiprocessing. Dynamically limit batch size for Nvidia cards (sketch after this list). Remove loglevel input
* S3FD + FAN plugins: Standardise to Keras version for all backends
* Standardize all extract plugins to new threaded codebase
* Documentation: Start implementing NumPy style docstrings for Sphinx (example after this list)
* Remove s3fd_amd. Change convert OTF to expect DetectedFace object
* faces_detect - clean up and documentation
* Remove PoolProcess
* Migrate manual tool to new extract workflow
* Remove AMD specific extractor code from cli and plugins
* Sort tool to new extract workflow
* Remove multiprocessing from project
* Remove multiprocessing queues from QueueManager
* Remove multiprocessing support from logger
* Move face_filter to new extraction pipeline
* Alignments: rename landmarksXY to landmarks_xy, with legacy handling (sketch after this list)
* Intercept get_backend for sphinx doc build
* Add Sphinx documentation
* Implement extraction pipeline
* Face filter to vgg_face. Resume partial model downloads
* On-the-fly conversion to extraction pipeline
* Move git model ids from get_model to model definition
* Remove Dlib aligner
* Add CV2-DNN aligner as a replacement for Dlib
* Remove aligner models from repo
* Add download function to pull models from a remote location (see the sketch after this list)
* Minor FAN fixups
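
As context for the DetectedFace change above, a minimal sketch of what the renamed attribute and the added left/right/top/bottom accessors might look like. This is illustrative only, not the actual class from lib.face_detect; any field not named in the changelog is an assumption.

```python
# Illustrative only: not the real DetectedFace from the codebase.
class DetectedFace:
    """ Detected face bounding box with landmarks. """

    def __init__(self, x, y, w, h, landmarks_xy=None):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.landmarks_xy = landmarks_xy  # formerly landmarksXY

    @property
    def left(self):
        return self.x

    @property
    def top(self):
        return self.y

    @property
    def right(self):
        return self.x + self.w

    @property
    def bottom(self):
        return self.y + self.h
```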
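The session manager entry is easier to follow with a rough sketch of the idea: each model gets its own graph and session so GPU and CPU models stay isolated. This assumes a TensorFlow 1.x style backend; the class name and interface are hypothetical, not the actual lib.model.session API.

```python
import tensorflow as tf  # assumes a TF 1.x backend
from tensorflow import keras


class ModelSession:
    """ Hold one Keras model inside its own graph and session. """

    def __init__(self, model_path, allow_growth=True):
        self._graph = tf.Graph()
        config = tf.ConfigProto()
        config.gpu_options.allow_growth = allow_growth
        with self._graph.as_default():
            self._session = tf.Session(graph=self._graph, config=config)
            with self._session.as_default():
                self._model = keras.models.load_model(model_path)

    def predict(self, feed):
        # Re-enter the owning graph/session for every call so several
        # plugins can run side by side without clobbering each other.
        with self._graph.as_default(), self._session.as_default():
            return self._model.predict(feed)
```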
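For the dynamic batch-size limiting on Nvidia cards, a hedged sketch of one way this can be done with pynvml; the per-sample VRAM figure and device index are assumptions for illustration, not values used by the pipeline.

```python
import pynvml


def limit_batchsize(requested, vram_per_sample_mb=256, device_index=0):
    """ Cap the requested batch size to what the Nvidia card can hold. """
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
        free_mb = pynvml.nvmlDeviceGetMemoryInfo(handle).free / (1024 ** 2)
    finally:
        pynvml.nvmlShutdown()
    # Never go below a batch size of 1, never above what was requested
    return max(1, min(requested, int(free_mb // vram_per_sample_mb)))
```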
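An example of the NumPy docstring style being adopted, shown on an illustrative helper (not a function from the codebase), as Sphinx with napoleon would parse it:

```python
import cv2


def scale_image(image, scale):
    """ Resize an image by a single scaling factor.

    Parameters
    ----------
    image : numpy.ndarray
        The image to resize.
    scale : float
        Factor applied to both the width and the height.

    Returns
    -------
    numpy.ndarray
        The resized image.
    """
    height, width = image.shape[:2]
    return cv2.resize(image, (int(width * scale), int(height * scale)))
```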
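The legacy handling for the landmarksXY rename could look roughly like the sketch below; the alignments layout (frame name mapped to a list of face dicts) is an assumption for illustration.

```python
def update_legacy_landmarks(alignments):
    """ Rename the legacy landmarksXY key to landmarks_xy in place. """
    for faces in alignments.values():
        for face in faces:
            if "landmarksXY" in face:
                face["landmarks_xy"] = face.pop("landmarksXY")
    return alignments
```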
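For the model download function (and the partial-download resume noted in the vgg_face entry), a minimal sketch using an HTTP Range request; the function name, chunk size and error handling are illustrative, not the project's actual downloader.

```python
import os
import urllib.request


def download_model(url, filename):
    """ Download url to filename, resuming a partial file when possible. """
    existing = os.path.getsize(filename) if os.path.exists(filename) else 0
    request = urllib.request.Request(url)
    if existing:
        request.add_header("Range", "bytes={}-".format(existing))
    with urllib.request.urlopen(request) as response:
        # 206 means the server honoured the Range header; otherwise restart
        resume = existing and response.status == 206
        with open(filename, "ab" if resume else "wb") as out_file:
            while True:
                chunk = response.read(64 * 1024)
                if not chunk:
                    break
                out_file.write(chunk)
```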
- scale in the detector was used as a member variable shared between multiple threads, but scale depends entirely on the image currently being processed. If a thread switch occurred between the scale calculation, the actual detection and the post-processing, random behaviour and exceptions could result, especially with input images of varying sizes.
- wrong scale factor calculation:
  The scale factor was calculated as scale = target / source, where target and source are the total number of pixels in the image (width * height). However, it was then applied to the individual dimensions independently, e.g.:
  dims = (int(width * scale), int(height * scale))
  or: int(face.left() / scale)
  The scaling factor was therefore often too small, and detection for very large input images was performed on very small intermediate images. The calculation has been changed to:
  scale = sqrt(target / source)
  (see the sketch below)
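
A short sketch of the corrected calculation; the target pixel budget shown is an assumed example value, not the detector's actual setting.

```python
from math import sqrt

import cv2


def scale_for_detection(image, target_pixels=2048 * 2048):
    """ Resize an image so its total pixel count is roughly target_pixels. """
    height, width = image.shape[:2]
    source_pixels = width * height
    # Old (wrong): scale = target_pixels / source_pixels applied per axis,
    # which shrinks large images far too aggressively.
    # New (correct): take the square root so that
    # (width * scale) * (height * scale) == target_pixels.
    scale = sqrt(target_pixels / source_pixels)
    dims = (int(width * scale), int(height * scale))
    return cv2.resize(image, dims), scale
```

The returned scale can then be divided back out of the detected box coordinates, as in int(face.left() / scale) above.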