faceswap/lib/multithreading.py
torzdf cd00859c40
model_refactor (#571) (#572)
* model_refactor (#571)

* original model to new structure

* IAE model to new structure

* OriginalHiRes to new structure

* Fix trainer for different resolutions

* Initial config implementation

* Configparse library added

* improved training data loader

* dfaker model working

* Add logging to training functions

* Non-blocking input for cli training

* Add error handling to threads. Add non-mp queues to queue_handler

* Improved Model Building and NNMeta

* refactor lib/models

* training refactor. DFL H128 model Implementation

* Dfaker - use hashes

* Move timelapse. Remove perceptual loss arg

* Update INSTALL.md. Add logger formatting. Update Dfaker training

* DFL h128 partially ported

* Add mask to dfaker (#573)

* Remove old models. Add mask to dfaker

* dfl mask. Make masks selectable in config (#575)

* DFL H128 Mask. Mask type selectable in config.

* remove gan_v2_2

* Creating Input Size config for models

Will be used downstream in converters.

Also renames image_shape to input_shape for clarity (for future models with potentially different output_shapes)

* Add mask loss options to config

* MTCNN options to config.ini. Remove GAN config. Update USAGE.md

* Add sliders for numerical values in GUI

* Add config plugins menu to gui. Validate config

* Only backup model if loss has dropped. Get training working again

* bugfixes

* Standardise loss printing

* GUI idle cpu fixes. Graph loss fix.

* multi-gpu logging bugfix

* Merge branch 'staging' into train_refactor

* backup state file

* Crash protection: Only backup if both total losses have dropped

* Port OriginalHiRes_RC4 to train_refactor (OriginalHiRes)

* Load and save model structure with weights

* Slight code update

* Improve config loader. Add subpixel opt to all models. Config to state

* Show samples... wrong input

* Remove AE topology. Add input/output shapes to State

* Port original_villain (birb/VillainGuy) model to faceswap

* Add plugin info to GUI config pages

* Load input shape from state. IAE Config options.

* Fix transform_kwargs.
Coverage to ratio.
Bugfix mask detection

* Suppress keras userwarnings.
Automate zoom.
Coverage_ratio to model def.

* Consolidation of converters & refactor (#574)

* Consolidation of converters & refactor

Initial Upload of alpha

Items
- consolidate convert_masked & convert_adjust into one converter
- add average color adjust to convert_masked
- allow mask transition blur size to be either a fixed integer of pixels or a fraction of the facial mask size (see the sketch after this list)
- allow erosion/dilation size to be either a fixed integer of pixels or a fraction of the facial mask size
- eliminate redundant type conversions to avoid multiple round-off errors
- refactor loops for vectorization/speed
- reorganize for clarity & style changes
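
A possible reading of the pixels-vs-fraction rule (an editor's sketch with a hypothetical helper; the exact faceswap logic may differ):

    def resolve_size(value, mask_size):
        """ Treat values below 1.0 as a fraction of the mask size and
            values of 1.0 or more as an absolute pixel count """
        return int(round(value * mask_size)) if value < 1.0 else int(value)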

TODO
- bug/issues with warping the new face onto a transparent old image... use a cleanup mask for now
- issues with mask border giving black ring at zero erosion... investigate
- remove GAN?
- test enlargement factors of umeyama standard face... match to coverage factor
- make enlargement factor a model parameter
- remove convert_adjust and referencing code when finished

* Update Convert_Masked.py

default blur size of 2 to match original...
description of enlargement tests
break out matrix scaling into a def

* Enlargement scale as a cli parameter

* Update cli.py

* dynamic interpolation algorithm

Compute x & y scale factors from the affine matrix on the fly by QR decomposition.
Choose the interpolation algorithm for the affine warp based on whether each image is upsampled or downsampled.
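
A minimal sketch of the idea (an editor's illustration assuming numpy and OpenCV; the function name is hypothetical, not the exact faceswap code):

    import cv2
    import numpy as np

    def choose_interpolator(matrix):
        """ Derive x/y scale factors from the 2x2 linear part of a 2x3
            affine matrix via QR decomposition and pick a warp interpolator """
        _, r_mat = np.linalg.qr(matrix[:, :2])
        x_scale, y_scale = abs(r_mat[0, 0]), abs(r_mat[1, 1])
        # Cubic when upsampling, area interpolation when downsampling
        return cv2.INTER_CUBIC if min(x_scale, y_scale) > 1.0 else cv2.INTER_AREA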

* input size
input size from config

* fix issues with <1.0 erosion

* Update convert.py

* Update Convert_Adjust.py

more work on the way to merging

* Clean up help note on sharpen

* cleanup seamless

* Delete Convert_Adjust.py

* Update umeyama.py

* Update training_data.py

* swapping

* segmentation stub

* changes to convert.str

* Update masked.py

* Backwards compatibility fix for models
Get converter running

* Convert:
Move masks to class.
bugfix blur_size
some linting

* mask fix

* convert fixes

- missing facehull_rect re-added
- coverage to %
- corrected coverage logic
- cleanup of gui option ordering

* Update cli.py

* default for blur

* Update masked.py

* added preliminary low_mem version of OriginalHighRes model plugin

* Code cleanup, minor fixes

* Update masked.py

* Update masked.py

* Add dfl mask to convert

* histogram fix & seamless location

* update

* revert

* bugfix: Load actual configuration in gui

* Standardize nn_blocks

* Update cli.py

* Minor code amends

* Fix Original HiRes model

* Add masks to preview output for mask trainers
refactor trainer.__base.py

* Masked trainers converter support

* convert bugfix

* Bugfix: Converter for masked (dfl/dfaker) trainers

* Additional Losses (#592)

* initial upload

* Delete blur.py

* default initializer = He instead of Glorot (#588)

* Allow kernel_initializer to be overridable

* Add ICNR Initializer option for upscale on all models.

* Hopefully fixes RSoDs with original-highres model plugin

* remove debug line

* Original-HighRes model plugin Red Screen of Death fix, take #2

* Move global options to _base. Rename Villain model

* clipnorm and res block biases

* scale the end of res block

* res block

* dfaker pre-activation res

* OHRES pre-activation

* villain pre-activation

* tabs/space in nn_blocks

* fix for histogram with mask all set to zero

* fix to prevent two networks with same name

* GUI: Wider tooltips. Improve TQDM capture

* Fix regex bug

* Convert padding=48 to ratio of image size

* Add size option to alignments tool extract

* Pass through training image size to convert from model

* Convert: Pull training coverage from model

* convert: coverage, blur and erode to percent

* simplify matrix scaling

* ordering of sliders in train

* Add matrix scaling to utils. Use interpolation in lib.aligner transform

* masked.py Import get_matrix_scaling from utils

* fix circular import

* Update masked.py

* quick fix for matrix scaling

* testing this for now

* tqdm regex capture bugfix

* Minor amends

* blur size cleanup

* Remove coverage option from convert (Now cascades from model)

* Implement convert for all model types

* Add mask option and coverage option to all existing models

* bugfix for model loading on convert

* debug print removal

* Bugfix for masks in dfl_h128 and iae

* Update preview display. Add preview scaling to cli

* mask notes

* Delete training_data_v2.py

errant file

* training data variables

* Fix timelapse function

* Add new config items to state file for legacy purposes

* Slight GUI tweak

* Raise exception if problem with loaded model

* Add Tensorboard support (Logs stored in model directory)

* ICNR fix

* loss bugfix

* convert bugfix

* Move ini files to config folder. Make TensorBoard optional

* Fix training data for unbalanced inputs/outputs

* Fix config "none" test

* Keep helptext in .ini files when saving config from GUI

* Remove frame_dims from alignments

* Add no-flip and warp-to-landmarks cli options

* Revert OHR to RC4_fix version

* Fix lowmem mode on OHR model

* padding to variable

* Save models in parallel threads

* Speed-up of res_block stability

* Automated Reflection Padding

* Reflect Padding as a training option

Includes auto-calculation of proper padding shapes, input_shapes, output_shapes

Flag included in config now

* rest of reflect padding

* Move TB logging to cli. Session info to state file

* Add session iterations to state file

* Add recent files to menu. GUI code tidy up

* [GUI] Fix recent file list update issue

* Add correct loss names to TensorBoard logs

* Update live graph to use TensorBoard and remove animation

* Fix analysis tab. GUI optimizations

* Analysis Graph popup to Tensorboard Logs

* [GUI] Bug fix for graphing for models with hyphens in name

* [GUI] Correctly split loss to tabs during training

* [GUI] Add loss type selection to analysis graph

* Fix store command name in recent files. Switch to correct tab on open

* [GUI] Disable training graph when 'no-logs' is selected

* Fix graphing race condition

* rename original_hires model to unbalanced
2019-02-09 18:35:12 +00:00


#!/usr/bin/env python3
""" Multithreading/processing utils for faceswap """

import logging
import multiprocessing as mp
import queue as Queue
import sys
import threading

from lib.logger import LOG_QUEUE, set_root_logger

logger = logging.getLogger(__name__)  # pylint: disable=invalid-name

# Processes launched via SpawnProcess, tracked so terminate_processes()
# can join any that are still alive at shutdown
_launched_processes = set()  # pylint: disable=invalid-name


class PoolProcess():
    """ Pool multiple processes """
    def __init__(self, method, in_queue, out_queue, *args, processes=None, **kwargs):
        self._name = method.__qualname__
        logger.debug("Initializing %s: (target: '%s', processes: %s)",
                     self.__class__.__name__, self._name, processes)
        self.procs = self.set_procs(processes)
        ctx = mp.get_context("spawn")
        self.pool = ctx.Pool(processes=self.procs)
        self._method = method
        self._kwargs = self.build_target_kwargs(in_queue, out_queue, kwargs)
        self._args = args
        logger.debug("Initialized %s: '%s'", self.__class__.__name__, self._name)

    @staticmethod
    def build_target_kwargs(in_queue, out_queue, kwargs):
        """ Add standard kwargs to passed in kwargs list """
        kwargs["log_init"] = set_root_logger
        kwargs["log_queue"] = LOG_QUEUE
        kwargs["in_queue"] = in_queue
        kwargs["out_queue"] = out_queue
        return kwargs

    def set_procs(self, processes):
        """ Set the number of processes to use """
        if processes is None:
            running_processes = len(mp.active_children())
            processes = max(mp.cpu_count() - running_processes, 1)
        # verbose is a custom log level added by lib.logger
        logger.verbose("Processing '%s' in %s processes", self._name, processes)
        return processes

    def start(self):
        """ Run the processing pool """
        logger.debug("Pooling Processes: (target: '%s', args: %s, kwargs: %s)",
                     self._name, self._args, self._kwargs)
        for idx in range(self.procs):
            logger.debug("Adding process %s of %s to mp.Pool '%s'",
                         idx + 1, self.procs, self._name)
            self.pool.apply_async(self._method, args=self._args, kwds=self._kwargs)
        logger.debug("Pooled Processes: '%s'", self._name)

    def join(self):
        """ Join the process """
        logger.debug("Joining Pooled Process: '%s'", self._name)
        self.pool.close()
        self.pool.join()
        logger.debug("Joined Pooled Process: '%s'", self._name)


class SpawnProcess(mp.context.SpawnProcess):
    """ Process in spawnable context
        Must be spawnable to share CUDA across processes """
    def __init__(self, target, in_queue, out_queue, *args, **kwargs):
        name = target.__qualname__
        logger.debug("Initializing %s: (target: '%s', args: %s, kwargs: %s)",
                     self.__class__.__name__, name, args, kwargs)
        ctx = mp.get_context("spawn")
        self.event = ctx.Event()
        kwargs = self.build_target_kwargs(in_queue, out_queue, kwargs)
        super().__init__(target=target, name=name, args=args, kwargs=kwargs)
        self.daemon = True
        logger.debug("Initialized %s: '%s'", self.__class__.__name__, name)

    def build_target_kwargs(self, in_queue, out_queue, kwargs):
        """ Add standard kwargs to passed in kwargs list """
        kwargs["event"] = self.event
        kwargs["log_init"] = set_root_logger
        kwargs["log_queue"] = LOG_QUEUE
        kwargs["in_queue"] = in_queue
        kwargs["out_queue"] = out_queue
        return kwargs

    def start(self):
        """ Add logging to start function """
        logger.debug("Spawning Process: (name: '%s', args: %s, kwargs: %s, daemon: %s)",
                     self._name, self._args, self._kwargs, self.daemon)
        super().start()
        _launched_processes.add(self)
        logger.debug("Spawned Process: (name: '%s', PID: %s)", self._name, self.pid)

    def join(self, timeout=None):
        """ Add logging to join function """
        logger.debug("Joining Process: (name: '%s', PID: %s)", self._name, self.pid)
        super().join(timeout=timeout)
        _launched_processes.remove(self)
        logger.debug("Joined Process: (name: '%s', PID: %s)", self._name, self.pid)


class FSThread(threading.Thread):
    """ Subclass of thread that passes errors back to parent """
    def __init__(self, group=None, target=None, name=None,  # pylint: disable=too-many-arguments
                 args=(), kwargs=None, *, daemon=None):
        super().__init__(group=group, target=target, name=name,
                         args=args, kwargs=kwargs, daemon=daemon)
        self.err = None

    def run(self):
        try:
            if self._target:
                self._target(*self._args, **self._kwargs)
        except Exception:  # pylint: disable=broad-except
            self.err = sys.exc_info()
            logger.debug("Error in thread (%s): %s", self._name,
                         self.err[1].with_traceback(self.err[2]))
        finally:
            # Avoid a refcycle if the thread is running a function with
            # an argument that has a member that points to the thread.
            del self._target, self._args, self._kwargs


class MultiThread():
    """ Threading for IO heavy ops
        Catches errors in thread and rethrows to parent """
    def __init__(self, target, *args, thread_count=1, name=None, **kwargs):
        self._name = name if name else target.__name__
        logger.debug("Initializing %s: (target: '%s', thread_count: %s)",
                     self.__class__.__name__, self._name, thread_count)
        logger.trace("args: %s, kwargs: %s", args, kwargs)
        self.daemon = True
        self._thread_count = thread_count
        self._threads = list()
        self._target = target
        self._args = args
        self._kwargs = kwargs
        logger.debug("Initialized %s: '%s'", self.__class__.__name__, self._name)

    @property
    def has_error(self):
        """ Return true if a thread has errored, otherwise false """
        return any(thread.err for thread in self._threads)

    @property
    def errors(self):
        """ Return a list of thread errors """
        return [thread.err for thread in self._threads]

    def start(self):
        """ Start a thread with the given method and args """
        logger.debug("Starting thread(s): '%s'", self._name)
        for idx in range(self._thread_count):
            name = "{}_{}".format(self._name, idx)
            logger.debug("Starting thread %s of %s: '%s'",
                         idx + 1, self._thread_count, name)
            thread = FSThread(name=name,
                              target=self._target,
                              args=self._args,
                              kwargs=self._kwargs)
            thread.daemon = self.daemon
            thread.start()
            self._threads.append(thread)
        logger.debug("Started all threads '%s': %s", self._name, len(self._threads))

    def join(self):
        """ Join the running threads, catching and re-raising any errors """
        logger.debug("Joining Threads: '%s'", self._name)
        for thread in self._threads:
            logger.debug("Joining Thread: '%s'", thread._name)  # pylint: disable=protected-access
            thread.join()
            if thread.err:
                logger.error("Caught exception in thread: '%s'",
                             thread._name)  # pylint: disable=protected-access
                raise thread.err[1].with_traceback(thread.err[2])
        logger.debug("Joined all Threads: '%s'", self._name)


class BackgroundGenerator(threading.Thread):
    """ Run a queue in the background. From:
        https://stackoverflow.com/questions/7323664/ """
    # See below why prefetch count is flawed
    def __init__(self, generator, prefetch=1):
        threading.Thread.__init__(self)
        self.queue = Queue.Queue(maxsize=prefetch)
        self.generator = generator
        self.daemon = True
        self.start()

    def run(self):
        """ Put until queue size is reached.
            Note: put blocks only if put is called while queue has already
            reached max size => this makes 2 prefetched items! One in the
            queue, one waiting for insertion! """
        for item in self.generator:
            self.queue.put(item)
        self.queue.put(None)

    def iterator(self):
        """ Iterate items out of the queue """
        while True:
            next_item = self.queue.get()
            if next_item is None:
                break
            yield next_item
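

# A minimal usage sketch (editor's addition): wrap any generator so its items
# are produced on a background thread while the consumer does other work.
# Note the class starts itself on construction.
def _example_background_generator():
    """ Hypothetical demonstration of BackgroundGenerator """
    batches = BackgroundGenerator((n * n for n in range(5)), prefetch=1)
    for batch in batches.iterator():  # blocks only when the producer lags
        print(batch)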


def terminate_processes():
    """ Join all active processes on unexpected shutdown

        If the process is doing long running work, make sure you
        have a mechanism in place to terminate this work to avoid
        long blocks """
    logger.debug("Processes to join: %s", [process.name
                                           for process in _launched_processes
                                           if process.is_alive()])
    for process in list(_launched_processes):
        if process.is_alive():
            process.join()
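

# A minimal usage sketch (editor's addition): registering the cleanup with
# atexit ensures stray spawned processes are joined at interpreter shutdown.
def _example_register_shutdown_hook():
    """ Hypothetical shutdown hook registration """
    import atexit
    atexit.register(terminate_processes)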