mirror of
https://github.com/deepfakes/faceswap
synced 2025-06-07 10:43:27 -04:00
* model_refactor (#571)
* original model to new structure
* IAE model to new structure
* OriginalHiRes to new structure
* Fix trainer for different resolutions
* Initial config implementation
* Configparse library added
* improved training data loader
* dfaker model working
* Add logging to training functions
* Non blocking input for cli training
* Add error handling to threads. Add non-mp queues to queue_handler
* Improved Model Building and NNMeta
* refactor lib/models
* training refactor. DFL H128 model Implementation
* Dfaker - use hashes
* Move timelapse. Remove perceptual loss arg
* Update INSTALL.md. Add logger formatting. Update Dfaker training
* DFL h128 partially ported
* Add mask to dfaker (#573)
* Remove old models. Add mask to dfaker
* dfl mask. Make masks selectable in config (#575)
* DFL H128 Mask. Mask type selectable in config.
* remove gan_v2_2
* Creating Input Size config for models. Will be used downstream in converters. Also name change of image_shape to input_shape to clarify (for future models with potentially different output_shapes)
* Add mask loss options to config
* MTCNN options to config.ini. Remove GAN config. Update USAGE.md
* Add sliders for numerical values in GUI
* Add config plugins menu to gui. Validate config
* Only backup model if loss has dropped. Get training working again
* bugfixes
* Standardise loss printing
* GUI idle cpu fixes. Graph loss fix.
* multi-gpu logging bugfix
* Merge branch 'staging' into train_refactor
* backup state file
* Crash protection: Only backup if both total losses have dropped
* Port OriginalHiRes_RC4 to train_refactor (OriginalHiRes)
* Load and save model structure with weights
* Slight code update
* Improve config loader. Add subpixel opt to all models. Config to state
* Show samples... wrong input
* Remove AE topology. Add input/output shapes to State
* Port original_villain (birb/VillainGuy) model to faceswap
* Add plugin info to GUI config pages
* Load input shape from state. IAE Config options.
* Fix transform_kwargs. Coverage to ratio. Bugfix mask detection
* Suppress keras userwarnings. Automate zoom. Coverage_ratio to model def.
* Consolidation of converters & refactor (#574). Initial upload of alpha. Items: consolidate convert_masked & convert_adjust into one converter; add average color adjust to convert_masked; allow mask transition blur size to be a fixed integer of pixels or a fraction of the facial mask size; allow erosion/dilation size to be a fixed integer of pixels or a fraction of the facial mask size; eliminate redundant type conversions to avoid multiple round-off errors; refactor loops for vectorization/speed; reorganize for clarity & style changes. TODO: bug/issues with warping the new face onto a transparent old image (use a cleanup mask for now); issues with mask border giving black ring at zero erosion (investigate); remove GAN?; test enlargement factors of umeyama standard face, match to coverage factor; make enlargement factor a model parameter; remove convert_adjusted and referencing code when finished
* Update Convert_Masked.py: default blur size of 2 to match original; description of enlargement tests; break out matrix scaling into a def
* Enlargement scale as a cli parameter
* Update cli.py
* dynamic interpolation algorithm: compute x & y scale factors from the affine matrix on the fly by QR decomposition; choose the interpolation algorithm for the affine warp based on whether each image is upsampled or downsampled (see the sketch after this list)
* input size from config
* fix issues with <1.0 erosion
* Update convert.py
* Update Convert_Adjust.py: more work on the way to merging
* Clean up help note on sharpen
* cleanup seamless
* Delete Convert_Adjust.py
* Update umeyama.py
* Update training_data.py
* swapping
* segmentation stub
* changes to convert.str
* Update masked.py
* Backwards compatibility fix for models. Get converter running
* Convert: Move masks to class. bugfix blur_size; some linting
* mask fix
* convert fixes: missing facehull_rect re-added; coverage to %; corrected coverage logic; cleanup of gui option ordering
* Update cli.py
* default for blur
* Update masked.py
* added preliminary low_mem version of OriginalHighRes model plugin
* Code cleanup, minor fixes
* Update masked.py
* Update masked.py
* Add dfl mask to convert
* histogram fix & seamless location
* update
* revert
* bugfix: Load actual configuration in gui
* Standardize nn_blocks
* Update cli.py
* Minor code amends
* Fix Original HiRes model
* Add masks to preview output for mask trainers. refactor trainer.__base.py
* Masked trainers converter support
* convert bugfix
* Bugfix: Converter for masked (dfl/dfaker) trainers
* Additional Losses (#592): initial upload
* Delete blur.py
* default initializer = He instead of Glorot (#588)
* Allow kernel_initializer to be overridable
* Add ICNR Initializer option for upscale on all models.
* Hopefully fixes RSoDs with original-highres model plugin
* remove debug line
* Original-HighRes model plugin Red Screen of Death fix, take #2
* Move global options to _base. Rename Villain model
* clipnorm and res block biases
* scale the end of res block
* res block
* dfaker pre-activation res
* OHRES pre-activation
* villain pre-activation
* tabs/space in nn_blocks
* fix for histogram with mask all set to zero
* fix to prevent two networks with same name
* GUI: Wider tooltips. Improve TQDM capture
* Fix regex bug
* Convert padding=48 to ratio of image size
* Add size option to alignments tool extract
* Pass through training image size to convert from model
* Convert: Pull training coverage from model
* convert: coverage, blur and erode to percent
* simplify matrix scaling
* ordering of sliders in train
* Add matrix scaling to utils. Use interpolation in lib.aligner transform
* masked.py: Import get_matrix_scaling from utils
* fix circular import
* Update masked.py
* quick fix for matrix scaling
* testing thus for now
* tqdm regex capture bugfix
* Minor amends
* blur size cleanup
* Remove coverage option from convert (now cascades from model)
* Implement convert for all model types
* Add mask option and coverage option to all existing models
* bugfix for model loading on convert
* debug print removal
* Bugfix for masks in dfl_h128 and iae
* Update preview display. Add preview scaling to cli
* mask notes
* Delete training_data_v2.py (errant file)
* training data variables
* Fix timelapse function
* Add new config items to state file for legacy purposes
* Slight GUI tweak
* Raise exception if problem with loaded model
* Add Tensorboard support (logs stored in model directory)
* ICNR fix
* loss bugfix
* convert bugfix
* Move ini files to config folder. Make TensorBoard optional
* Fix training data for unbalanced inputs/outputs
* Fix config "none" test
* Keep helptext in .ini files when saving config from GUI
* Remove frame_dims from alignments
* Add no-flip and warp-to-landmarks cli options
* Revert OHR to RC4_fix version
* Fix lowmem mode on OHR model
* padding to variable
* Save models in parallel threads
* Speed-up of res_block stability
* Automated Reflection Padding
* Reflect Padding as a training option. Includes auto-calculation of proper padding shapes, input_shapes, output_shapes. Flag included in config now
* rest of reflect padding
* Move TB logging to cli. Session info to state file
* Add session iterations to state file
* Add recent files to menu. GUI code tidy up
* [GUI] Fix recent file list update issue
* Add correct loss names to TensorBoard logs
* Update live graph to use TensorBoard and remove animation
* Fix analysis tab. GUI optimizations
* Analysis Graph popup to Tensorboard Logs
* [GUI] Bug fix for graphing for models with hyphens in name
* [GUI] Correctly split loss to tabs during training
* [GUI] Add loss type selection to analysis graph
* Fix store command name in recent files. Switch to correct tab on open
* [GUI] Disable training graph when 'no-logs' is selected
* Fix graphing race condition
* rename original_hires model to unbalanced
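A minimal sketch of the dynamic interpolation choice mentioned above, assuming OpenCV, NumPy and a 2x3 affine matrix; the helper name and threshold are illustrative and this is not the faceswap implementation:

import cv2
import numpy as np

def choose_interpolation(matrix):
    """ Pick warp interpolators from the scale encoded in a 2x3 affine matrix. """
    # QR-style decomposition of the 2x2 linear part: the x scale is the norm of
    # the first row, the y scale follows from the determinant divided by that norm.
    x_scale = np.sqrt(matrix[0, 0] ** 2 + matrix[0, 1] ** 2)
    y_scale = (matrix[0, 0] * matrix[1, 1] - matrix[0, 1] * matrix[1, 0]) / x_scale
    avg_scale = (x_scale + abs(y_scale)) / 2
    if avg_scale >= 1.0:
        # Upsampling: smoother kernel for the forward warp, area averaging for the inverse
        return cv2.INTER_CUBIC, cv2.INTER_AREA
    # Downsampling: area averaging for the forward warp, cubic for the inverse
    return cv2.INTER_AREA, cv2.INTER_CUBIC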
369 lines
14 KiB
Python
#!/usr/bin/env python3
""" Process wrapper for underlying faceswap commands for the GUI """
import os
import logging
import re
import signal
from subprocess import PIPE, Popen, TimeoutExpired
import sys
from threading import Thread
from time import time

import psutil

from .utils import get_config, get_images

logger = logging.getLogger(__name__)  # pylint: disable=invalid-name


class ProcessWrapper():
    """ Builds command, launches and terminates the underlying
    faceswap process. Updates GUI display depending on state """

    def __init__(self, pathscript=None):
        logger.debug("Initializing %s: (pathscript: %s)", self.__class__.__name__, pathscript)
        self.tk_vars = get_config().tk_vars
        self.set_callbacks()
        self.pathscript = pathscript
        self.command = None
        self.statusbar = get_config().statusbar
        self.task = FaceswapControl(self)
        logger.debug("Initialized %s", self.__class__.__name__)

    def set_callbacks(self):
        """ Set the tk variable callbacks """
        logger.debug("Setting tk variable traces")
        self.tk_vars["action"].trace("w", self.action_command)
        self.tk_vars["generate"].trace("w", self.generate_command)

    def action_command(self, *args):
        """ The action to perform when the action button is pressed """
        if not self.tk_vars["action"].get():
            return
        category, command = self.tk_vars["action"].get().split(",")

        if self.tk_vars["runningtask"].get():
            self.task.terminate()
        else:
            self.command = command
            args = self.prepare(category)
            self.task.execute_script(command, args)
        self.tk_vars["action"].set(None)

    def generate_command(self, *args):
        """ Generate the command line arguments and output """
        if not self.tk_vars["generate"].get():
            return
        category, command = self.tk_vars["generate"].get().split(",")
        args = self.build_args(category, command=command, generate=True)
        self.tk_vars["consoleclear"].set(True)
        logger.debug(" ".join(args))
        print(" ".join(args))
        self.tk_vars["generate"].set(None)

    def prepare(self, category):
        """ Prepare the environment for execution """
        logger.debug("Preparing for execution")
        self.tk_vars["runningtask"].set(True)
        self.tk_vars["consoleclear"].set(True)
        print("Loading...")

        self.statusbar.status_message.set("Executing - {}.py".format(self.command))
        mode = "indeterminate" if self.command in ("effmpeg", "train") else "determinate"
        self.statusbar.progress_start(mode)

        args = self.build_args(category)
        self.tk_vars["display"].set(self.command)
        logger.debug("Prepared for execution")
        return args

    def build_args(self, category, command=None, generate=False):
        """ Build the faceswap command and arguments list """
        logger.debug("Build cli arguments: (category: %s, command: %s, generate: %s)",
                     category, command, generate)
        command = self.command if not command else command
        script = "{}.{}".format(category, "py")
        pathexecscript = os.path.join(self.pathscript, script)

        args = [sys.executable] if generate else [sys.executable, "-u"]
        args.extend([pathexecscript, command])

        cli_opts = get_config().cli_opts
        for cliopt in cli_opts.gen_cli_arguments(command):
            args.extend(cliopt)
            if command == "train" and not generate:
                self.init_training_session(cliopt)
        if not generate:
            args.append("-gui")  # Indicate to Faceswap that we are running the GUI
        logger.debug("Built cli arguments: (%s)", args)
        return args
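        # Illustrative example only (paths and flags assumed, not taken from the
        # source): for a training run the list built above might look like
        #   ["/usr/bin/python3", "-u", "/path/to/faceswap/faceswap.py", "train",
        #    "-A", "/faces/a", "-B", "/faces/b", "-m", "/models/original", "-gui"]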

    @staticmethod
    def init_training_session(cliopt):
        """ Set the session stats (model name and model folder) from the training cli options """
        session = get_config().session
        if cliopt[0] == "-t":
            session.modelname = cliopt[1].lower().replace("-", "_")
            logger.debug("modelname: '%s'", session.modelname)
        if cliopt[0] == "-m":
            session.modeldir = cliopt[1]
            logger.debug("modeldir: '%s'", session.modeldir)

    def terminate(self, message):
        """ Finalize the wrapper when the process has exited """
        logger.debug("Terminating Faceswap processes")
        self.tk_vars["runningtask"].set(False)
        self.statusbar.progress_stop()
        self.statusbar.status_message.set(message)
        self.tk_vars["display"].set(None)
        get_images().delete_preview()
        # Re-run __init__ to reset the session stats ready for the next task
        get_config().session.__init__()
        self.command = None
        logger.debug("Terminated Faceswap processes")
        print("Process exited.")


class FaceswapControl():
    """ Control the underlying Faceswap tasks """
    def __init__(self, wrapper):
        logger.debug("Initializing %s", self.__class__.__name__)
        self.wrapper = wrapper
        self.statusbar = get_config().statusbar
        self.command = None
        self.args = None
        self.process = None
        self.train_stats = {"iterations": 0, "timestamp": None}
        self.consoleregex = {
            "loss": re.compile(r"([a-zA-Z_]+):.*?(\d+\.\d+)"),
            "tqdm": re.compile(r".*?(?P<pct>\d+%).*?(?P<itm>\d+/\d+)\W\["
                               r"(?P<tme>\d+:\d+<.*),\W(?P<rte>.*)[a-zA-Z/]*\]")}
        logger.debug("Initialized %s", self.__class__.__name__)

    def execute_script(self, command, args):
        """ Execute the requested Faceswap Script """
        logger.debug("Executing Faceswap: (command: '%s', args: %s)", command, args)
        self.command = command
        kwargs = {"stdout": PIPE,
                  "stderr": PIPE,
                  "bufsize": 1,
                  "universal_newlines": True}

        self.process = Popen(args, **kwargs, stdin=PIPE)
        self.thread_stdout()
        self.thread_stderr()
        logger.debug("Executed Faceswap")

    def read_stdout(self):
        """ Read stdout from the subprocess. If training, capture the loss
        values for the GUI """
        logger.debug("Opening stdout reader")
        while True:
            try:
                output = self.process.stdout.readline()
            except ValueError as err:
                if str(err).lower().startswith("i/o operation on closed file"):
                    break
                raise
            if output == "" and self.process.poll() is not None:
                break
            if output:
                if (self.command == "train" and self.capture_loss(output)) or (
                        self.command != "train" and self.capture_tqdm(output)):
                    continue
                if self.command == "train" and output.strip().endswith("saved models"):
                    logger.debug("Trigger update preview")
                    self.wrapper.tk_vars["updatepreview"].set(True)
                print(output.strip())
        returncode = self.process.poll()
        message = self.set_final_status(returncode)
        self.wrapper.terminate(message)
        logger.debug("Terminated stdout reader. returncode: %s", returncode)

    def read_stderr(self):
        """ Read stderr from the subprocess. If not training, capture tqdm
        output for the progress bar """
        logger.debug("Opening stderr reader")
        while True:
            try:
                output = self.process.stderr.readline()
            except ValueError as err:
                if str(err).lower().startswith("i/o operation on closed file"):
                    break
                raise
            if output == "" and self.process.poll() is not None:
                break
            if output:
                if self.command != "train" and self.capture_tqdm(output):
                    continue
                print(output.strip(), file=sys.stderr)
        logger.debug("Terminated stderr reader")

    def thread_stdout(self):
        """ Read the subprocess stdout in a daemon thread so that it can be
        consumed without blocking """
        logger.debug("Threading stdout")
        thread = Thread(target=self.read_stdout)
        thread.daemon = True
        thread.start()
        logger.debug("Threaded stdout")

    def thread_stderr(self):
        """ Read the subprocess stderr in a daemon thread so that it can be
        consumed without blocking """
        logger.debug("Threading stderr")
        thread = Thread(target=self.read_stderr)
        thread.daemon = True
        thread.start()
        logger.debug("Threaded stderr")

    def capture_loss(self, string):
        """ Capture loss values from stdout """
        logger.trace("Capturing loss")
        if not str.startswith(string, "["):
            logger.trace("Not loss message. Returning False")
            return False

        loss = self.consoleregex["loss"].findall(string)
        if len(loss) < 2:
            logger.trace("Not loss message. Returning False")
            return False

        message = ""
        for item in loss:
            message += "{}: {} ".format(item[0], item[1])
        if not message:
            logger.trace("Error creating loss message. Returning False")
            return False

        iterations = self.train_stats["iterations"]

        if iterations == 0:
            # Initialize session stats and set initial timestamp
            self.train_stats["timestamp"] = time()

        if not get_config().session.initialized and iterations > 0:
            # Don't initialize session until after the first iteration as state
            # file must exist first
            get_config().session.initialize_session(is_training=True)
            self.wrapper.tk_vars["refreshgraph"].set(True)

        iterations += 1
        if iterations % 100 == 0:
            self.wrapper.tk_vars["refreshgraph"].set(True)
        self.train_stats["iterations"] = iterations

        elapsed = self.calc_elapsed()
        message = "Elapsed: {} Iteration: {} {}".format(elapsed,
                                                        self.train_stats["iterations"],
                                                        message)
        self.statusbar.progress_update(message, 0, False)
        logger.trace("Successfully captured loss: %s", message)
        return True

    def calc_elapsed(self):
        """ Calculate and format time since training started """
        now = time()
        elapsed_time = now - self.train_stats["timestamp"]
        try:
            hrs = int(elapsed_time // 3600)
            if hrs < 10:
                hrs = "{0:02d}".format(hrs)
            mins = "{0:02d}".format((int(elapsed_time % 3600) // 60))
            secs = "{0:02d}".format((int(elapsed_time % 3600) % 60))
        except ZeroDivisionError:
            hrs = "00"
            mins = "00"
            secs = "00"
        return "{}:{}:{}".format(hrs, mins, secs)

    def capture_tqdm(self, string):
        """ Capture tqdm output for progress bar """
        logger.trace("Capturing tqdm")
        tqdm = self.consoleregex["tqdm"].match(string)
        if not tqdm:
            return False
        tqdm = tqdm.groupdict()
        if any("?" in val for val in tqdm.values()):
            logger.trace("tqdm initializing. Skipping")
            return True
        processtime = "Elapsed: {} Remaining: {}".format(tqdm["tme"].split("<")[0],
                                                         tqdm["tme"].split("<")[1])
        message = "{} | {} | {} | {}".format(processtime,
                                             tqdm["rte"],
                                             tqdm["itm"],
                                             tqdm["pct"])

        current, total = tqdm["itm"].split("/")
        position = int((float(current) / float(total)) * 1000)

        self.statusbar.progress_update(message, position, True)
        logger.trace("Successfully captured tqdm message: %s", message)
        return True

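    # Worked example of the progress maths above (values illustrative): an item
    # count of "290/500" gives position int((290.0 / 500.0) * 1000) = 580, i.e.
    # the status bar progress scale runs from 0 to 1000.
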
    def terminate(self):
        """ Terminate the subprocess """
        logger.debug("Terminating wrapper")
        if self.command == "train":
            logger.debug("Sending Exit Signal")
            print("Sending Exit Signal", flush=True)
            try:
                now = time()
                if os.name == "nt":
                    try:
                        # On Windows request a clean exit by writing a newline to
                        # the child's stdin
                        logger.debug("Sending carriage return to process")
                        self.process.communicate(input="\n", timeout=60)
                    except TimeoutExpired:
                        raise ValueError("Timeout reached sending Exit Signal")
                else:
                    logger.debug("Sending SIGINT to process")
                    self.process.send_signal(signal.SIGINT)
                    while True:
                        timeelapsed = time() - now
                        if self.process.poll() is not None:
                            break
                        if timeelapsed > 60:
                            raise ValueError("Timeout reached sending Exit Signal")
                return
            except ValueError as err:
                logger.error("Error terminating process", exc_info=True)
                print(err)
        else:
            logger.debug("Terminating Process...")
            print("Terminating Process...")
            children = psutil.Process().children(recursive=True)
            for child in children:
                child.terminate()
            _, alive = psutil.wait_procs(children, timeout=10)
            if not alive:
                logger.debug("Terminated")
                print("Terminated")
                return

            logger.debug("Termination timed out. Killing Process...")
            print("Termination timed out. Killing Process...")
            for child in alive:
                child.kill()
            _, alive = psutil.wait_procs(alive, timeout=10)
            if not alive:
                logger.debug("Killed")
                print("Killed")
            else:
                for child in alive:
                    msg = "Process {} survived SIGKILL. Giving up".format(child)
                    logger.debug(msg)
                    print(msg)

    def set_final_status(self, returncode):
        """ Set the status bar output based on subprocess return code """
        logger.debug("Setting final status. returncode: %s", returncode)
        if returncode in (0, 3221225786):
            # 3221225786 == 0xC000013A (STATUS_CONTROL_C_EXIT): a clean Ctrl+C exit on Windows
            status = "Ready"
        elif returncode == -15:
            # -15 == SIGTERM
            status = "Terminated - {}.py".format(self.command)
        elif returncode == -9:
            # -9 == SIGKILL
            status = "Killed - {}.py".format(self.command)
        elif returncode == -6:
            # -6 == SIGABRT
            status = "Aborted - {}.py".format(self.command)
        else:
            status = "Failed - {}.py. Return Code: {}".format(self.command, returncode)
        logger.debug("Set final status: %s", status)
        return status
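

# Illustrative usage (assumed wiring, not part of the original module): the GUI
# creates a single ProcessWrapper and drives it through the shared tk variables:
#
#     wrapper = ProcessWrapper(pathscript="/path/to/faceswap")
#     get_config().tk_vars["action"].set("faceswap,train")
#
# Setting "action" fires action_command(), which builds the cli arguments via
# build_args() and hands them to FaceswapControl.execute_script().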