#!/usr/bin/env python3
""" Normalization methods for faceswap.py
    Code from:
        shaoanlu GAN: https://github.com/shaoanlu/faceswap-GAN"""

import sys
import inspect

from keras.engine import Layer, InputSpec
from keras import initializers, regularizers, constraints
from keras import backend as K
from keras.utils.generic_utils import get_custom_objects


def to_list(inp):
    """ Convert to list """
    if not isinstance(inp, (list, tuple)):
        return [inp]
    return list(inp)


class InstanceNormalization(Layer):
    """Instance normalization layer (Lei Ba et al, 2016, Ulyanov et al., 2016).

    Normalize the activations of the previous layer at each step,
    i.e. applies a transformation that maintains the mean activation
    close to 0 and the activation standard deviation close to 1.

    # Arguments
        axis: Integer, the axis that should be normalized
            (typically the features axis).
            For instance, after a `Conv2D` layer with
            `data_format="channels_first"`,
            set `axis=1` in `InstanceNormalization`.
            Setting `axis=None` will normalize all values in each
            instance of the batch.
            Axis 0 is the batch dimension. `axis` cannot be set to 0 to
            avoid errors.
        epsilon: Small float added to variance to avoid dividing by zero.
        center: If True, add offset of `beta` to normalized tensor.
            If False, `beta` is ignored.
        scale: If True, multiply by `gamma`.
            If False, `gamma` is not used.
            When the next layer is linear (also e.g. `nn.relu`),
            this can be disabled since the scaling
            will be done by the next layer.
        beta_initializer: Initializer for the beta weight.
        gamma_initializer: Initializer for the gamma weight.
        beta_regularizer: Optional regularizer for the beta weight.
        gamma_regularizer: Optional regularizer for the gamma weight.
        beta_constraint: Optional constraint for the beta weight.
        gamma_constraint: Optional constraint for the gamma weight.

    # Input shape
        Arbitrary. Use the keyword argument `input_shape`
        (tuple of integers, does not include the samples axis)
        when using this layer as the first layer in a model.

    # Output shape
        Same shape as input.

    # References
        - [Layer Normalization](https://arxiv.org/abs/1607.06450)
        - [Instance Normalization: The Missing Ingredient for Fast
          Stylization](https://arxiv.org/abs/1607.08022)
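
    # Example
        A minimal, illustrative sketch (assumes a plain `Sequential` stack;
        the layer sizes are arbitrary and not taken from any faceswap model):

            from keras.models import Sequential
            from keras.layers import Conv2D

            model = Sequential()
            model.add(Conv2D(64, 3, padding="same", input_shape=(64, 64, 3)))
            model.add(InstanceNormalization(axis=-1))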
    """
    def __init__(self,
                 axis=None,
                 epsilon=1e-3,
                 center=True,
                 scale=True,
                 beta_initializer='zeros',
                 gamma_initializer='ones',
                 beta_regularizer=None,
                 gamma_regularizer=None,
                 beta_constraint=None,
                 gamma_constraint=None,
                 **kwargs):
        self.beta = None
        self.gamma = None
        super(InstanceNormalization, self).__init__(**kwargs)
        self.supports_masking = True
        self.axis = axis
        self.epsilon = epsilon
        self.center = center
        self.scale = scale
        self.beta_initializer = initializers.get(beta_initializer)
        self.gamma_initializer = initializers.get(gamma_initializer)
        self.beta_regularizer = regularizers.get(beta_regularizer)
        self.gamma_regularizer = regularizers.get(gamma_regularizer)
        self.beta_constraint = constraints.get(beta_constraint)
        self.gamma_constraint = constraints.get(gamma_constraint)

    def build(self, input_shape):
        ndim = len(input_shape)
        if self.axis == 0:
            raise ValueError('Axis cannot be zero')

        if (self.axis is not None) and (ndim == 2):
            raise ValueError('Cannot specify axis for rank 1 tensor')

        self.input_spec = InputSpec(ndim=ndim)

        if self.axis is None:
            shape = (1,)
        else:
            shape = (input_shape[self.axis],)

        if self.scale:
            self.gamma = self.add_weight(shape=shape,
                                         name='gamma',
                                         initializer=self.gamma_initializer,
                                         regularizer=self.gamma_regularizer,
                                         constraint=self.gamma_constraint)
        else:
            self.gamma = None
        if self.center:
            self.beta = self.add_weight(shape=shape,
                                        name='beta',
                                        initializer=self.beta_initializer,
                                        regularizer=self.beta_regularizer,
                                        constraint=self.beta_constraint)
        else:
            self.beta = None
        self.built = True

    def call(self, inputs, training=None):
        input_shape = K.int_shape(inputs)
        reduction_axes = list(range(0, len(input_shape)))

        if self.axis is not None:
            del reduction_axes[self.axis]

        del reduction_axes[0]

        mean = K.mean(inputs, reduction_axes, keepdims=True)
        stddev = K.std(inputs, reduction_axes, keepdims=True) + self.epsilon
        normed = (inputs - mean) / stddev

        broadcast_shape = [1] * len(input_shape)
        if self.axis is not None:
            broadcast_shape[self.axis] = input_shape[self.axis]

        if self.scale:
            broadcast_gamma = K.reshape(self.gamma, broadcast_shape)
            normed = normed * broadcast_gamma
        if self.center:
            broadcast_beta = K.reshape(self.beta, broadcast_shape)
            normed = normed + broadcast_beta
        return normed

    def get_config(self):
        config = {
            'axis': self.axis,
            'epsilon': self.epsilon,
            'center': self.center,
            'scale': self.scale,
            'beta_initializer': initializers.serialize(self.beta_initializer),
            'gamma_initializer': initializers.serialize(self.gamma_initializer),
            'beta_regularizer': regularizers.serialize(self.beta_regularizer),
            'gamma_regularizer': regularizers.serialize(self.gamma_regularizer),
            'beta_constraint': constraints.serialize(self.beta_constraint),
            'gamma_constraint': constraints.serialize(self.gamma_constraint)
        }
        base_config = super(InstanceNormalization, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))


class GroupNormalization(Layer):
    """ Group Normalization

    Normalizes each sample by splitting its channels into groups and
    normalizing within each group, rather than across the batch.

    from: shaoanlu GAN: https://github.com/shaoanlu/faceswap-GAN"""

    def __init__(self, axis=-1,
                 gamma_init='one', beta_init='zero',
                 gamma_regularizer=None, beta_regularizer=None,
                 epsilon=1e-6,
                 group=32,
                 data_format=None,
                 **kwargs):
        self.beta = None
        self.gamma = None
        super(GroupNormalization, self).__init__(**kwargs)

        self.axis = to_list(axis)
        self.gamma_init = initializers.get(gamma_init)
        self.beta_init = initializers.get(beta_init)
        self.gamma_regularizer = regularizers.get(gamma_regularizer)
        self.beta_regularizer = regularizers.get(beta_regularizer)
        self.epsilon = epsilon
        self.group = group
        self.data_format = K.normalize_data_format(data_format)

        self.supports_masking = True

    def build(self, input_shape):
        self.input_spec = [InputSpec(shape=input_shape)]
        shape = [1 for _ in input_shape]
        if self.data_format == 'channels_last':
            channel_axis = -1
            shape[channel_axis] = input_shape[channel_axis]
        elif self.data_format == 'channels_first':
            channel_axis = 1
            shape[channel_axis] = input_shape[channel_axis]
        # for i in self.axis:
        #     shape[i] = input_shape[i]
        self.gamma = self.add_weight(shape=shape,
                                     initializer=self.gamma_init,
                                     regularizer=self.gamma_regularizer,
                                     name='gamma')
        self.beta = self.add_weight(shape=shape,
                                    initializer=self.beta_init,
                                    regularizer=self.beta_regularizer,
                                    name='beta')
        self.built = True

    def call(self, inputs, mask=None):
        input_shape = K.int_shape(inputs)
        if len(input_shape) != 4 and len(input_shape) != 2:
            raise ValueError('Inputs should have rank 4 or 2; '
                             'Received input shape: ' + str(input_shape))

        if len(input_shape) == 4:
            if self.data_format == 'channels_last':
                batch_size, height, width, channels = input_shape
                if batch_size is None:
                    batch_size = -1

                if channels < self.group:
                    raise ValueError('Input channels should be larger than group size' +
                                     '; Received input channels: ' + str(channels) +
                                     '; Group size: ' + str(self.group))

                var_x = K.reshape(inputs, (batch_size,
                                           height,
                                           width,
                                           self.group,
                                           channels // self.group))
                mean = K.mean(var_x, axis=[1, 2, 4], keepdims=True)
                std = K.sqrt(K.var(var_x, axis=[1, 2, 4], keepdims=True) + self.epsilon)
                var_x = (var_x - mean) / std

                var_x = K.reshape(var_x, (batch_size, height, width, channels))
                retval = self.gamma * var_x + self.beta
            elif self.data_format == 'channels_first':
                batch_size, channels, height, width = input_shape
                if batch_size is None:
                    batch_size = -1

                if channels < self.group:
                    raise ValueError('Input channels should be larger than group size' +
                                     '; Received input channels: ' + str(channels) +
                                     '; Group size: ' + str(self.group))

                var_x = K.reshape(inputs, (batch_size,
                                           self.group,
                                           channels // self.group,
                                           height,
                                           width))
                mean = K.mean(var_x, axis=[2, 3, 4], keepdims=True)
                std = K.sqrt(K.var(var_x, axis=[2, 3, 4], keepdims=True) + self.epsilon)
                var_x = (var_x - mean) / std

                var_x = K.reshape(var_x, (batch_size, channels, height, width))
                retval = self.gamma * var_x + self.beta

        elif len(input_shape) == 2:
            reduction_axes = list(range(0, len(input_shape)))
            del reduction_axes[0]
            batch_size, _ = input_shape
            if batch_size is None:
                batch_size = -1

            mean = K.mean(inputs, keepdims=True)
            std = K.sqrt(K.var(inputs, keepdims=True) + self.epsilon)
            var_x = (inputs - mean) / std

            retval = self.gamma * var_x + self.beta
        return retval

    def get_config(self):
        config = {'epsilon': self.epsilon,
                  'axis': self.axis,
                  'gamma_init': initializers.serialize(self.gamma_init),
                  'beta_init': initializers.serialize(self.beta_init),
                  'gamma_regularizer': regularizers.serialize(self.gamma_regularizer),
                  'beta_regularizer': regularizers.serialize(self.beta_regularizer),
                  'group': self.group}
        base_config = super(GroupNormalization, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
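

# Illustrative usage sketch (not called anywhere in faceswap): shows how
# GroupNormalization could slot into a small convolutional block. The layer
# sizes and the `group` value below are assumptions for demonstration only.
def _group_normalization_example():
    """ Build a toy model that applies GroupNormalization after a Conv2D """
    from keras.models import Sequential
    from keras.layers import Conv2D

    model = Sequential()
    # 64 output channels split into 16 groups of 4 channels each
    model.add(Conv2D(64, 3, padding="same", input_shape=(64, 64, 3)))
    model.add(GroupNormalization(group=16, data_format="channels_last"))
    return model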


# Update normalizations into Keras custom objects
for name, obj in inspect.getmembers(sys.modules[__name__]):
    if inspect.isclass(obj) and obj.__module__ == __name__:
        get_custom_objects().update({name: obj})
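

# Minimal sketch of why the registration above matters: once these layers are
# in Keras' global custom objects, a serialized model that references them can
# be rebuilt without an explicit `custom_objects` argument. The toy model
# below is an assumption for demonstration only and runs only when this
# module is executed directly, not on import.
if __name__ == "__main__":
    from keras.models import Sequential, model_from_json
    from keras.layers import Conv2D

    demo = Sequential()
    demo.add(Conv2D(8, 3, padding="same", input_shape=(32, 32, 3)))
    demo.add(InstanceNormalization(axis=-1))

    # Deserialization finds InstanceNormalization via the registered
    # custom objects, so no `custom_objects` argument is needed here.
    rebuilt = model_from_json(demo.to_json())
    rebuilt.summary()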