Mirror of https://github.com/deepfakes/faceswap (synced 2025-06-08 20:13:52 -04:00)
* Core Updates - Remove lib.utils.keras_backend_quiet and replace with get_backend() where relevant - Document lib.gpu_stats and lib.sys_info - Remove call to GPUStats.is_plaidml from convert and replace with get_backend() - lib.gui.menu - typo fix
* Update Dependencies; bump TensorFlow version check
* Port extraction to TF2
* Add custom import finder for loading Keras or tf.keras depending on backend (see the first sketch after this list)
* Add `tensorflow` to KerasFinder search path
* Basic TF2 training running
* model.initializers - docstring fix
* Fix and pass tests for TF2
* Replace Keras backend tests with faceswap backend tests
* Initial optimizers update
* Monkey-patch tf.keras optimizer
* Remove custom Adam optimizers and Memory Saving Gradients
* Remove multi-GPU option; add Distribution to CLI
* plugins.train.model._base: Add Mirrored, Central Storage and Default distribution strategies (see the second sketch after this list)
* Update TensorBoard kwargs for TF2
* Penalized Loss - fix for TF2 and AMD
* Fix syntax for TF 2.1
* requirements - typo fix
* Explicit None for clipnorm if using a distribution strategy
* Fix penalized loss for distribution strategies
* Update Dlight
* typo fix
* Pin to TF 2.2
* setup.py - install TensorFlow from pip if not available in Conda
* Add reduction options and set default for mirrored distribution strategy
* Explicitly use default strategy rather than nullcontext
* lib.model.backup_restore documentation
* Remove mirrored strategy reduction method and default based on OS
* Initial restructure - training
* Remove PingPong; start model.base refactor
* Model saving and resuming enabled
* More tidying up of model.base
* Enable backup and snapshotting
* Re-enable state file; remove loss names from state file; fix print loss function; set snapshot iterations correctly
* Revert original model to Keras Model structure rather than custom layer; output full model and sub-model summary; change NNBlocks to callables rather than custom Keras layers
* Apply custom Conv2D layer
* Finalize NNBlock restructure; update Dfaker blocks
* Fix reloading model under a different distribution strategy
* Pass command-line arguments through to trainer
* Remove training_opts from model and reference params directly
* Tidy up model __init__
* Re-enable TensorBoard logging; suppress "Model Not Compiled" warning
* Fix timelapse
* lib.model.nnblocks - bugfix residual block; port dfaker bugfix to original
* dfl-h128 ported
* DFL SAE ported
* IAE ported
* dlight ported
* lightweight ported
* realface ported
* unbalanced ported
* villain ported
* lib.cli.args - update batch size + move allow_growth to config
* Remove output shape definition; get image sizes per side rather than globally
* Strip mask input from encoder
* Fix learn mask and output learned mask to preview
* Trigger Allow Growth prior to setting strategy
* Fix GUI graphing
* GUI - display batch size correctly + fix training graphs
* Fix penalized loss
* Enable mixed precision training (see the third sketch after this list)
* Update analysis displayed batch to match input
* Penalized Loss - multi-GPU fix
* Fix all losses for TF2
* Fix Reflect Padding
* Allow different input size for each side of the model
* Fix conv-aware initialization on reload
* Switch allow_growth order
* Move mixed_precision to CLI
* Remove distribution strategies
* Compile penalized loss sub-function into LossContainer
* Bump default save interval to 250; generate preview on first iteration but don't save; fix iterations to start at 1 instead of 0; remove training deprecation warnings; bump some scripts.train log levels
* Add ability to refresh preview on demand in pop-up window
* Enable refresh of training preview from GUI
* Fix Convert; debug logging in Initializers
* Fix Preview Tool
* Update legacy TF1 weights to TF2; catch stats error on loading stats with missing logs
* lib.gui.popup_configure - make more responsive + document
* Multiple outputs supported in trainer; Original model - mask output bugfix
* Make universal inference model for convert; remove scaling from penalized mask loss (now handled at input to y_true)
* Fix inference model to work properly with all models
* Fix multi-scale output for convert
* Fix clipnorm issue with distribution strategies; edit error message on OOM
* Update PlaidML losses
* Add missing file
* Disable gmsd loss for plaidml
* PlaidML - basic training working
* clipnorm rewriting for mixed precision
* Inference model creation bugfixes
* Remove debug code
* Bugfix: default clipnorm to 1.0
* Remove all mask inputs from training code
* Remove mask inputs from convert
* GUI - Analysis tab - docstrings
* Fix rate in totals row
* lib.gui - only update display pages if they have focus
* Save the model on first iteration
* plaidml - fix SSIM loss with penalized loss
* tools.alignments - remove manual and fix jobs
* GUI - remove case formatting on help text
* GUI MultiSelect custom widget - set default values on init
* vgg_face2 - move to plugins.extract.recognition and use plugins._base base class; cli - add global GPU Exclude option; tools.sort - use global GPU Exclude option for backend; lib.model.session - exclude all GPUs when running in CPU mode; lib.cli.launcher - set backend to CPU mode when all GPUs excluded
* Cascade excluded devices to GPU Stats
* Explicit GPU selection for Train and Convert
* Reduce TensorFlow minimum GPU multiprocessor count to 4
* Remove compat.v1 code from extract
* Force TF to skip mixed precision compatibility check if GPUs have been filtered
* Add notes to config for non-working AMD losses
* Raise error if forcing extract to CPU mode
* Fix loading of legacy dfl-sae weights + dfl-sae typo fix
* Remove unused requirements; update sphinx requirements; fix broken rst file locations
* docs: lib.gui.display
* clipnorm AMD condition check
* Documentation - gui.display_analysis
* Documentation - gui.popup_configure
* Documentation - lib.logger
* Documentation - lib.model.initializers
* Documentation - lib.model.layers
* Documentation - lib.model.losses
* Documentation - lib.model.nn_blocks
* Documentation - lib.model.normalization
* Documentation - lib.model.session
* Documentation - lib.plaidml_stats
* Documentation: lib.training_data
* Documentation: lib.utils
* Documentation: plugins.train.model._base
* GUI Stats: prevent stats from using GPU
* Documentation - Original model
* Documentation: plugins.model.trainer._base
* Linting
* Unit tests: initializers + losses
* Unit tests: nn_blocks
* Bugfix - exclude GPU devices in train, not include
* Enable Exclude-Gpus in Extract
* Enable exclude GPUs in tools
* Disallow multiple plugin types in a single model folder
* Automatically add exclude_gpus argument for CPU backends
* CPU backend fixes
* Relax optimizer test threshold
* Default Train settings - set mask to Extended
* Update Extractor CLI help text; update to Python 3.8
* Fix FAN to run on CPU
* lib.plaidml_tools - typo fix
* Linux installer - check for curl
* Linux installer - typo fix
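First sketch: the custom import finder is what lets the same model and test code run against either the standalone (PlaidML-backed) Keras or tf.keras. faceswap's actual KerasFinder is a MetaPathFinder registered on sys.meta_path; aliasing sys.modules at startup, as below, is a simpler sketch with the same observable effect for plain ``import keras`` statements. The ``get_backend()`` stub stands in for lib.utils.get_backend() and the function name is illustrative, not faceswap's exact API:

import importlib
import sys


def get_backend():
    """ Illustrative stand-in for lib.utils.get_backend(). """
    return "nvidia"  # "amd" selects the PlaidML-backed standalone Keras


def install_keras_redirect():
    """ Make ``import keras`` resolve to ``tensorflow.keras`` on TF backends. """
    if get_backend() == "amd":
        return  # the standalone Keras package is imported normally
    # Registering the tf.keras module under the "keras" name means later
    # statements such as ``from keras import Input, Model, backend`` pick
    # up the TensorFlow implementation without any code changes.
    sys.modules["keras"] = importlib.import_module("tensorflow.keras")

The "Add `tensorflow` to KerasFinder search path" entry points at the same mechanism: the real finder needs to know where tf.keras lives so that submodule imports such as keras.layers also resolve.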
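Second sketch: how a CLI distribution choice might map onto tf.distribute. The function name and option strings are illustrative, not faceswap's exact API. Note that ``tf.distribute.get_strategy()`` returns the already-active default strategy as a no-op wrapper, which is why an explicit default strategy can replace the earlier nullcontext approach:

import tensorflow as tf


def get_strategy(choice="default"):
    """ Map a CLI distribution choice onto a tf.distribute strategy. """
    if choice == "mirrored":
        return tf.distribute.MirroredStrategy()
    if choice == "central-storage":
        return tf.distribute.experimental.CentralStorageStrategy()
    # The default strategy is a no-op wrapper, so every code path can run
    # inside ``strategy.scope()`` unconditionally.
    return tf.distribute.get_strategy()


strategy = get_strategy("mirrored")
with strategy.scope():
    pass  # build and compile the model here so variables are mirrored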
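Third sketch: several entries concern mixed precision and clipnorm. Under the TF 2.2 pin, mixed precision is enabled through the experimental policy API (promoted to ``tf.keras.mixed_precision.set_global_policy`` in later releases), and ``clipnorm`` could not be combined with a distribution strategy, which is why the log first forces it to None and later rewrites it. A minimal sketch under those assumptions; the flag and hyperparameter values are illustrative:

import tensorflow as tf
from tensorflow.keras.mixed_precision import experimental as mixed_precision

# Compute in float16 while keeping variables in float32.
mixed_precision.set_policy(mixed_precision.Policy("mixed_float16"))

# Only pass clipnorm when no multi-device strategy is active.
using_distribution_strategy = False  # illustrative flag
optimizer_kwargs = {} if using_distribution_strategy else {"clipnorm": 1.0}
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5, **optimizer_kwargs)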
137 lines
4.9 KiB
Python
#!/usr/bin/env python3
""" Tests for Faceswap Custom Layers.

Adapted from Keras tests.
"""

import pytest
import numpy as np

from keras import Input, Model, backend as K
from numpy.testing import assert_allclose

from lib.model import layers, normalization
from lib.utils import get_backend

from tests.utils import has_arg

CONV_SHAPE = (3, 3, 256, 2048)
CONV_ID = get_backend().upper()


def layer_test(layer_cls, kwargs=None, input_shape=None, input_dtype=None,
               input_data=None, expected_output=None,
               expected_output_dtype=None, fixed_batch_size=False):
    """Test routine for a layer with a single input tensor
    and single output tensor.
    """
    kwargs = kwargs if kwargs is not None else {}

    # generate input data
    if input_data is None:
        assert input_shape
        if not input_dtype:
            input_dtype = K.floatx()
        input_data_shape = list(input_shape)
        for i, var_e in enumerate(input_data_shape):
            if var_e is None:
                input_data_shape[i] = np.random.randint(1, 4)
        input_data = (10 * np.random.random(input_data_shape))
        input_data = input_data.astype(input_dtype)
    else:
        if input_shape is None:
            input_shape = input_data.shape
        if input_dtype is None:
            input_dtype = input_data.dtype
    if expected_output_dtype is None:
        expected_output_dtype = input_dtype

    # instantiation
    layer = layer_cls(**kwargs)

    # test get_weights, set_weights at layer level
    weights = layer.get_weights()
    layer.set_weights(weights)

    if isinstance(layer, (layers.ReflectionPadding2D, normalization.InstanceNormalization)):
        layer.build(input_shape)
    expected_output_shape = layer.compute_output_shape(input_shape)

    # test in functional API
    if fixed_batch_size:
        inp = Input(batch_shape=input_shape, dtype=input_dtype)
    else:
        inp = Input(shape=input_shape[1:], dtype=input_dtype)
    outp = layer(inp)
    assert K.dtype(outp) == expected_output_dtype

    # check with the functional API
    model = Model(inp, outp)

    actual_output = model.predict(input_data)
    actual_output_shape = actual_output.shape
    for expected_dim, actual_dim in zip(expected_output_shape,
                                        actual_output_shape):
        if expected_dim is not None:
            assert expected_dim == actual_dim

    if expected_output is not None:
        assert_allclose(actual_output, expected_output, rtol=1e-3)

    # test serialization, weight setting at model level
    model_config = model.get_config()
    recovered_model = model.__class__.from_config(model_config)
    if model.weights:
        weights = model.get_weights()
        recovered_model.set_weights(weights)
        _output = recovered_model.predict(input_data)
        assert_allclose(_output, actual_output, rtol=1e-3)

    # test training mode (e.g. useful when the layer has a
    # different behavior at training and testing time).
    if has_arg(layer.call, 'training'):
        model.compile('rmsprop', 'mse')
        model.train_on_batch(input_data, actual_output)

    # test instantiation from layer config
    layer_config = layer.get_config()
    layer_config['batch_input_shape'] = input_shape
    layer = layer.__class__.from_config(layer_config)

    # for further checks in the caller function
    return actual_output


@pytest.mark.parametrize('dummy', [None], ids=[get_backend().upper()])
def test_pixel_shuffler(dummy):  # pylint:disable=unused-argument
    """ Pixel Shuffler layer test """
    layer_test(layers.PixelShuffler, input_shape=(2, 4, 4, 1024))


@pytest.mark.skipif(get_backend() == "amd", reason="amd does not support this layer")
@pytest.mark.parametrize('dummy', [None], ids=[get_backend().upper()])
def test_subpixel_upscaling(dummy):  # pylint:disable=unused-argument
    """ Sub Pixel up-scaling layer test """
    layer_test(layers.SubPixelUpscaling, input_shape=(2, 4, 4, 1024))


@pytest.mark.parametrize('dummy', [None], ids=[get_backend().upper()])
def test_reflection_padding_2d(dummy):  # pylint:disable=unused-argument
    """ Reflection Padding 2D layer test """
    layer_test(layers.ReflectionPadding2D, input_shape=(2, 4, 4, 512))


@pytest.mark.parametrize('dummy', [None], ids=[get_backend().upper()])
def test_global_min_pooling_2d(dummy):  # pylint:disable=unused-argument
    """ Global Min Pooling 2D layer test """
    layer_test(layers.GlobalMinPooling2D, input_shape=(2, 4, 4, 1024))


@pytest.mark.parametrize('dummy', [None], ids=[get_backend().upper()])
def test_global_std_pooling_2d(dummy):  # pylint:disable=unused-argument
    """ Global Standard Deviation Pooling 2D layer test """
    layer_test(layers.GlobalStdDevPooling2D, input_shape=(2, 4, 4, 1024))


@pytest.mark.parametrize('dummy', [None], ids=[get_backend().upper()])
def test_l2_normalize(dummy):  # pylint:disable=unused-argument
    """ L2 Normalize layer test """
    layer_test(layers.L2_normalize, kwargs={"axis": 1}, input_shape=(2, 4, 4, 1024))
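``tests.utils.has_arg``, imported at the top of this file, gates the training-mode check in ``layer_test``. A minimal sketch of such a helper, assuming it mirrors the Keras ``has_arg`` utility these tests were adapted from (faceswap's actual implementation may differ):

import inspect


def has_arg(func, name, accept_all=False):
    """ Return True if ``func`` accepts ``name`` as a keyword argument.

    When ``accept_all`` is True, a ``**kwargs`` catch-all also counts. """
    signature = inspect.signature(func)
    parameter = signature.parameters.get(name)
    if parameter is None:
        if accept_all:
            return any(param.kind == inspect.Parameter.VAR_KEYWORD
                       for param in signature.parameters.values())
        return False
    return parameter.kind in (inspect.Parameter.POSITIONAL_OR_KEYWORD,
                              inspect.Parameter.KEYWORD_ONLY)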