mirror of https://github.com/deepfakes/faceswap synced 2025-06-08 20:13:52 -04:00
faceswap/tools/alignments/cli.py
torzdf d8557c1970
Faceswap 2.0 (#1045)
* Core Updates
    - Remove lib.utils.keras_backend_quiet and replace with get_backend() where relevant
    - Document lib.gpu_stats and lib.sys_info
    - Remove call to GPUStats.is_plaidml from convert and replace with get_backend()
    - lib.gui.menu - typofix

* Update Dependencies
Bump Tensorflow Version Check

* Port extraction to tf2

* Add custom import finder for loading Keras or tf.keras depending on backend

* Add `tensorflow` to KerasFinder search path
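The import-finder change above can be sketched with Python's meta path hooks. This is an illustrative sketch only (the real `KerasFinder` in the faceswap codebase differs in detail): a `MetaPathFinder` that resolves `keras` imports under the `tensorflow` namespace on the nvidia/TF backend and leaves them untouched on the AMD (plaidml) backend, which uses stand-alone Keras.

```python
import importlib.abc
import importlib.util


class KerasFinder(importlib.abc.MetaPathFinder):
    """Illustrative sketch only; the real faceswap KerasFinder differs.

    Redirects imports of ``keras`` to ``tensorflow.keras`` when running on
    the nvidia/TF backend, and defers to the normal finders on the AMD
    (plaidml) backend, where stand-alone Keras is used as-is.
    """

    def __init__(self, backend):
        self._backend = backend  # assumption: "nvidia" or "amd"

    def find_spec(self, fullname, path, target=None):
        if fullname != "keras" and not fullname.startswith("keras."):
            return None  # not a keras import; let the normal finders handle it
        if self._backend == "amd":
            return None  # stand-alone Keras resolves normally on AMD
        # Look the module up under the tensorflow namespace instead
        return importlib.util.find_spec("tensorflow." + fullname)
```

Installed early via `sys.meta_path.insert(0, KerasFinder(backend))`, every later `import keras` is routed through `find_spec` before the standard finders run.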

* Basic TF2 training running

* model.initializers - docstring fix

* Fix and pass tests for tf2

* Replace Keras backend tests with faceswap backend tests

* Initial optimizers update

* Monkey patch tf.keras optimizer

* Remove custom Adam Optimizers and Memory Saving Gradients

* Remove multi-gpu option. Add Distribution to cli

* plugins.train.model._base: Add Mirror, Central and Default distribution strategies

* Update tensorboard kwargs for tf2

* Penalized Loss - Fix for TF2 and AMD

* Fix syntax for tf2.1

* requirements typo fix

* Explicit None for clipnorm if using a distribution strategy

* Fix penalized loss for distribution strategies

* Update Dlight

* typo fix

* Pin to TF2.2

* setup.py - Install tensorflow from pip if not available in Conda

* Add reduction options and set default for mirrored distribution strategy

* Explicitly use default strategy rather than nullcontext

* lib.model.backup_restore documentation

* Remove mirrored strategy reduction method and default based on OS

* Initial restructure - training

* Remove PingPong
Start model.base refactor

* Model saving and resuming enabled

* More tidying up of model.base

* Enable backup and snapshotting

* Re-enable state file
Remove loss names from state file
Fix print loss function
Set snapshot iterations correctly

* Revert original model to Keras Model structure rather than custom layer
Output full model and sub model summary
Change NNBlocks to callables rather than custom keras layers

* Apply custom Conv2D layer

* Finalize NNBlock restructure
Update Dfaker blocks

* Fix reloading model under a different distribution strategy

* Pass command line arguments through to trainer

* Remove training_opts from model and reference params directly

* Tidy up model __init__

* Re-enable tensorboard logging
Suppress "Model Not Compiled" warning

* Fix timelapse

* lib.model.nnblocks - Bugfix residual block
Port dfaker
bugfix original

* dfl-h128 ported

* DFL SAE ported

* IAE Ported

* dlight ported

* port lightweight

* realface ported

* unbalanced ported

* villain ported

* lib.cli.args - Update Batchsize + move allow_growth to config

* Remove output shape definition
Get image sizes per side rather than globally

* Strip mask input from encoder

* Fix learn mask and output learned mask to preview

* Trigger Allow Growth prior to setting strategy

* Fix GUI Graphing

* GUI - Display batchsize correctly + fix training graphs

* Fix penalized loss

* Enable mixed precision training

* Update analysis displayed batch to match input

* Penalized Loss - Multi-GPU Fix

* Fix all losses for TF2

* Fix Reflect Padding

* Allow different input size for each side of the model

* Fix conv-aware initialization on reload

* Switch allow_growth order

* Move mixed_precision to cli

* Remove distribution strategies

* Compile penalized loss sub-function into LossContainer

* Bump default save interval to 250
Generate preview on first iteration but don't save
Fix iterations to start at 1 instead of 0
Remove training deprecation warnings
Bump some scripts.train loglevels

* Add ability to refresh preview on demand on pop-up window

* Enable refresh of training preview from GUI

* Fix Convert
Debug logging in Initializers

* Fix Preview Tool

* Update Legacy TF1 weights to TF2
Catch stats error on loading stats with missing logs

* lib.gui.popup_configure - Make more responsive + document

* Multiple Outputs supported in trainer
Original Model - Mask output bugfix

* Make universal inference model for convert
Remove scaling from penalized mask loss (now handled at input to y_true)

* Fix inference model to work properly with all models

* Fix multi-scale output for convert

* Fix clipnorm issue with distribution strategies
Edit error message on OOM

* Update plaidml losses

* Add missing file

* Disable gmsd loss for plaidml

* PlaidML - Basic training working

* clipnorm rewriting for mixed-precision

* Inference model creation bugfixes

* Remove debug code

* Bugfix: Default clipnorm to 1.0

* Remove all mask inputs from training code

* Remove mask inputs from convert

* GUI - Analysis Tab - Docstrings

* Fix rate in totals row

* lib.gui - Only update display pages if they have focus

* Save the model on first iteration

* plaidml - Fix SSIM loss with penalized loss

* tools.alignments - Remove manual and fix jobs

* GUI - Remove case formatting on help text

* gui MultiSelect custom widget - Set default values on init

* vgg_face2 - Move to plugins.extract.recognition and use plugins._base base class
cli - Add global GPU Exclude Option
tools.sort - Use global GPU Exclude option for backend
lib.model.session - Exclude all GPUs when running in CPU mode
lib.cli.launcher - Set backend to CPU mode when all GPUs excluded

* Cascade excluded devices to GPU Stats

* Explicit GPU selection for Train and Convert

* Reduce Tensorflow Min GPU Multiprocessor Count to 4

* remove compat.v1 code from extract

* Force TF to skip mixed precision compatibility check if GPUs have been filtered

* Add notes to config for non-working AMD losses

* Raise error if forcing extract to CPU mode

* Fix loading of legacy dfl-sae weights + dfl-sae typo fix

* Remove unused requirements
Update sphinx requirements
Fix broken rst file locations

* docs: lib.gui.display

* clipnorm amd condition check

* documentation - gui.display_analysis

* Documentation - gui.popup_configure

* Documentation - lib.logger

* Documentation - lib.model.initializers

* Documentation - lib.model.layers

* Documentation - lib.model.losses

* Documentation - lib.model.nn_blocks

* Documentation - lib.model.normalization

* Documentation - lib.model.session

* Documentation - lib.plaidml_stats

* Documentation: lib.training_data

* Documentation: lib.utils

* Documentation: plugins.train.model._base

* GUI Stats: prevent stats from using GPU

* Documentation - Original Model

* Documentation: plugins.model.trainer._base

* linting

* unit tests: initializers + losses

* unit tests: nn_blocks

* bugfix - Exclude gpu devices in train, not include

* Enable Exclude-Gpus in Extract

* Enable exclude gpus in tools

* Disallow multiple plugin types in a single model folder

* Automatically add exclude_gpus argument in for cpu backends
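The automatic exclusion above can be illustrated with a small helper. This is a hypothetical sketch, not the actual faceswap implementation (which lives in the launcher code and differs in detail): when the cpu backend is active, append an `--exclude-gpus` argument listing every GPU index so no device is ever initialised.

```python
def inject_exclude_gpus(argv, backend, gpu_count):
    """Hypothetical helper: on the cpu backend, append ``--exclude-gpus``
    with every GPU index unless the user already passed the flag."""
    if backend != "cpu" or "--exclude-gpus" in argv:
        return list(argv)  # nothing to do, or GPUs already excluded
    return list(argv) + ["--exclude-gpus"] + [str(idx) for idx in range(gpu_count)]
```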

* Cpu backend fixes

* Relax optimizer test threshold

* Default Train settings - Set mask to Extended

* Update Extractor cli help text
Update to Python 3.8

* Fix FAN to run on CPU

* lib.plaidml_tools - typofix

* Linux installer - check for curl

* linux installer - typo fix
2020-08-12 10:36:41 +01:00

149 lines
8 KiB
Python

#!/usr/bin/env python3
""" Command Line Arguments for tools """
from lib.cli.args import FaceSwapArgs
from lib.cli.actions import DirOrFileFullPaths, DirFullPaths, FilesFullPaths, Radio, Slider

_HELPTEXT = "This command lets you perform various tasks pertaining to an alignments file."


class AlignmentsArgs(FaceSwapArgs):
    """ Class to parse the command line arguments for Alignments tool """

    @staticmethod
    def get_info():
        """ Return command information """
        return ("Alignments tool\nThis tool allows you to perform numerous actions on or using "
                "an alignments file against its corresponding faceset/frame source.")

    def get_argument_list(self):
        frames_dir = " Must Pass in a frames folder/source video file (-fr)."
        faces_dir = " Must Pass in a faces folder (-fc)."
        frames_or_faces_dir = (" Must Pass in either a frames folder/source video file OR a "
                               "faces folder (-fr or -fc).")
        frames_and_faces_dir = (" Must Pass in a frames folder/source video file AND a faces "
                                "folder (-fr and -fc).")
        output_opts = " Use the output option (-o) to process results."
        align_eyes = " Can optionally use the align-eyes switch (-ae)."
        argument_list = list()
        argument_list.append(dict(
            opts=("-j", "--job"),
            action=Radio,
            type=str,
            choices=("dfl", "draw", "extract", "merge", "missing-alignments", "missing-frames",
                     "leftover-faces", "multi-faces", "no-faces", "remove-faces", "remove-frames",
                     "rename", "sort", "spatial", "update-hashes"),
            required=True,
            help="R|Choose which action you want to perform. NB: All actions require an "
                 "alignments file (-a) to be passed in."
                 "\nL|'dfl': Create an alignments file from faces extracted from DeepFaceLab. "
                 "Specify 'dfl' as the 'alignments file' entry and the folder containing the dfl "
                 "faces as the 'faces folder' ('-a dfl -fc <source faces folder>')"
                 "\nL|'draw': Draw landmarks on frames in the selected folder/video. A subfolder "
                 "will be created within the frames folder to hold the output.{0}"
                 "\nL|'extract': Re-extract faces from the source frames/video based on alignment "
                 "data. This is a lot quicker than re-detecting faces. Can pass in the '-een' "
                 "(--extract-every-n) parameter to only extract every nth frame.{1}{2}"
                 "\nL|'merge': Merge multiple alignment files into one. Specify a space separated "
                 "list of alignments files with the -a flag. Optionally specify a faces (-fc) "
                 "folder to filter the final alignments file to only those faces that appear "
                 "within the provided folder."
                 "\nL|'missing-alignments': Identify frames that do not exist in the alignments "
                 "file.{3}{0}"
                 "\nL|'missing-frames': Identify frames in the alignments file that do not appear "
                 "within the frames folder/video.{3}{0}"
                 "\nL|'leftover-faces': Identify faces in the faces folder that do not exist in "
                 "the alignments file.{3}{4}"
                 "\nL|'multi-faces': Identify where multiple faces exist within the alignments "
                 "file.{3}{5}"
                 "\nL|'no-faces': Identify frames that exist within the alignment file but no "
                 "faces were detected.{3}{0}"
                 "\nL|'remove-faces': Remove deleted faces from an alignments file. The original "
                 "alignments file will be backed up.{4}"
                 "\nL|'remove-frames': Remove deleted frames from an alignments file. The "
                 "original alignments file will be backed up.{0}"
                 "\nL|'rename': Rename faces to correspond with their parent frame and position "
                 "index in the alignments file (i.e. how they are named after running extract).{4}"
                 "\nL|'sort': Re-index the alignments from left to right. For alignments with "
                 "multiple faces this will ensure that the left-most face is at index 0. "
                 "Optionally pass in a faces folder (-fc) to also rename extracted faces."
                 "\nL|'spatial': Perform spatial and temporal filtering to smooth alignments "
                 "(EXPERIMENTAL!)"
                 "\nL|'update-hashes': Recalculate the face hashes. Only use this if you have "
                 "altered the extracted faces (e.g. colour adjust). The files MUST be named "
                 "'<frame_name>_face index' (i.e. how they are named after running extract)."
                 "{4}".format(frames_dir, frames_and_faces_dir, align_eyes, output_opts,
                              faces_dir, frames_or_faces_dir)))
        argument_list.append(dict(
            opts=("-a", "--alignments_file"),
            action=FilesFullPaths,
            dest="alignments_file",
            nargs="+",
            group="data",
            required=True,
            filetypes="alignments",
            help="Full path to the alignments file to be processed. If merging alignments, then "
                 "multiple files can be selected, space separated"))
        argument_list.append(dict(
            opts=("-fc", "-faces_folder"),
            action=DirFullPaths,
            dest="faces_dir",
            group="data",
            help="Directory containing extracted faces."))
        argument_list.append(dict(
            opts=("-fr", "-frames_folder"),
            action=DirOrFileFullPaths,
            dest="frames_dir",
            filetypes="video",
            group="data",
            help="Directory containing source frames that faces were extracted from."))
        argument_list.append(dict(
            opts=("-o", "--output"),
            action=Radio,
            type=str,
            choices=("console", "file", "move"),
            group="processing",
            default="console",
            help="R|How to output discovered items ('faces' and 'frames' only):"
                 "\nL|'console': Print the list of frames to the screen. (DEFAULT)"
                 "\nL|'file': Output the list of frames to a text file (stored within the source "
                 "directory)."
                 "\nL|'move': Move the discovered items to a sub-folder within the source "
                 "directory."))
        argument_list.append(dict(
            opts=("-een", "--extract-every-n"),
            type=int,
            action=Slider,
            dest="extract_every_n",
            min_max=(1, 100),
            default=1,
            rounding=1,
            group="extract",
            help="[Extract only] Extract every 'nth' frame. This option will skip frames when "
                 "extracting faces. For example a value of 1 will extract faces from every "
                 "frame, a value of 10 will extract faces from every 10th frame."))
        argument_list.append(dict(
            opts=("-sz", "--size"),
            type=int,
            action=Slider,
            min_max=(128, 512),
            default=256,
            group="extract",
            rounding=64,
            help="[Extract only] The output size of extracted faces."))
        argument_list.append(dict(
            opts=("-ae", "--align-eyes"),
            action="store_true",
            dest="align_eyes",
            group="extract",
            default=False,
            help="[Extract only] Perform extra alignment to ensure left/right eyes are at the "
                 "same height."))
        argument_list.append(dict(
            opts=("-l", "--large"),
            action="store_true",
            group="extract",
            default=False,
            help="[Extract only] Only extract faces that have not been upscaled to the required "
                 "size (`-sz`, `--size`). Useful for excluding low-res images from a training "
                 "set."))
        return argument_list
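For context, the option dicts returned by `get_argument_list` map fairly directly onto `argparse`. A minimal sketch of that mapping (the `build_parser` helper is hypothetical; faceswap's actual `FaceSwapArgs` base class also wires up the custom `Radio`/`Slider` actions, file pickers, and GUI metadata such as `group` and `min_max`, which are assumed away here):

```python
import argparse

# Only the keys that plain argparse's add_argument understands; the
# GUI-only metadata (group, min_max, rounding, filetypes) is dropped.
_ARGPARSE_KEYS = ("action", "type", "choices", "required", "default",
                  "dest", "nargs", "help")


def build_parser(argument_list):
    """Hypothetical sketch: feed dict-based option definitions, shaped like
    the ones above, into a plain ArgumentParser."""
    parser = argparse.ArgumentParser()
    for option in argument_list:
        kwargs = {key: option[key] for key in _ARGPARSE_KEYS if key in option}
        parser.add_argument(*option["opts"], **kwargs)
    return parser
```

Each dict's `opts` tuple supplies the flag names positionally, and the remaining recognised keys pass through as keyword arguments, which is why the real code can carry extra GUI hints in the same dicts and strip them before parsing.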