# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2024-03-28 18:04+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"Language: \n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=CHARSET\n"
"Content-Transfer-Encoding: 8bit\n"

#: lib/cli/args_train.py:30
msgid ""
"Train a model on extracted original (A) and swap (B) faces.\n"
"Training models can take a long time. Anything from 24hrs to over a week.\n"
"Model plugins can be configured in the 'Settings' menu."
msgstr ""

#: lib/cli/args_train.py:49 lib/cli/args_train.py:58
msgid "faces"
msgstr ""

#: lib/cli/args_train.py:51
msgid ""
"Input directory. A directory containing training images for face A. This is "
"the original face, i.e. the face that you want to remove and replace with "
"face B."
msgstr ""

#: lib/cli/args_train.py:60
msgid ""
"Input directory. A directory containing training images for face B. This is "
"the swap face, i.e. the face that you want to place onto the head of person "
"A."
msgstr ""

#: lib/cli/args_train.py:67 lib/cli/args_train.py:80 lib/cli/args_train.py:97
#: lib/cli/args_train.py:123 lib/cli/args_train.py:133
msgid "model"
msgstr ""

#: lib/cli/args_train.py:69
msgid ""
"Model directory. This is where the training data will be stored. You should "
"always specify a new folder for new models. If starting a new model, select "
"either an empty folder, or a folder which does not exist (which will be "
"created). If continuing to train an existing model, specify the location of "
"the existing model."
msgstr ""

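# Illustrative invocation for the three directory options above. The short
# flags -A, -B and -m and the example paths are assumptions for the sketch,
# not taken from this file:
#   python faceswap.py train -A /path/to/faces_a -B /path/to/faces_b -m /path/to/model
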
#: lib/cli/args_train.py:82
msgid ""
"R|Load the weights from a pre-existing model into a newly created model. For "
"most models this will load weights from the Encoder of the given model into "
"the encoder of the newly created model. Some plugins may have specific "
"configuration options allowing you to load weights from other layers. "
"Weights will only be loaded when creating a new model. This option will be "
"ignored if you are resuming an existing model. Generally you will also want "
"to 'freeze-weights' whilst the rest of your model catches up with your "
"Encoder.\n"
"NB: Weights can only be loaded from models of the same plugin as you intend "
"to train."
msgstr ""

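# Hedged sketch of combining weight loading with weight freezing as described
# in the entry above. The flag spellings --load-weights and --freeze-weights
# and the example path are assumptions based on the option names in the text:
#   python faceswap.py train ... --load-weights /path/to/existing_model --freeze-weights
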
#: lib/cli/args_train.py:99
msgid ""
"R|Select which trainer to use. Trainers can be configured from the Settings "
"menu or the config folder.\n"
"L|original: The original model created by /u/deepfakes.\n"
"L|dfaker: 64px in/128px out model from dfaker. Enable 'warp-to-landmarks' "
"for full dfaker method.\n"
"L|dfl-h128: 128px in/out model from deepfacelab.\n"
"L|dfl-sae: Adaptable model from deepfacelab.\n"
"L|dlight: A lightweight, high resolution DFaker variant.\n"
"L|iae: A model that uses intermediate layers to try to get better details.\n"
"L|lightweight: A lightweight model for low-end cards. Don't expect great "
"results. Can train as low as 1.6GB with batch size 8.\n"
"L|realface: A high detail, dual density model based on DFaker, with "
"customizable in/out resolution. The autoencoders are unbalanced so B>A swaps "
"won't work so well. By andenixa et al. Very configurable.\n"
"L|unbalanced: 128px in/out model from andenixa. The autoencoders are "
"unbalanced so B>A swaps won't work so well. Very configurable.\n"
"L|villain: 128px in/out model from villainguy. Very resource hungry (you "
"will require a GPU with a fair amount of VRAM). Good for details, but more "
"susceptible to color differences."
msgstr ""

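# Illustrative trainer selection. The -t/--trainer flag is an assumption;
# the trainer names come from the list above:
#   python faceswap.py train -A /path/to/faces_a -B /path/to/faces_b -m /path/to/model -t dfaker
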
#: lib/cli/args_train.py:125
msgid ""
"Output a summary of the model and exit. If a model folder is provided then a "
"summary of the saved model is displayed. Otherwise a summary of the model "
"that would be created by the chosen plugin and configuration settings is "
"displayed."
msgstr ""

#: lib/cli/args_train.py:135
msgid ""
"Freeze the weights of the model. Freezing weights means that some of the "
"parameters in the model will no longer continue to learn, but those that are "
"not frozen will continue to learn. For most models, this will freeze the "
"encoder, but some models may have configuration options for freezing other "
"layers."
msgstr ""

#: lib/cli/args_train.py:147 lib/cli/args_train.py:160
#: lib/cli/args_train.py:175 lib/cli/args_train.py:191
#: lib/cli/args_train.py:200
msgid "training"
msgstr ""

#: lib/cli/args_train.py:149
msgid ""
"Batch size. This is the number of images processed through the model for "
"each side per iteration. NB: As the model is fed 2 sides at a time, the "
"actual number of images within the model at any one time is double the "
"number that you set here. Larger batches require more GPU RAM."
msgstr ""

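# Worked example of the note above: with a batch size of 16, each iteration
# feeds 16 images per side, i.e. 32 images through the model in total.
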
#: lib/cli/args_train.py:162
msgid ""
"Length of training in iterations. This is only really used for automation. "
"There is no 'correct' number of iterations a model should be trained for. "
"You should stop training when you are happy with the previews. However, if "
"you want the model to stop automatically at a set number of iterations, you "
"can set that value here."
msgstr ""

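# Illustrative only; the --iterations flag name and the value are assumptions:
#   python faceswap.py train ... --iterations 200000
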
#: lib/cli/args_train.py:177
msgid ""
"R|Select the distribution strategy to use.\n"
"L|default: Use TensorFlow's default distribution strategy.\n"
"L|central-storage: Centralizes variables on the CPU whilst operations are "
"performed on 1 or more local GPUs. This can help save some VRAM at the cost "
"of some speed by not storing variables on the GPU. Note: Mixed-Precision is "
"not supported on multi-GPU setups.\n"
"L|mirrored: Supports synchronous distributed training across multiple local "
"GPUs. A copy of the model and all variables are loaded onto each GPU with "
"batches distributed to each GPU at each iteration."
msgstr ""

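# Illustrative only; the --distribution-strategy flag name is an assumption,
# while the strategy names come from the list above:
#   python faceswap.py train ... --distribution-strategy mirrored
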
#: lib/cli/args_train.py:193
msgid ""
"Disables TensorBoard logging. NB: Disabling logs means that you will not be "
"able to use the graph or analysis for this session in the GUI."
msgstr ""

#: lib/cli/args_train.py:202
msgid ""
"Use the Learning Rate Finder to discover the optimal learning rate for "
"training. For new models, this will calculate the optimal learning rate for "
"the model. For existing models this will use the optimal learning rate that "
"was discovered when initializing the model. Setting this option will ignore "
"the manually configured learning rate (configurable in train settings)."
msgstr ""

#: lib/cli/args_train.py:215 lib/cli/args_train.py:225
msgid "Saving"
msgstr ""

#: lib/cli/args_train.py:216
msgid "Sets the number of iterations between each model save."
msgstr ""

#: lib/cli/args_train.py:227
msgid ""
"Sets the number of iterations before saving a backup snapshot of the model "
"in its current state. Set to 0 to disable."
msgstr ""

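# Worked example of the two save settings above (the numbers are illustrative):
# with a save interval of 250, the model file is rewritten every 250
# iterations; with a snapshot interval of 25000, a separate backup copy is
# additionally kept every 25000 iterations.
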
#: lib/cli/args_train.py:234 lib/cli/args_train.py:246
#: lib/cli/args_train.py:258
msgid "timelapse"
msgstr ""

#: lib/cli/args_train.py:236
msgid ""
"Optional for creating a timelapse. Timelapse will save an image of your "
"selected faces into the timelapse-output folder at every save iteration. "
"This should be the input folder of 'A' faces that you would like to use for "
"creating the timelapse. You must also supply a --timelapse-output and a --"
"timelapse-input-B parameter."
msgstr ""

#: lib/cli/args_train.py:248
msgid ""
"Optional for creating a timelapse. Timelapse will save an image of your "
"selected faces into the timelapse-output folder at every save iteration. "
"This should be the input folder of 'B' faces that you would like to use for "
"creating the timelapse. You must also supply a --timelapse-output and a --"
"timelapse-input-A parameter."
msgstr ""

#: lib/cli/args_train.py:260
msgid ""
"Optional for creating a timelapse. Timelapse will save an image of your "
"selected faces into the timelapse-output folder at every save iteration. If "
"the input folders are supplied but no output folder, it will default to your "
"model folder/timelapse/"
msgstr ""

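# Illustrative timelapse setup; the --timelapse-* flag names come from the
# descriptions above, while the folder paths are assumptions:
#   python faceswap.py train ... --timelapse-input-A /path/to/faces_a \
#       --timelapse-input-B /path/to/faces_b --timelapse-output /path/to/timelapse
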
#: lib/cli/args_train.py:269 lib/cli/args_train.py:276
msgid "preview"
msgstr ""

#: lib/cli/args_train.py:270
msgid "Show the training preview output in a separate window."
msgstr ""

#: lib/cli/args_train.py:278
msgid ""
"Writes the training result to a file. The image will be stored in the root "
"of your FaceSwap folder."
msgstr ""

#: lib/cli/args_train.py:285 lib/cli/args_train.py:295
#: lib/cli/args_train.py:305 lib/cli/args_train.py:315
msgid "augmentation"
msgstr ""

#: lib/cli/args_train.py:287
msgid ""
"Warps training faces to closely matched landmarks from the opposite face-set "
"rather than randomly warping the face. This is the 'dfaker' way of doing "
"warping."
msgstr ""

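# Illustrative only; the --warp-to-landmarks flag spelling is assumed from the
# 'warp-to-landmarks' option referenced in the trainer list earlier in this
# file:
#   python faceswap.py train ... --warp-to-landmarks
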
#: lib/cli/args_train.py:297
msgid ""
"To effectively learn, a random set of images are flipped horizontally. "
"Sometimes it is desirable for this not to occur. Generally this should be "
"left off except for during 'fit training'."
msgstr ""

#: lib/cli/args_train.py:307
msgid ""
"Color augmentation helps make the model less susceptible to color "
"differences between the A and B sets, at an increased training time cost. "
"Enable this option to disable color augmentation."
msgstr ""

#: lib/cli/args_train.py:317
msgid ""
"Warping is integral to training the Neural Network. This option should only "
"be enabled towards the very end of training to try to bring out more detail. "
"Think of it as 'fine-tuning'. Enabling this option from the beginning is "
"likely to kill a model and lead to terrible results."
msgstr ""