fluid.regularizer¶
append_regularization_ops¶
paddle.fluid.regularizer.append_regularization_ops(parameters_and_grads, regularization=None)
Create and add backward regularization Operators.
Creates and adds backward regularization operators in the BlockDesc. This adds the gradients of the regularizer function to the gradients of the parameters and returns the modified gradients; it is equivalent to implementing weight decay inside the optimizer.
Parameters:
- parameters_and_grads – A list of (parameter, gradient) pairs that need to be regularized.
- regularization – A global regularizer. It is applied to a parameter only if that parameter does not have its own regularizer set.

Returns: A list of (parameter, gradient) pairs in which the gradients have been regularized.
Return type: list[(Variable, Variable)]
Raises: Exception – Unknown regularization type.
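A minimal sketch of how this function might be called (assuming an optimizer and a loss Variable avg_cost have already been built elsewhere; Optimizer.backward is used here only to obtain the (parameter, gradient) list that this function expects):

import paddle.fluid as fluid

# Obtain (parameter, gradient) pairs from the backward pass.
params_grads = optimizer.backward(avg_cost)

# Append the regularization operators; parameters without their own
# regularizer fall back to the global L2 regularizer passed here.
params_grads = fluid.regularizer.append_regularization_ops(
    params_grads,
    regularization=fluid.regularizer.L2DecayRegularizer(
        regularization_coeff=0.1))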
L1Decay¶
paddle.fluid.regularizer.L1Decay
alias of L1DecayRegularizer
L2Decay¶
paddle.fluid.regularizer.L2Decay
alias of L2DecayRegularizer
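Since both names are plain aliases, either spelling constructs the same regularizer; a small illustration:

import paddle.fluid as fluid

# L2Decay is just the shorter alias of L2DecayRegularizer
# (and L1Decay of L1DecayRegularizer), so these two are equivalent.
reg_a = fluid.regularizer.L2Decay(regularization_coeff=0.1)
reg_b = fluid.regularizer.L2DecayRegularizer(regularization_coeff=0.1)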
L1DecayRegularizer¶
class paddle.fluid.regularizer.L1DecayRegularizer(regularization_coeff=0.0)
Implements the L1 Weight Decay Regularization.
L1 regularization encourages sparsity.

\[L1WeightDecay = reg\_coeff * sign(parameter)\]

Parameters: regularization_coeff (float) – regularization coefficient.
Examples
import paddle.fluid as fluid

# Attach an L1 regularizer to a single parameter when it is created.
program = fluid.framework.Program()
block = program.global_block()
mul_x = block.create_parameter(
    dtype="float32",
    shape=[5, 10],
    lod_level=0,
    name="mul.x",
    regularizer=fluid.regularizer.L1DecayRegularizer(0.5))
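The snippet above attaches the regularizer to one parameter; as a complementary sketch, the same regularizer can be passed to an optimizer as the global regularization (assuming avg_cost is a loss Variable defined elsewhere):

import paddle.fluid as fluid

# Apply L1 weight decay to all trainable parameters via the optimizer.
optimizer = fluid.optimizer.SGD(
    learning_rate=1e-3,
    regularization=fluid.regularizer.L1DecayRegularizer(
        regularization_coeff=0.5))
optimizer.minimize(avg_cost)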
L2DecayRegularizer¶
class paddle.fluid.regularizer.L2DecayRegularizer(regularization_coeff=0.0)
Implements the L2 Weight Decay Regularization.
Small values of L2 can help prevent overfitting the training data.

\[L2WeightDecay = reg\_coeff * parameter\]

Parameters: regularization_coeff (float) – regularization coefficient.
Examples
import paddle.fluid as fluid

# Pass an L2 regularizer to the optimizer as the global regularization.
# avg_cost is the loss Variable built earlier in the program.
optimizer = fluid.optimizer.Adagrad(
    learning_rate=1e-4,
    regularization=fluid.regularizer.L2DecayRegularizer(
        regularization_coeff=0.1))
optimizer.minimize(avg_cost)
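Conversely, the regularizer can be attached to a single parameter through ParamAttr, in which case that parameter uses its own regularizer instead of the global one; a sketch assuming data is an input Variable created elsewhere:

import paddle.fluid as fluid

# Per-parameter L2 decay on the weight of one fc layer only.
hidden = fluid.layers.fc(
    input=data,
    size=128,
    param_attr=fluid.ParamAttr(
        regularizer=fluid.regularizer.L2DecayRegularizer(
            regularization_coeff=0.1)))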