Supervised Trainer class
tefla.core.training.SupervisedTrainer (model, cnf, training_iterator=
Args
- model: model definition
- cnf: dict, training configs
- training_iterator: iterator to use for training data access, processing and augmentations
- validation_iterator: iterator to use for validation data access, processing and augmentations
- start_epoch: int, training start epoch; when resuming training, provide the last epoch number to resume from. It is a required parameter for training data balancing
- resume_lr: float, learning rate to use for new training
- classification: bool, classification or regression
- clip_norm: bool, to clip gradient using gradient norm, stabilizes the training
- n_iters_per_epoch: int, number of iterations per epoch; e.g., total_training_samples / batch_size
- gpu_memory_fraction: amount of gpu memory to use
- is_summary: bool, to write summary or not
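Since `n_iters_per_epoch` must be supplied by the caller, it is typically derived from the dataset size. A minimal sketch (the sample count and batch size below are made-up illustration values, not defaults of the library):

```python
import math

# Hypothetical dataset: 35,126 training samples, batches of 128.
total_training_samples = 35126
batch_size = 128

# One epoch should cover every sample once, so round up.
n_iters_per_epoch = math.ceil(total_training_samples / batch_size)
```

The resulting value is then passed to `SupervisedTrainer` via the `n_iters_per_epoch` argument.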
Methods
fit (data_set, weights_from=None, start_epoch=1, summary_every=10, weights_dir='weights', verbose=0)
Args
- data_set: dataset instance to use to access data for training/validation
- weights_from: str, if not None, initializes the model from existing weights
- start_epoch: int, epoch number to start training from; e.g., for retraining, set the epoch number you want to resume training from
- summary_every: int, epoch interval to write summary; higher value means lower frequency of summary writing
- verbose: log level
Clips each gradient to the given maximum norm
tefla.core.training._clip_grad_norms (gradients_to_variables, max_norm=10)
Args
- gradients_to_variables: A list of gradient to variable pairs (tuples).
- max_norm: the maximum norm value.
Returns
A list of clipped gradient to variable pairs.
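Per-gradient norm clipping rescales any gradient whose L2 norm exceeds `max_norm`, leaving the others untouched. A framework-agnostic NumPy sketch of the same idea (the real helper operates on TensorFlow tensors):

```python
import numpy as np

def clip_grad_norms(gradients_to_variables, max_norm=10):
    """Clip each gradient in (gradient, variable) pairs to max_norm."""
    clipped = []
    for grad, var in gradients_to_variables:
        norm = np.linalg.norm(grad)
        if norm > max_norm:
            # Rescale so the gradient's L2 norm equals max_norm.
            grad = grad * (max_norm / norm)
        clipped.append((grad, var))
    return clipped
```

For example, a gradient with norm 30 comes back with norm 10 while its direction is preserved.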
Clips gradients by their global norm
tefla.core.training.clip_grad_global_norms (tvars, loss, opt, global_norm=1, gate_gradients=1, gradient_noise_scale=4.0, GATE_GRAPH=2, grad_loss=None, agre_method=None, col_grad_ops=False)
Args
- tvars: trainable variables used for gradient updates
- loss: total loss of the network
- opt: optimizer
- global_norm: the maximum global norm
Returns
A list of clipped gradient to variable pairs.
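Unlike per-gradient clipping, global-norm clipping scales all gradients jointly so their combined L2 norm stays at or below the threshold, preserving the relative magnitudes across variables. A NumPy sketch of that rescaling step (not the full tefla function, which also computes gradients and adds noise):

```python
import numpy as np

def clip_by_global_norm(grads, global_norm=1.0):
    """Jointly rescale gradients so their combined L2 norm is <= global_norm."""
    # Global norm: sqrt of the sum of squared entries over all gradients.
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > global_norm:
        scale = global_norm / total_norm
        grads = [g * scale for g in grads]
    return grads
```

Two gradients of norms 3 and 4 (global norm 5) clipped to a global norm of 1 are both scaled by 1/5.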
Multiply specified gradients
tefla.core.training.multiply_gradients (grads_and_vars, gradient_multipliers)
Args
- grads_and_vars: A list of gradient to variable pairs (tuples).
- gradient_multipliers: A map from either `Variables` or `Variable` op names to the coefficient by which the associated gradient should be scaled.
Returns
The updated list of gradient to variable pairs.
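Per-variable gradient multipliers are commonly used to train some layers with a smaller effective learning rate, e.g. pretrained layers during fine-tuning. A pure-Python sketch of the lookup-and-scale logic, keyed here by variable name for simplicity:

```python
def multiply_gradients(grads_and_vars, gradient_multipliers):
    """Scale selected gradients by a per-variable coefficient.

    Pairs whose variable has no entry in `gradient_multipliers`
    pass through unchanged.
    """
    out = []
    for grad, var in grads_and_vars:
        scale = gradient_multipliers.get(var, 1.0)
        out.append((grad * scale, var))
    return out
```

So `{"conv1": 0.1}` damps updates to `conv1` tenfold while every other variable keeps its full gradient.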
Adds scaled noise from a 0-mean normal distribution to gradients
tefla.core.training.add_scaled_noise_to_gradients (grads_and_vars, gradient_noise_scale=10.0)
Args
- grads_and_vars: list of gradient and variables
- gradient_noise_scale: value of the noise factor
Returns
The list of gradient to variable pairs with noise added to each gradient.
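Gradient noise draws from a zero-mean normal distribution, scales it, and adds it to every gradient, which can help optimization escape sharp regions. A NumPy sketch of the idea (the `rng` parameter is an addition for reproducibility, not part of the tefla signature):

```python
import numpy as np

def add_scaled_noise_to_gradients(grads_and_vars, gradient_noise_scale=10.0,
                                  rng=None):
    """Add zero-mean Gaussian noise, scaled by gradient_noise_scale,
    to every gradient in (gradient, variable) pairs."""
    rng = rng or np.random.default_rng(0)
    noisy = []
    for grad, var in grads_and_vars:
        noise = rng.normal(0.0, 1.0, size=np.shape(grad)) * gradient_noise_scale
        noisy.append((grad + noise, var))
    return noisy
```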