seqgra.evaluator.gradientbased.ebphelper module¶
- class EBAvgPool2d[source]¶
Bases: torch.autograd.function.Function
- apply()¶
- static backward(ctx, grad_output)[source]¶
Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad, a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs its gradient computed w.r.t. the output.
- dirty_tensors¶
- static forward(ctx, inp, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)[source]¶
Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can then be retrieved during the backward pass.
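The forward()/backward() contract described above is what all three Function subclasses in this module follow. As a minimal, self-contained sketch of the mechanics (purely illustrative, not the actual EBAvgPool2d implementation), a custom Function computing y = x ** 2 could look like this:

    import torch

    class Square(torch.autograd.Function):
        """Illustrative custom Function computing y = x ** 2."""

        @staticmethod
        def forward(ctx, inp):
            ctx.save_for_backward(inp)     # keep the input for the backward pass
            return inp * inp

        @staticmethod
        def backward(ctx, grad_output):
            (inp,) = ctx.saved_tensors
            grad_inp = None
            if ctx.needs_input_grad[0]:    # only compute gradients that are requested
                grad_inp = 2 * inp * grad_output
            return grad_inp                # one return value per forward() input

    x = torch.randn(3, requires_grad=True)
    y = Square.apply(x)                    # Function subclasses are invoked via apply()
    y.sum().backward()                     # x.grad == 2 * x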
- is_traceable = False¶
- mark_dirty(*args)¶
Marks given tensors as modified in an in-place operation.
This should be called at most once, only from inside the forward() method, and all arguments should be inputs.
Every tensor that’s been modified in-place in a call to forward() should be given to this function, to ensure correctness of our checks. It doesn’t matter whether the function is called before or after modification.
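A hedged sketch of how mark_dirty() is typically used (an illustration of the rule above, not something the classes on this page necessarily do):

    import torch

    class AddOneInplace(torch.autograd.Function):
        """Illustrative in-place Function; not part of this module."""

        @staticmethod
        def forward(ctx, inp):
            inp.add_(1.0)          # the input is modified in place ...
            ctx.mark_dirty(inp)    # ... so it must be declared dirty
            return inp

        @staticmethod
        def backward(ctx, grad_output):
            return grad_output     # d(x + 1)/dx == 1

    x = torch.randn(3, requires_grad=True)
    y = AddOneInplace.apply(x.clone())     # in-place ops need a non-leaf tensor
    y.sum().backward()                     # x.grad is a tensor of ones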
- mark_non_differentiable(*args)¶
Marks outputs as non-differentiable.
This should be called at most once, only from inside the forward() method, and all arguments should be outputs.
This will mark outputs as not requiring gradients, increasing the efficiency of backward computation. You still need to accept a gradient for each output in backward(), but it’s always going to be a zero tensor with the same shape as the corresponding output.
This is used e.g. for indices returned from a max Function.
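The max-indices case mentioned above can be sketched as follows (again purely illustrative and independent of EBAvgPool2d; a 2-D input of shape (batch, features) is assumed):

    import torch

    class MaxWithIndex(torch.autograd.Function):
        """Illustrative Function returning differentiable values plus integer indices."""

        @staticmethod
        def forward(ctx, inp):
            values, indices = inp.max(dim=-1)
            ctx.save_for_backward(inp, indices)
            ctx.mark_non_differentiable(indices)   # indices carry no gradient
            return values, indices

        @staticmethod
        def backward(ctx, grad_values, grad_indices):
            # grad_indices is the zero tensor described above and can be ignored
            inp, indices = ctx.saved_tensors
            grad_inp = torch.zeros_like(inp)
            grad_inp.scatter_(-1, indices.unsqueeze(-1), grad_values.unsqueeze(-1))
            return grad_inp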
- materialize_grads¶
- metadata¶
- name()¶
- needs_input_grad¶
- next_functions¶
- non_differentiable¶
- register_hook()¶
- requires_grad¶
- save_for_backward(*tensors)¶
Saves given tensors for a future call to backward().
This should be called at most once, and only from inside the forward() method.
Later, saved tensors can be accessed through the saved_tensors attribute. Before returning them to the user, a check is made to ensure they weren’t used in any in-place operation that modified their content.
Arguments can also be None.
- saved_tensors¶
- saved_variables¶
- set_materialize_grads(value)¶
Sets whether to materialize output grad tensors. Default is True.
This should be called only from inside the forward() method.
If True, undefined output grad tensors will be expanded to tensors full of zeros prior to calling the backward() method.
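A small sketch of the effect described above, assuming an illustrative Function with two outputs (not part of this module):

    import torch

    class TwoCopies(torch.autograd.Function):
        """Illustrative Function that opts out of gradient materialization."""

        @staticmethod
        def forward(ctx, inp):
            ctx.set_materialize_grads(False)
            return inp.clone(), inp.clone()

        @staticmethod
        def backward(ctx, grad_a, grad_b):
            # With materialization off, an unused output's gradient arrives as
            # None instead of a zero-filled tensor, so it must be handled here.
            grads = [g for g in (grad_a, grad_b) if g is not None]
            return sum(grads) if grads else None

    x = torch.randn(3, requires_grad=True)
    a, b = TwoCopies.apply(x)
    a.sum().backward()     # only `a` is used, so grad_b is None in backward()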
- to_save¶
- class EBConv2d[source]¶
Bases: torch.autograd.function.Function
- apply()¶
- static backward(ctx, grad_output)[source]¶
Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad, a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs its gradient computed w.r.t. the output.
- dirty_tensors¶
- static forward(ctx, inp, weight, bias, stride, padding, dilation, groups)[source]¶
Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can then be retrieved during the backward pass.
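Because EBConv2d is a Function subclass, it is not instantiated; it is invoked through apply() with the argument order shown in the forward() signature above. A hypothetical call, assuming the arguments mirror torch.nn.functional.conv2d conventions (the shapes and tuple-valued stride/padding/dilation here are assumptions, not taken from the source):

    import torch
    from seqgra.evaluator.gradientbased.ebphelper import EBConv2d

    inp = torch.randn(1, 4, 8, 8, requires_grad=True)   # (batch, channels, H, W)
    weight = torch.randn(3, 4, 3, 3)                    # (out_ch, in_ch, kH, kW)
    bias = torch.randn(3)

    out = EBConv2d.apply(inp, weight, bias, (1, 1), (1, 1), (1, 1), 1)
    out.sum().backward()    # gradients are routed through EBConv2d.backward()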
- is_traceable = False¶
- mark_dirty(*args)¶
Marks given tensors as modified in an in-place operation.
This should be called at most once, only from inside the forward() method, and all arguments should be inputs.
Every tensor that’s been modified in-place in a call to forward() should be given to this function, to ensure correctness of our checks. It doesn’t matter whether the function is called before or after modification.
- mark_non_differentiable(*args)¶
Marks outputs as non-differentiable.
This should be called at most once, only from inside the forward() method, and all arguments should be outputs.
This will mark outputs as not requiring gradients, increasing the efficiency of backward computation. You still need to accept a gradient for each output in backward(), but it’s always going to be a zero tensor with the same shape as the corresponding output.
This is used e.g. for indices returned from a max Function.
- materialize_grads¶
- metadata¶
- name()¶
- needs_input_grad¶
- next_functions¶
- non_differentiable¶
- register_hook()¶
- requires_grad¶
- save_for_backward(*tensors)¶
Saves given tensors for a future call to backward().
This should be called at most once, and only from inside the forward() method.
Later, saved tensors can be accessed through the saved_tensors attribute. Before returning them to the user, a check is made to ensure they weren’t used in any in-place operation that modified their content.
Arguments can also be None.
- saved_tensors¶
- saved_variables¶
- set_materialize_grads(value)¶
Sets whether to materialize output grad tensors. Default is True.
This should be called only from inside the forward() method.
If True, undefined output grad tensors will be expanded to tensors full of zeros prior to calling the backward() method.
- to_save¶
- class EBLinear[source]¶
Bases: torch.autograd.function.Function
- apply()¶
- static backward(ctx, grad_output)[source]¶
Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad, a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs its gradient computed w.r.t. the output.
- dirty_tensors¶
- static forward(ctx, inp, weight, bias=None)[source]¶
Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can then be retrieved during the backward pass.
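As with the other classes on this page, EBLinear is called through apply() rather than instantiated. A hypothetical usage, assuming the weight follows torch.nn.functional.linear’s (out_features, in_features) layout (an assumption, not taken from the source):

    import torch
    from seqgra.evaluator.gradientbased.ebphelper import EBLinear

    inp = torch.randn(2, 4, requires_grad=True)   # (batch, in_features)
    weight = torch.randn(3, 4)                    # (out_features, in_features)
    bias = torch.randn(3)

    out = EBLinear.apply(inp, weight, bias)       # bias may also be None
    out.sum().backward()                          # gradients are routed through EBLinear.backward()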
- is_traceable = False¶
- mark_dirty(*args)¶
Marks given tensors as modified in an in-place operation.
This should be called at most once, only from inside the forward() method, and all arguments should be inputs.
Every tensor that’s been modified in-place in a call to forward() should be given to this function, to ensure correctness of our checks. It doesn’t matter whether the function is called before or after modification.
- mark_non_differentiable(*args)¶
Marks outputs as non-differentiable.
This should be called at most once, only from inside the forward() method, and all arguments should be outputs.
This will mark outputs as not requiring gradients, increasing the efficiency of backward computation. You still need to accept a gradient for each output in backward(), but it’s always going to be a zero tensor with the same shape as the corresponding output.
This is used e.g. for indices returned from a max Function.
- materialize_grads¶
- metadata¶
- name()¶
- needs_input_grad¶
- next_functions¶
- non_differentiable¶
- register_hook()¶
- requires_grad¶
- save_for_backward(*tensors)¶
Saves given tensors for a future call to backward().
This should be called at most once, and only from inside the forward() method.
Later, saved tensors can be accessed through the saved_tensors attribute. Before returning them to the user, a check is made to ensure they weren’t used in any in-place operation that modified their content.
Arguments can also be None.
- saved_tensors¶
- saved_variables¶
- set_materialize_grads(value)¶
Sets whether to materialize output grad tensors. Default is True.
This should be called only from inside the forward() method.
If True, undefined output grad tensors will be expanded to tensors full of zeros prior to calling the backward() method.
- to_save¶