Layers

Normalizing flow layers.
from fastai.imports import *

source

regist_layer

 regist_layer (layer_class)

Registers a flow layer class so it can later be retrieved by name.

source

get_flow_layer

 get_flow_layer (layer_name:str)

Returns the registered flow layer class for `layer_name`.
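The registry implementation is not shown on this page; the following is a minimal sketch of the pattern these two helpers presumably implement (the registry dict and keying-by-class-name are assumptions, not the actual implementation):

```python
# Hypothetical sketch of the layer registry behind regist_layer / get_flow_layer.
_LAYER_REGISTRY = {}

def regist_layer(layer_class):
    # store the class under its name so it can be looked up later
    _LAYER_REGISTRY[layer_class.__name__] = layer_class
    # return the class unchanged so this also works as a decorator
    return layer_class

def get_flow_layer(layer_name: str):
    return _LAYER_REGISTRY[layer_name]

@regist_layer
class DummyLayer:
    pass

assert get_flow_layer('DummyLayer') is DummyLayer
```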

Convolutions

Pointwise Convs


source

PointwiseConvs

 PointwiseConvs (in_features=1, out_features=1, feats=32, device='cpu',
                 name='pointwise_convs')

Pointwise convolutional module for neural networks.

This module consists of a series of pointwise convolutions with instance normalization and LeakyReLU activation functions.

Attributes:

- `name` (str): Name of the module.
- `device` (str): Device to run computations on.
- `body` (nn.Sequential): Sequential module containing the layers.

Methods:

- `_get_basic_module(in_ch, out_ch, k_size=1, stride=1, padding=1, negative_slope=0.2)`: Returns a basic convolutional module with instance normalization and LeakyReLU activation.
- `forward(x)`: Performs a forward pass through the module.
batch_size = 2
channels = 1
height = 2
width = 2
device = 'cuda'

x = torch.randn(batch_size, channels, height, width).to(device)
y = PointwiseConvs(in_features=channels, out_features=channels, feats=32, device=device)(x)

assert y.size() == x.size()
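For intuition, a pointwise (1×1) convolution is just a per-pixel linear map across channels. This pure-Python sketch shows only that mapping and omits the instance normalization and LeakyReLU that `PointwiseConvs` adds around each convolution:

```python
def pointwise_conv(x, weight):
    # x: [C_in][H][W] feature map, weight: [C_out][C_in] mixing matrix
    c_in, h, w = len(x), len(x[0]), len(x[0][0])
    c_out = len(weight)
    # each output pixel is a weighted sum over input channels at the same location
    return [[[sum(weight[o][i] * x[i][r][c] for i in range(c_in))
              for c in range(w)] for r in range(h)] for o in range(c_out)]

x = [[[1.0, 2.0], [3.0, 4.0]]]   # 1 channel, 2x2 image
w = [[2.0], [0.5]]               # 2 output channels from 1 input channel
y = pointwise_conv(x, w)
assert y[0] == [[2.0, 4.0], [6.0, 8.0]]
assert y[1] == [[0.5, 1.0], [1.5, 2.0]]
```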

Spatial Convs


source

SpatialConvs

 SpatialConvs (in_features=1, out_features=1, feats=32, receptive_field=9,
               device='cpu', name='spatial_convs')

Spatial convolutional module for neural networks.

This module consists of a series of spatial convolutions with ReLU activation functions.

Attributes:

- `name` (str): Name of the module.
- `device` (str): Device to run computations on.
- `receptive_field` (int): Size of the receptive field for spatial convolutions.
- `body` (nn.Sequential): Sequential module containing the layers.

Methods:

- `_get_basic_module(in_ch, out_ch, k_size=1, stride=1, padding=1, negative_slope=0.2)`: Returns a basic convolutional module with instance normalization and LeakyReLU activation.
- `forward(x)`: Performs a forward pass through the module.
batch_size = 2
channels = 1
height = 2
width = 2
device = 'cuda'

x = torch.randn(batch_size, channels, height, width).to(device)
y = SpatialConvs(in_features=channels, out_features=channels, feats=32, device=device)(x)

assert y.size() == x.size()

Normalizing Flows

Dequantization

Uniform Dequantization


source

UniformDequantization

 UniformDequantization (alpha=1e-05, num_bits=8, device='cpu',
                        name='uniform_dequantization')

Uniform dequantization layer.

Maps discrete inputs in `[0, 2^num_bits)` to continuous values in `(0, 1)` by adding uniform noise and rescaling; the inverse transformation maps continuous values back onto the discrete grid. The small constant `alpha` keeps dequantized values strictly inside the interval.

Attributes:

- `alpha` (float): Small offset keeping values away from the boundaries 0 and 1.
- `num_bits` (int): Bit depth of the discrete input (default 8, i.e. values in `[0, 256)`).
- `name` (str): Name of the module.
- `device` (str): Device to run computations on.

a = torch.randint(256,[4, 4])
b, _ = UniformDequantization()._forward_and_log_det_jacobian(a)
print(a)
print(b)
tensor([[104, 161,  55, 237],
        [118, 240, 207, 226],
        [176, 207, 213, 247],
        [241,  86, 108, 181]])
tensor([[0.4095, 0.6310, 0.2175, 0.9286],
        [0.4627, 0.9413, 0.8123, 0.8858],
        [0.6902, 0.8096, 0.8325, 0.9685],
        [0.9422, 0.3367, 0.4233, 0.7083]])
c = UniformDequantization()._inverse(b)
print(c)
tensor([[104., 161.,  55., 237.],
        [118., 240., 207., 226.],
        [176., 207., 213., 247.],
        [241.,  86., 108., 181.]])
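The mapping above can be sketched in plain Python. The division by `2**num_bits` matches the printed tensors; the exact role of `alpha` in the implementation is an assumption (here it nudges values away from the boundaries 0 and 1):

```python
import random

def dequantize(x, num_bits=8, alpha=1e-5):
    # add uniform noise u ~ U[0, 1) and rescale the integer into (0, 1)
    z = (x + random.random()) / 2 ** num_bits
    # assumed role of alpha: squeeze values slightly away from the boundaries
    return alpha + (1 - 2 * alpha) * z

def quantize(z, num_bits=8, alpha=1e-5):
    # undo the boundary squeeze, rescale, and floor away the noise
    return int((z - alpha) / (1 - 2 * alpha) * 2 ** num_bits)

x = 104
z = dequantize(x)
assert 0.0 < z < 1.0
assert quantize(z) == x   # the inverse recovers the original integer
```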

Variational Dequantization (TO DO)

Conditional Linear


source

ConditionalLinear

 ConditionalLinear (device='cpu', name='linear_transformation',
                    codes={'code': [1, 2, 3]})

Conditional linear transformation module.

Applies different scales and biases based on the setup code and exposure time provided with the input. Supports both forward and inverse transformations.

Attributes:

- `name` (str): Name of the transformation.
- `setup_code` (torch.Tensor): Predefined set of setup codes.
- `exp_times` (torch.Tensor): Predefined set of exposure times.
- `log_scale` (torch.nn.Parameter): Learnable log-scale parameters.
- `bias` (torch.nn.Parameter): Learnable bias parameters.

Methods:

- `_inverse(z, **kwargs)`: Performs the inverse transformation based on the input `z` and conditionals.
- `_forward_and_log_det_jacobian(x, **kwargs)`: Performs the forward transformation and computes the log determinant of the Jacobian.
batch_size = 2
channels = 1
height = 2
width = 2
device = 'cuda'

codes = {
        'exposure-time': torch.tensor([10, 50, 100], dtype=torch.float32, device=device),
        'optical-setup': torch.tensor([0, 1], dtype=torch.float32).to(device),
        # 'camera': torch.tensor([0, 1], dtype=torch.float32).to(device)
    }

x = torch.randn(batch_size, channels, height, width).to(device)
setup_idx = torch.tensor([1] * batch_size, dtype=torch.float32).to(device)
time_idx = torch.tensor([10] * batch_size, dtype=torch.float32).to(device)

kwargs = {'optical-setup': setup_idx, 'exposure-time': time_idx}

print(ComputeIndex(codes)(batch_size, **kwargs))

# Forward transformation
z, log_det_jacobian = ConditionalLinear(device=device, codes=codes)._forward_and_log_det_jacobian(x, **kwargs)
assert z.shape == x.shape
assert log_det_jacobian.shape == torch.Size([batch_size])

# Inverse transformation
x_reconstructed = ConditionalLinear(device=device, codes=codes)._inverse(z, **kwargs)
assert x_reconstructed.shape == x.shape

# Check if the reconstructed input is close to the original input
assert torch.allclose(x, x_reconstructed, atol=1e-5)
tensor([1., 1.], device='cuda:0')
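The underlying transform is a per-condition affine map. A minimal sketch with hypothetical parameter values (the real layer learns `log_scale` and `bias` per code combination and selects them via the conditioning index):

```python
import math

# hypothetical learned parameters: one (log_scale, bias) pair per condition index
log_scale = [0.1, -0.2]
bias = [0.5, -0.3]

def forward(x, idx):
    # z = exp(log_scale) * x + bias; for an n-element input the
    # log-determinant of the Jacobian is n * log_scale
    s = math.exp(log_scale[idx])
    return [s * v + bias[idx] for v in x], len(x) * log_scale[idx]

def inverse(z, idx):
    s = math.exp(log_scale[idx])
    return [(v - bias[idx]) / s for v in z]

x = [0.2, -1.0, 0.7]
z, log_det = forward(x, idx=1)
x_rec = inverse(z, idx=1)
assert all(abs(a - b) < 1e-9 for a, b in zip(x, x_rec))
assert abs(log_det - 3 * (-0.2)) < 1e-12
```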

Conditional Linear \(e^2\)


source

ConditionalLinearExp2

 ConditionalLinearExp2 (in_ch=1, device='cpu',
                        name='linear_transformation_exp2', codes={'code':
                        [1, 2, 3]})

Conditional linear transformation layer for flows, conditioned on specific ISO levels and setup codes.

This module applies a linear transformation to the input tensor, where the transformation parameters (log scale and bias) are conditioned based on the pixel size and setup code provided as input. The module supports both forward and inverse transformations.

Attributes:

- `name` (str): Name of the module.
- `device` (str): Device to run computations on.
- `pixel_size` (tensor): Pixel size used for conditioning.
- `cam_vals` (tensor): Predefined setup codes used for conditioning.
- `log_scale` (nn.Parameter): Learnable log-scale parameters for the transformation.
- `bias` (nn.Parameter): Learnable bias parameters for the transformation.

Methods:

- `_inverse(z, **kwargs)`: Applies the inverse transformation to the input tensor `z`.
- `_forward_and_log_det_jacobian(x, **kwargs)`: Applies the forward transformation to the input tensor `x` and computes the log determinant of the Jacobian of the transformation.
batch_size = 2
channels = 1
height = 2
width = 2
device = 'cuda'

codes = {
        'exposure-time': torch.tensor([10, 50, 100], dtype=torch.float32, device=device),
        'optical-setup': torch.tensor([0, 1], dtype=torch.float32).to(device),
        # 'camera': torch.tensor([0, 1], dtype=torch.float32).to(device)
    }

x = torch.randn(batch_size, channels, height, width).to(device)

kwargs = {
        'exposure-time': torch.tensor([50], dtype=torch.float32).to(device),
        'optical-setup': torch.tensor([0], dtype=torch.float32).to(device)
    }

# Forward transformation
z, log_det_jacobian = ConditionalLinearExp2(device=device, in_ch=x.shape[1], codes=codes)._forward_and_log_det_jacobian(x, **kwargs)
assert z.shape == x.shape
assert log_det_jacobian.shape == torch.Size([batch_size])

# Inverse transformation
x_reconstructed = ConditionalLinearExp2(device=device, in_ch=x.shape[1], codes=codes)._inverse(z, **kwargs)
assert x_reconstructed.shape == x.shape

# Check if the reconstructed input is close to the original input
assert torch.allclose(x, x_reconstructed, atol=1e-5)

Signal Dependent Conditional Linear


source

SignalDependentConditionalLinear

 SignalDependentConditionalLinear (meta_encoder, scale_and_bias, in_ch=1,
                                   device='cpu',
                                   name='signal_dependent_condition_linear',
                                   codes={'code': [1, 2, 3]}, encode_ch=3)

Signal-dependent conditional linear transformation layer for flows.

This module applies a linear transformation to the input tensor, where the transformation parameters (log scale and bias) are conditioned on the setup codes and exposure times provided as input features. The conditioning is performed using embeddings generated from meta encoders and scale-and-bias modules.

Attributes:

- `name` (str): Name of the module.
- `device` (str): Device to run computations on.
- `in_ch` (int): Number of input channels.
- `setup_codes` (tensor): Predefined setup codes used for conditioning.
- `exp_times` (tensor): Predefined exposure times used for conditioning.
- `encode_ch` (int): Number of channels in the embeddings generated by the meta encoder.
- `meta_encoder` (nn.Module): Meta encoder module that generates embeddings from the conditioning inputs.
- `scale_and_bias` (nn.Module): Module to compute scale and bias parameters based on embeddings and input features.

Methods:

- `_get_embeddings(x, **kwargs)`: Generates embeddings from the conditioning inputs and concatenates them with additional features.
- `_inverse(z, **kwargs)`: Applies the inverse transformation to the input tensor `z`.
- `_forward_and_log_det_jacobian(x, **kwargs)`: Applies the forward transformation to the input tensor `x` and computes the log determinant of the Jacobian.
from Noise2Model.networks import ResidualNet
device = 'cuda'

x = torch.randn(batch_size, channels, height, width).to(device)

kwargs = {
        'exposure-time': torch.tensor([50], dtype=torch.float32).to(device),
        'optical-setup': torch.tensor([0], dtype=torch.float32).to(device),
        'clean': x,
    }
codes = {
        'exposure-time': torch.tensor([10, 50, 100], dtype=torch.float32, device=device),
        'optical-setup': torch.tensor([0, 1], dtype=torch.float32).to(device),
        # 'camera': torch.tensor([0, 1], dtype=torch.float32).to(device)
    }

meta_encoder = lambda feats_in, feats_out: ResidualNet(
    in_features=feats_in, out_features=feats_out, hidden_features=1,
    num_blocks=3, use_batch_norm=True, dropout_probability=0.0).to(device)
scale_and_bias = lambda feats_in, feats_out: PointwiseConvs(
    in_features=feats_in, out_features=feats_out, device=device, feats=1)

layer = SignalDependentConditionalLinear(meta_encoder, scale_and_bias,
                                         device=device, in_ch=x.shape[1], codes=codes)

z, log_abs_det_J_inv = layer._forward_and_log_det_jacobian(x, **kwargs)

assert z.device == x.device
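Conceptually, the scale (and bias) become functions of the clean signal rather than fixed per-condition constants. In this sketch a toy affine predictor stands in for the meta-encoder and scale-and-bias networks (all parameter values are illustrative, not the layer's actual ones):

```python
import math

def sdn_forward(x, clean, a=0.1, b=-1.0):
    # per-element log-scale predicted from the clean signal
    log_s = [a * c + b for c in clean]
    z = [math.exp(ls) * xi for ls, xi in zip(log_s, x)]
    return z, sum(log_s)  # log|det J| is the sum of per-element log-scales

def sdn_inverse(z, clean, a=0.1, b=-1.0):
    log_s = [a * c + b for c in clean]
    return [zi / math.exp(ls) for ls, zi in zip(log_s, z)]

x = [0.3, -0.8]
clean = [0.5, 0.2]
z, log_det = sdn_forward(x, clean)
assert all(abs(p - q) < 1e-12 for p, q in zip(x, sdn_inverse(z, clean)))
```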

Structure-Aware Conditional Linear Layer


source

StructureAwareConditionalLinearLayer

 StructureAwareConditionalLinearLayer (meta_encoder, structure_encoder,
                                       in_ch=1, device='cpu',
                                       name='structure_aware_condition_linear',
                                       codes={'code': [1, 2, 3]})

Structure-aware conditional linear transformation layer for flows.

This module applies a linear transformation to the input tensor, where the transformation parameters (log scale and bias) are conditioned on ISO levels and smartphone codes provided as input features. The conditioning involves both meta encoding and structure encoding of input features.

Attributes:

- `in_ch` (int): Number of input channels.
- `iso_vals` (tensor): Predefined ISO levels used for conditioning.
- `cam_vals` (tensor): Predefined smartphone codes used for conditioning.
- `meta_encoder` (nn.Module): Meta encoder module to generate embeddings from ISO and camera inputs.
- `structure_encoder` (nn.Module): Structure encoder module to generate embeddings from input features.

Methods:

- `_get_embeddings(x, **kwargs)`: Generates embeddings from ISO-level and smartphone-code inputs and combines them using structure encoding.
- `_inverse(z, **kwargs)`: Applies the inverse transformation to the input tensor `z`.
- `_forward_and_log_det_jacobian(x, **kwargs)`: Applies the forward transformation to the input tensor `x` and computes the log determinant of the Jacobian.
from Noise2Model.networks import ResidualNet
device = 'cuda'

x = torch.randn(batch_size, channels, height, width).to(device)

kwargs = {
        'exposure-time': torch.tensor([50], dtype=torch.float32).to(device),
        'optical-setup': torch.tensor([0], dtype=torch.float32).to(device),
        'clean': x,
    }
codes = {
        'exposure-time': torch.tensor([10, 50, 100], dtype=torch.float32, device=device),
        'optical-setup': torch.tensor([0, 1], dtype=torch.float32).to(device),
        # 'camera': torch.tensor([0, 1], dtype=torch.float32).to(device)
    }

meta_encoder = lambda feats_in, feats_out: ResidualNet(
    in_features=feats_in, out_features=feats_out, hidden_features=1,
    num_blocks=3, use_batch_norm=True, dropout_probability=0.0).to(device)
structure_encoder = lambda feats_in, feats_out: SpatialConvs(
    in_features=feats_in, out_features=feats_out, device=device, feats=1)

layer = StructureAwareConditionalLinearLayer(meta_encoder, structure_encoder,
                                             device=device, in_ch=x.shape[1], codes=codes)

z, log_abs_det_J_inv = layer._forward_and_log_det_jacobian(x, **kwargs)

assert z.device == x.device

Noise Extraction


source

NoiseExtraction

 NoiseExtraction (device='cpu', name='noise_extraction')

Module for noise extraction in neural networks.

This module extracts noise by adding or subtracting the clean signal from the input.

Attributes:

- `name` (str): Name of the module.
- `device` (str): Device to run computations on.

Methods:

- `_inverse(z, **kwargs)`: Computes the inverse operation by adding the clean signal to `z`.
- `_forward_and_log_det_jacobian(x, **kwargs)`: Computes the forward operation by subtracting the clean signal from `x` and returns a zero log-determinant Jacobian.
device = 'cuda'

x = torch.rand(batch_size, channels, height, width).to(device)
print('x:', x)

kwargs = {
        'exposure-time': torch.tensor([50], dtype=torch.float32).to(device),
        'optical-setup': torch.tensor([0], dtype=torch.float32).to(device),
        'clean': torch.rand(batch_size, channels, height, width).to(device),
    }

print('clean:', kwargs['clean'])

layer = NoiseExtraction(device=device)

z, _ = layer._forward_and_log_det_jacobian(x, **kwargs)
print('\n\n z:', z)

assert z.device == x.device
x: tensor([[[[0.9044, 0.5942],
          [0.6439, 0.8483]]],


        [[[0.7072, 0.7334],
          [0.7445, 0.8308]]]], device='cuda:0')
clean: tensor([[[[0.3914, 0.9923],
          [0.3723, 0.7225]]],


        [[[0.9459, 0.6627],
          [0.8506, 0.3440]]]], device='cuda:0')


 z: tensor([[[[ 0.5130, -0.3981],
          [ 0.2717,  0.1258]]],


        [[[-0.2387,  0.0707],
          [-0.1061,  0.4868]]]], device='cuda:0')
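Since the layer only shifts by the clean signal, its Jacobian is the identity and the log-determinant is zero; in plain Python:

```python
def noise_forward(x, clean):
    # z = x - clean; a pure shift has unit Jacobian, so log|det J| = 0
    return [xi - ci for xi, ci in zip(x, clean)], 0.0

def noise_inverse(z, clean):
    # x = z + clean
    return [zi + ci for zi, ci in zip(z, clean)]

x = [0.9044, 0.5942]
clean = [0.3914, 0.9923]
z, log_det = noise_forward(x, clean)
assert log_det == 0.0
assert all(abs(a - b) < 1e-12 for a, b in zip(noise_inverse(z, clean), x))
```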

Noise Flow Layers

# channels = 1
# hidden_channels = 16

# device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# x = torch.randn(1, channels, 16, 16).to(device)
# print(x.device)

# # tst =  AffineSdn(x.shape[1:]).to(device)
# tst = Unconditional(channels=x.shape[1],hidden_channels = 16,split_mode='channel' if x.shape[1] != 1 else 'checkerboard').to(device)
# # tst = Gain(x.shape[1:]).to(device)  
# print(tst)
# kwargs = {}; kwargs['clean'] = x
# y, _ = tst(x,**kwargs)
# test_eq(y.shape, x.shape)