iris.nodes.normalization package

Submodules

iris.nodes.normalization.common module

iris.nodes.normalization.common.correct_orientation(pupil_points: ndarray, iris_points: ndarray, eye_orientation: float) Tuple[ndarray, ndarray][source]

Correct orientation by changing the starting angle in pupil and iris points’ arrays.

Parameters:
  • pupil_points (np.ndarray) – Pupil boundary points’ array. NumPy array of shape (num_points = 360, xy_coords = 2).

  • iris_points (np.ndarray) – Iris boundary points’ array. NumPy array of shape (num_points = 360, xy_coords = 2).

  • eye_orientation (float) – Eye orientation angle in radians.

Returns:

Tuple of boundary point arrays (pupil_points, iris_points) rotated by the eye_orientation angle.

Return type:

Tuple[np.ndarray, np.ndarray]
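
Conceptually, the correction rolls both boundary arrays so that the point at the orientation angle becomes the first entry. A minimal sketch, assuming 360 points (one per degree); the shift direction is an assumption, not the library's exact code:

```python
import numpy as np

def correct_orientation_sketch(pupil_points, iris_points, eye_orientation):
    """Illustrative sketch: with 360 boundary points (one per degree), change
    the starting angle by rolling both arrays by the orientation angle.
    The shift direction here is an assumption."""
    shift = round(np.degrees(eye_orientation))
    return (np.roll(pupil_points, -shift, axis=0),
            np.roll(iris_points, -shift, axis=0))

# Unit-circle boundary sampled at 1-degree steps
angles = np.deg2rad(np.arange(360))
boundary = np.stack([np.cos(angles), np.sin(angles)], axis=1)
rotated, _ = correct_orientation_sketch(boundary, boundary.copy(), np.deg2rad(10))
# rotated[0] is the point that previously sat at the 10-degree position
```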

iris.nodes.normalization.common.generate_iris_mask(extrapolated_contours: GeometryPolygons, noise_mask: ndarray) ndarray[source]

Generate the iris mask by first finding the intersection of the extrapolated iris and eyeball contours, then removing from the resulting mask all pixels for which noise_mask is True.

Parameters:
  • extrapolated_contours (GeometryPolygons) – Iris polygon vertices.

  • noise_mask (np.ndarray) – Noise mask.

Returns:

Iris mask.

Return type:

np.ndarray

iris.nodes.normalization.common.get_pixel_or_default(image: ndarray, pixel_x: float, pixel_y: float, default: bool | int) bool | int[source]

Get the value of a pixel in the image 2D array.

Parameters:
  • image (np.ndarray) – 2D Array.

  • pixel_x (float) – Pixel x coordinate.

  • pixel_y (float) – Pixel y coordinate.

  • default (Union[bool, int]) – Default value to return when (pixel_x, pixel_y) is out of bounds.

Returns:

Pixel value.

Return type:

Union[bool, int]
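
The behavior amounts to a bounds-checked array access. A minimal sketch (illustrative, not the library's code), assuming x indexes columns and y indexes rows:

```python
import numpy as np

def get_pixel_or_default_sketch(image, pixel_x, pixel_y, default):
    """Return image[y, x] when (x, y) lies inside the array, else `default`.
    x indexes columns and y indexes rows, the usual image convention."""
    h, w = image.shape
    x, y = int(pixel_x), int(pixel_y)
    if 0 <= x < w and 0 <= y < h:
        return image[y, x]
    return default

img = np.arange(12).reshape(3, 4)
print(get_pixel_or_default_sketch(img, 1, 2, 0))    # in bounds: img[2, 1] = 9
print(get_pixel_or_default_sketch(img, 10, 0, -1))  # out of bounds: -1
```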

iris.nodes.normalization.common.getgrids(res_in_r: NonNegativeInt, p2i_ratio: NonNegativeInt) ndarray[source]

Generate radius grids for nonlinear normalization based on p2i_ratio (pupil_to_iris ratio).

Parameters:
  • res_in_r (NonNegativeInt) – Normalized image r resolution.

  • p2i_ratio (NonNegativeInt) – Pupil-to-iris ratio, in range [0, 100].

Returns:

Nonlinear sampling grids for normalization.

Return type:

np.ndarray
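
One plausible squared-spacing scheme, shown purely for illustration: choose radii so that each sampling ring covers equal area between the pupil radius ratio q = p2i_ratio / 100 and 1. The library's exact grid construction may differ:

```python
import numpy as np

def getgrids_sketch(res_in_r, p2i_ratio):
    """Hypothetical nonlinear (squared) sampling grid: radii spaced so that
    each ring covers equal area between the pupil radius ratio q and 1.
    Illustrates the squared idea, not the library's exact spacing."""
    q = p2i_ratio / 100.0                       # pupil-to-iris radius ratio
    k = (np.arange(res_in_r) + 0.5) / res_in_r  # ring midpoints in [0, 1]
    return np.sqrt(q**2 + (1 - q**2) * k)       # equal-area radii

grids = getgrids_sketch(128, 30)
# 128 monotonically increasing radii between 0.3 and 1.0
```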

iris.nodes.normalization.common.interpolate_pixel_intensity(image: ndarray, pixel_coords: Tuple[float, float]) float[source]

Perform bilinear interpolation to estimate pixel intensity in a given location.

Parameters:
  • image (np.ndarray) – Original, not normalized image.

  • pixel_coords (Tuple[float, float]) – Pixel coordinates.

Returns:

Interpolated pixel intensity.

Return type:

float

Reference:

[1] https://en.wikipedia.org/wiki/Bilinear_interpolation
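
The textbook form of bilinear interpolation from [1] can be sketched as two blends along x followed by one along y (illustrative, not the library's exact code):

```python
import numpy as np

def bilinear_sketch(image, pixel_coords):
    """Textbook bilinear interpolation: blend the four neighbouring pixels,
    first along x, then along y. Coordinates are (x, y), x indexing columns."""
    x, y = pixel_coords
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, image.shape[1] - 1)
    y1 = min(y0 + 1, image.shape[0] - 1)
    dx, dy = x - x0, y - y0
    top = image[y0, x0] * (1 - dx) + image[y0, x1] * dx     # blend along x
    bottom = image[y1, x0] * (1 - dx) + image[y1, x1] * dx
    return top * (1 - dy) + bottom * dy                     # blend along y

img = np.array([[0.0, 10.0], [20.0, 30.0]])
print(bilinear_sketch(img, (0.5, 0.5)))  # average of all four corners: 15.0
```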

iris.nodes.normalization.common.normalize_all(image: ndarray, iris_mask: ndarray, src_points: ndarray) Tuple[ndarray, ndarray][source]

Normalize all points of an image using nearest-neighbor interpolation.

Parameters:
  • image (np.ndarray) – Original, not normalized image.

  • iris_mask (np.ndarray) – Iris class segmentation mask.

  • src_points (np.ndarray) – Original input image points.

Returns:

Tuple with normalized image and mask.

Return type:

Tuple[np.ndarray, np.ndarray]
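
The nearest-neighbor mapping can be sketched as rounding each source coordinate and copying the corresponding image and mask values, with out-of-bounds locations left as 0 / False. This is an illustrative reimplementation under those assumptions, not the library's code:

```python
import numpy as np

def normalize_all_sketch(image, iris_mask, src_points):
    """Nearest-neighbour sketch: round each source coordinate to the closest
    pixel and copy its image / mask value, leaving out-of-bounds locations
    as 0 (image) and False (mask)."""
    h, w = image.shape
    xs = np.rint(src_points[..., 0]).astype(int)
    ys = np.rint(src_points[..., 1]).astype(int)
    inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    norm_image = np.zeros(src_points.shape[:-1], dtype=image.dtype)
    norm_mask = np.zeros(src_points.shape[:-1], dtype=bool)
    norm_image[inside] = image[ys[inside], xs[inside]]
    norm_mask[inside] = iris_mask[ys[inside], xs[inside]]
    return norm_image, norm_mask

image = np.arange(9).reshape(3, 3)
mask = np.ones((3, 3), dtype=bool)
src = np.array([[[1.2, 0.9], [5.0, 5.0]]])  # one in-bounds point, one outside
norm_image, norm_mask = normalize_all_sketch(image, mask, src)
# norm_image -> [[4, 0]]; norm_mask -> [[True, False]]
```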

iris.nodes.normalization.common.to_uint8(image: ndarray) ndarray[source]

Map normalized image values from [0, 1] range to [0, 255] and cast dtype to np.uint8.

Parameters:

image (np.ndarray) – Normalized iris.

Returns:

Normalized iris with modified values.

Return type:

np.ndarray
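
The mapping described above can be sketched in one line (illustrative; the library may clip or round differently):

```python
import numpy as np

def to_uint8_sketch(image):
    """Scale a float image from [0, 1] to [0, 255] and cast to uint8,
    mirroring what the docstring describes. astype truncates fractions."""
    return (image * 255).astype(np.uint8)

norm = np.array([[0.0, 0.5, 1.0]])
print(to_uint8_sketch(norm))  # [[  0 127 255]]
```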

iris.nodes.normalization.linear_normalization module

class iris.nodes.normalization.linear_normalization.LinearNormalization(res_in_r: PositiveInt = 128, oversat_threshold: PositiveInt = 254)[source]

Bases: Algorithm

Implementation of a normalization algorithm which uses linear transformation to map image pixels.

Algorithm steps:
  1. Create linear grids of sampling radii based on parameters: res_in_r (height) and the number of extrapolated iris and pupil points from extrapolated_contours (width).

  2. Compute the mapping between the normalized image pixel location and the original image location.

  3. Obtain pixel values of normalized image using Nearest Neighbor interpolation.
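
Step 1 amounts to linear interpolation between the pupil and iris boundary points. A sketch under that reading (illustrative, not the library's code; the toy boundaries are hypothetical):

```python
import numpy as np

def linear_grids_sketch(pupil_points, iris_points, res_in_r):
    """Illustrative sketch of step 1: res_in_r rows of sampling points,
    spaced linearly between the pupil boundary (row 0) and the iris
    boundary (last row)."""
    t = np.linspace(0.0, 1.0, res_in_r)[:, None, None]
    return (1 - t) * pupil_points[None] + t * iris_points[None]

pupil = np.zeros((360, 2))        # toy pupil boundary
iris_b = np.full((360, 2), 10.0)  # toy iris boundary
src_points = linear_grids_sketch(pupil, iris_b, 128)
# src_points has shape (128, 360, 2); rows step linearly from pupil to iris
```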

class Parameters(*, res_in_r: PositiveInt, oversat_threshold: PositiveInt)[source]

Bases: Parameters

Parameters class for LinearNormalization.

oversat_threshold: PositiveInt
res_in_r: PositiveInt
run(image: IRImage, noise_mask: NoiseMask, extrapolated_contours: GeometryPolygons, eye_orientation: EyeOrientation) NormalizedIris[source]

Normalize iris using linear transformation when sampling points from cartesian to polar coordinates.

Parameters:
  • image (IRImage) – Original, not normalized image.

  • noise_mask (NoiseMask) – Noise mask.

  • extrapolated_contours (GeometryPolygons) – Extrapolated iris and pupil contours.

  • eye_orientation (EyeOrientation) – Eye orientation angle.
Returns:

NormalizedIris object containing normalized image and iris mask.

Return type:

NormalizedIris

iris.nodes.normalization.nonlinear_normalization module

class iris.nodes.normalization.nonlinear_normalization.NonlinearNormalization(res_in_r: PositiveInt = 128, oversat_threshold: PositiveInt = 254)[source]

Bases: Algorithm

Implementation of a normalization algorithm which uses nonlinear squared transformation to map image pixels.

Algorithm steps:
  1. Create nonlinear grids of sampling radii based on parameters: res_in_r, intermediate_radiuses.

  2. Compute the mapping between the normalized image pixel location and the original image location.

  3. Obtain pixel values of normalized image using bilinear interpolation.
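
Step 1 follows the same boundary interpolation as the linear case, but at nonlinear radii. The square-root spacing below is a hypothetical choice to illustrate the idea, not the library's actual intermediate_radiuses:

```python
import numpy as np

def nonlinear_grids_sketch(pupil_points, iris_points, intermediate_radiuses):
    """Illustrative sketch of step 1: interpolate between the pupil and iris
    boundaries at nonlinear radii rather than evenly spaced ones."""
    t = np.asarray(intermediate_radiuses)[:, None, None]
    return (1 - t) * pupil_points[None] + t * iris_points[None]

# Hypothetical square-root spacing: rings sit closer together near the iris edge
radii = np.sqrt(np.linspace(0.0, 1.0, 5))
pupil = np.zeros((360, 2))
iris_b = np.full((360, 2), 10.0)
src_points = nonlinear_grids_sketch(pupil, iris_b, radii)
```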

class Parameters(*, res_in_r: PositiveInt, intermediate_radiuses: Collection[float], oversat_threshold: PositiveInt)[source]

Bases: Parameters

Parameters class for NonlinearNormalization.

intermediate_radiuses: Collection[float]
oversat_threshold: PositiveInt
res_in_r: PositiveInt
run(image: IRImage, noise_mask: NoiseMask, extrapolated_contours: GeometryPolygons, eye_orientation: EyeOrientation) NormalizedIris[source]

Normalize iris using nonlinear transformation when sampling points from cartesian to polar coordinates.

Parameters:
  • image (IRImage) – Original, not normalized image.

  • noise_mask (NoiseMask) – Noise mask.

  • extrapolated_contours (GeometryPolygons) – Extrapolated iris and pupil contours.

  • eye_orientation (EyeOrientation) – Eye orientation angle.
Returns:

NormalizedIris object containing normalized image and iris mask.

Return type:

NormalizedIris

iris.nodes.normalization.perspective_normalization module

class iris.nodes.normalization.perspective_normalization.PerspectiveNormalization(res_in_phi: int = 1024, res_in_r: int = 128, skip_boundary_points: int = 10, intermediate_radiuses: Collection[float] = array([0., 0.11111111, 0.22222222, 0.33333333, 0.44444444, 0.55555556, 0.66666667, 0.77777778, 0.88888889, 1.]), oversat_threshold: int = 254)[source]

Bases: Algorithm

Implementation of a normalization algorithm which uses perspective transformation to map image pixels.

Algorithm steps:
  1. Create a grid of trapezoids around the iris in the original image, based on the following algorithm parameters: res_in_phi, res_in_r, intermediate_radiuses.

  2. Create a grid of rectangles in the normalized image, one corresponding to each trapezoid.

  3. For each trapezoid-rectangle pair, compute the perspective matrix that maps a normalized image pixel location to its original image location.

  4. Map each normalized image pixel to original image pixel based on estimated perspective matrix and perform bilinear interpolation if necessary.
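
Step 3 boils down to estimating a homography from four point correspondences. A standard direct-linear-transform sketch (with h33 fixed to 1), not the library's exact routine; the rectangle and trapezoid below are toy data:

```python
import numpy as np

def perspective_matrix_sketch(src_quad, dst_quad):
    """Solve the 8 unknowns of a homography H mapping four src corners onto
    four dst corners (direct linear transform with h33 = 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_quad, dst_quad):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Toy rectangle in the normalized image -> toy trapezoid around the iris
rect = [(0, 0), (1, 0), (1, 1), (0, 1)]
trap = [(0, 0), (2, 0), (1.5, 1), (0.5, 1)]
H = perspective_matrix_sketch(rect, trap)
corner = H @ np.array([1.0, 0.0, 1.0])
# corner[:2] / corner[2] lands on the mapped trapezoid vertex (2, 0)
```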

class Parameters(*, res_in_phi: ConstrainedIntValue, res_in_r: ConstrainedIntValue, skip_boundary_points: ConstrainedIntValue, intermediate_radiuses: Collection[float], oversat_threshold: ConstrainedIntValue)[source]

Bases: Parameters

Parameters class for PerspectiveNormalization.

classmethod check_intermediate_radiuses(v: Collection[float]) Collection[float][source]

Check intermediate_radiuses parameter.

Parameters:
  • cls (type) – PerspectiveNormalization.Parameters class.

  • v (Collection[float]) – Variable value to check.

Raises:

NormalizationError – Raised if the number of radiuses is invalid, or if the min value is less than 0.0 or greater than 1.0.

Returns:

intermediate_radiuses value passed for further processing.

Return type:

Collection[float]

intermediate_radiuses: Collection[float]
oversat_threshold: int
res_in_phi: int
res_in_r: int
skip_boundary_points: int
static cartesian2homogeneous(points: List[ndarray]) ndarray[source]

Convert points in cartesian coordinates to homogeneous coordinates.

Parameters:

points (List[np.ndarray]) – Points in cartesian coordinates. Array should be in format: [[x values], [y values]].

Returns:

Points in homogeneous coordinates. Returned array will have format: [[x values], [y values], [1 … 1]].

Return type:

np.ndarray

static homogeneous2cartesian(points: ndarray) ndarray[source]

Convert points in homogeneous coordinates to cartesian coordinates.

Parameters:

points (np.ndarray) – Points in homogeneous coordinates. Array should be in format: [[x values], [y values], [perspective scale values]].

Returns:

Points in cartesian coordinates. Returned array will have format: [[x values], [y values]].

Return type:

np.ndarray
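
The two conversions are each a one-liner in NumPy. A self-contained sketch of both, in the array formats the docstrings describe (illustrative, not the library's code):

```python
import numpy as np

def cartesian2homogeneous_sketch(points):
    """Append a row of ones: [[x...], [y...]] -> [[x...], [y...], [1...]]."""
    xy = np.asarray(points, dtype=float)
    return np.vstack([xy, np.ones((1, xy.shape[1]))])

def homogeneous2cartesian_sketch(points):
    """Divide the x and y rows by the perspective-scale row and drop it."""
    return points[:2] / points[2]

pts = np.array([[1.0, 2.0], [3.0, 4.0]])   # two points: (1, 3) and (2, 4)
hom = cartesian2homogeneous_sketch(pts)    # adds the [1, 1] row
back = homogeneous2cartesian_sketch(hom)   # round-trips to the original
```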

run(image: IRImage, noise_mask: NoiseMask, extrapolated_contours: GeometryPolygons, eye_orientation: EyeOrientation) NormalizedIris[source]

Normalize iris using perspective transformation estimated for every region of an image separately.

Parameters:
  • image (IRImage) – Original, not normalized image.

  • noise_mask (NoiseMask) – Noise mask.

  • extrapolated_contours (GeometryPolygons) – Extrapolated iris and pupil contours.

  • eye_orientation (EyeOrientation) – Eye orientation angle.
Returns:

NormalizedIris object containing normalized image and iris mask.

Return type:

NormalizedIris

Raises:

NormalizationError – Raised if the numbers of iris and pupil points differ.

Module contents