iris.nodes.normalization package¶
Submodules¶
iris.nodes.normalization.linear_normalization module¶
- class iris.nodes.normalization.linear_normalization.LinearNormalization(res_in_r: PositiveInt = 128, oversat_threshold: PositiveInt = 254)[source]¶
- Bases: Algorithm
- Implementation of a normalization algorithm which uses linear transformation to map image pixels.
- Algorithm steps:
- Create linear grids of sampling radii based on parameters: res_in_r (height) and the number of extrapolated iris and pupil points from extrapolated_contours (width). 
- Compute the mapping between the normalized image pixel location and the original image location. 
- Obtain pixel values of normalized image using Nearest Neighbor interpolation. 
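The radial grid from the steps above can be sketched with NumPy. This is an illustrative reimplementation under assumed array shapes (360 boundary points stored as (x, y) pairs), with hypothetical helper names, not the library's actual code:

```python
import numpy as np

def linear_sampling_radii(res_in_r: int = 128) -> np.ndarray:
    """Step 1 sketch: evenly spaced sampling radii between the pupil (0.0)
    and iris (1.0) boundaries."""
    return np.linspace(0.0, 1.0, res_in_r)

def sample_between_boundaries(pupil_pts: np.ndarray, iris_pts: np.ndarray,
                              radii: np.ndarray) -> np.ndarray:
    """Step 2 sketch: for each radius r, interpolate linearly between
    matching pupil and iris boundary points, p + r * (i - p).
    Output shape: (len(radii), num_points, 2)."""
    return pupil_pts[None] + radii[:, None, None] * (iris_pts - pupil_pts)[None]

# Toy boundaries: pupil collapsed at the origin, iris at coordinate value 10.
radii = linear_sampling_radii(5)
pupil = np.zeros((360, 2))
iris = np.full((360, 2), 10.0)
grid = sample_between_boundaries(pupil, iris, radii)
```

Each row of the resulting grid is then mapped back into the original image and sampled with nearest-neighbor interpolation, as described in step 3.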
 
 - class Parameters(*, res_in_r: PositiveInt, oversat_threshold: PositiveInt)[source]¶
- Bases: Parameters
- Parameters class for LinearNormalization.
- oversat_threshold: PositiveInt¶
 - res_in_r: PositiveInt¶
 
 - run(image: IRImage, noise_mask: NoiseMask, extrapolated_contours: GeometryPolygons, eye_orientation: EyeOrientation) NormalizedIris[source]¶
- Normalize iris using linear transformation when sampling points from cartesian to polar coordinates.
- Parameters:
- image (IRImage) – Input image to normalize. 
- noise_mask (NoiseMask) – Noise mask. 
- extrapolated_contours (GeometryPolygons) – Extrapolated contours. 
- eye_orientation (EyeOrientation) – Eye orientation angle. 
 
- Returns:
- NormalizedIris object containing normalized image and iris mask. 
- Return type:
- NormalizedIris
 
 
iris.nodes.normalization.nonlinear_normalization module¶
- class iris.nodes.normalization.nonlinear_normalization.NonlinearNormalization(res_in_r: PositiveInt = 128, oversat_threshold: PositiveInt = 254, method: NonlinearType = NonlinearType.default)[source]¶
- Bases: Algorithm
- Implementation of a normalization algorithm which uses nonlinear squared transformation to map image pixels.
- Algorithm steps:
- Create nonlinear grids of sampling radii based on parameters: res_in_r, intermediate_radiuses. 
- Compute the mapping between the normalized image pixel location and the original image location. 
- Obtain pixel values of the normalized image using bilinear interpolation. 
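For comparison with the linear case, a nonlinear radial profile can be sketched as below. The exact intermediate_radiuses produced by NonlinearType.default may differ; the square-root spacing here is only one illustrative choice, which makes radial spacing shrink toward the outer (iris) boundary:

```python
import numpy as np

def nonlinear_sampling_radii(res_in_r: int = 128) -> np.ndarray:
    """Sketch of a nonlinear (squared-model) radial grid: evenly spaced
    values passed through a square root, so consecutive radii get closer
    together toward the iris boundary. Illustrative only."""
    return np.sqrt(np.linspace(0.0, 1.0, res_in_r))

radii = nonlinear_sampling_radii(5)
```

The rest of the pipeline (mapping grid points back to the original image, then interpolating) is shared with the linear variant.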
 
- References
- [1] H J Wyatt, A ‘minimum-wear-and-tear’ meshwork for the iris, https://core.ac.uk/download/pdf/82071136.pdf
- [2] W-S Chen, J-C Li, Fast Non-linear Normalization Algorithm for Iris Recognition, https://www.scitepress.org/papers/2010/28409/28409.pdf
- class Parameters(*, res_in_r: PositiveInt, intermediate_radiuses: Collection[float], oversat_threshold: PositiveInt)[source]¶
- Bases: Parameters
- Parameters class for NonlinearNormalization.
- intermediate_radiuses: Collection[float]¶
 - oversat_threshold: PositiveInt¶
 - res_in_r: PositiveInt¶
 
 - run(image: IRImage, noise_mask: NoiseMask, extrapolated_contours: GeometryPolygons, eye_orientation: EyeOrientation) NormalizedIris[source]¶
- Normalize iris using nonlinear transformation when sampling points from cartesian to polar coordinates.
- Parameters:
- image (IRImage) – Input image to normalize. 
- noise_mask (NoiseMask) – Noise mask. 
- extrapolated_contours (GeometryPolygons) – Extrapolated contours. 
- eye_orientation (EyeOrientation) – Eye orientation angle. 
 
- Returns:
- NormalizedIris object containing normalized image and iris mask. 
- Return type:
- NormalizedIris
 
 
iris.nodes.normalization.perspective_normalization module¶
- class iris.nodes.normalization.perspective_normalization.PerspectiveNormalization(res_in_phi: PositiveInt = 1024, res_in_r: PositiveInt = 128, skip_boundary_points: PositiveInt = 10, intermediate_radiuses: Collection[float] = array([0., 0.11111111, 0.22222222, 0.33333333, 0.44444444, 0.55555556, 0.66666667, 0.77777778, 0.88888889, 1.]), oversat_threshold: PositiveInt = 254)[source]¶
- Bases: Algorithm
- Implementation of a normalization algorithm which uses perspective transformation to map image pixels.
- Algorithm steps:
- Create a grid of trapezoids around the iris in the original image based on the following algorithm parameters: res_in_phi, res_in_r, intermediate_radiuses. 
- Create a grid of rectangles in the normalized image, one corresponding to each trapezoid. 
- For each trapezoid and its corresponding rectangle, compute a perspective matrix that estimates the original image location of each normalized image pixel. 
- Map each normalized image pixel to an original image pixel based on the estimated perspective matrix and perform bilinear interpolation if necessary. 
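The steps above hinge on estimating a perspective (homography) matrix per trapezoid and rectangle pair. A minimal sketch of that estimation from four point correspondences, playing the same role as cv2.getPerspectiveTransform, via a direct linear solve:

```python
import numpy as np

def perspective_matrix(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Estimate the 3x3 perspective matrix mapping 4 src points onto 4 dst
    points by solving the 8 linear homography equations (with h22 = 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, dtype=float), np.asarray(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def apply(M: np.ndarray, point) -> np.ndarray:
    """Apply the matrix in homogeneous coordinates, then de-homogenize."""
    x, y, w = M @ np.array([point[0], point[1], 1.0])
    return np.array([x / w, y / w])

# A rectangle in the normalized image mapped onto a trapezoid in the original.
rect = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
trap = np.array([[0, 0], [4, 0], [3, 2], [1, 2]], dtype=float)
M = perspective_matrix(rect, trap)
```

With M in hand, any interior point of the rectangle can be projected into the corresponding trapezoid and sampled there.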
 
 - class Parameters(*, res_in_phi: PositiveInt, res_in_r: PositiveInt, skip_boundary_points: PositiveInt, intermediate_radiuses: Collection[float], oversat_threshold: PositiveInt)[source]¶
- Bases: Parameters
- Parameters class for PerspectiveNormalization.
- classmethod check_intermediate_radiuses(v: Collection[float]) Collection[float][source]¶
- Check intermediate_radiuses parameter.
- Parameters:
- cls (type) – PerspectiveNormalization.Parameters class. 
- v (Collection[float]) – Variable value to check. 
 
- Raises:
- NormalizationError – Raised if the number of radiuses is invalid or a value is less than 0.0 or greater than 1.0. 
- Returns:
- intermediate_radiuses value passed for further processing. 
- Return type:
- Collection[float] 
 
 - intermediate_radiuses: Collection[float]¶
 - oversat_threshold: PositiveInt¶
 - res_in_phi: PositiveInt¶
 - res_in_r: PositiveInt¶
 - skip_boundary_points: PositiveInt¶
 
 - static cartesian2homogeneous(points: List[ndarray]) ndarray[source]¶
- Convert points in cartesian coordinates to homogeneous coordinates.
- Parameters:
- points (List[np.ndarray]) – Points in cartesian coordinates. Array should be in format: [[x values], [y values]]. 
- Returns:
- Points in homogeneous coordinates. Returned array will have format: [[x values], [y values], [1 … 1]]. 
- Return type:
- np.ndarray 
 
 - static homogeneous2cartesian(points: ndarray) ndarray[source]¶
- Convert points in homogeneous coordinates to cartesian coordinates.
- Parameters:
- points (np.ndarray) – Points in homogeneous coordinates. Array should be in format: [[x values], [y values], [perspective scale values]]. 
- Returns:
- Points in cartesian coordinates. Returned array will have format: [[x values], [y values]]. 
- Return type:
- np.ndarray 
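The two static helpers follow the documented array formats; a minimal NumPy sketch of the same conversions, not the library's code:

```python
import numpy as np

def cartesian2homogeneous(points):
    """[[x values], [y values]] -> [[x values], [y values], [1 ... 1]]."""
    xs, ys = points
    return np.vstack([xs, ys, np.ones_like(xs, dtype=float)])

def homogeneous2cartesian(points):
    """Divide the x and y rows by the perspective-scale row."""
    return points[:2] / points[2]

pts = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
hom = cartesian2homogeneous(pts)
cart = homogeneous2cartesian(hom * 2.0)  # a common scale factor cancels out
```

Scaling all homogeneous rows by the same factor leaves the recovered cartesian points unchanged, which is exactly why the perspective-scale row can absorb the division in step 4 of the algorithm.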
 
 - run(image: IRImage, noise_mask: NoiseMask, extrapolated_contours: GeometryPolygons, eye_orientation: EyeOrientation) NormalizedIris[source]¶
- Normalize iris using perspective transformation estimated for every region of an image separately.
- Parameters:
- image (IRImage) – Input image to normalize. 
- noise_mask (NoiseMask) – Noise mask. 
- extrapolated_contours (GeometryPolygons) – Extrapolated contours. 
- eye_orientation (EyeOrientation) – Eye orientation angle. 
 
- Returns:
- NormalizedIris object containing normalized image and iris mask. 
- Return type:
- NormalizedIris
- Raises:
- NormalizationError – Raised if the numbers of iris and pupil points differ. 
 
 
iris.nodes.normalization.utils module¶
- iris.nodes.normalization.utils.correct_orientation(pupil_points: ndarray, iris_points: ndarray, eye_orientation: float) Tuple[ndarray, ndarray][source]¶
- Correct orientation by changing the starting angle in pupil and iris points’ arrays.
- Parameters:
- pupil_points (np.ndarray) – Pupil boundary points’ array. NumPy array of shape (num_points = 360, xy_coords = 2). 
- iris_points (np.ndarray) – Iris boundary points’ array. NumPy array of shape (num_points = 360, xy_coords = 2). 
- eye_orientation (float) – Eye orientation angle in radians. 
 
- Returns:
- Tuple of boundary points (pupil_points, iris_points) rotated according to the eye_orientation angle. 
- Return type:
- Tuple[np.ndarray, np.ndarray] 
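With 360 evenly spaced boundary points (one per degree, as documented), changing the starting angle amounts to rolling both arrays by the orientation angle expressed in points. The shift direction and rounding below are assumptions for illustration, not the library's exact convention:

```python
import numpy as np

def correct_orientation(pupil_points, iris_points, eye_orientation):
    """Rotate the starting index of both boundary arrays by the eye
    orientation angle (radians), assuming one point per degree."""
    num_points = pupil_points.shape[0]
    shift = int(round(np.degrees(eye_orientation))) % num_points
    return (np.roll(pupil_points, -shift, axis=0),
            np.roll(iris_points, -shift, axis=0))

pupil = np.arange(360 * 2, dtype=float).reshape(360, 2)
iris = pupil + 1000.0
rp, ri = correct_orientation(pupil, iris, np.radians(10.0))
```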
 
- iris.nodes.normalization.utils.generate_iris_mask(extrapolated_contours: GeometryPolygons, noise_mask: ndarray) ndarray[source]¶
- Generate the iris mask by first finding the intersection region between the extrapolated iris contours and the eyeball contours, then removing from the resulting mask those pixels for which noise_mask is True.
- Parameters:
- extrapolated_contours (GeometryPolygons) – Iris polygon vertices. 
- noise_mask (np.ndarray) – Noise mask. 
 
- Returns:
- Iris mask. 
- Return type:
- np.ndarray 
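Once the contours have been rasterized to boolean masks (a step omitted here), the documented logic reduces to elementwise set operations. A sketch on toy masks, with hypothetical names for the rasterized regions:

```python
import numpy as np

def generate_iris_mask(iris_region, eyeball_region, noise_mask):
    """Intersect the iris and eyeball regions, then clear noisy pixels."""
    return iris_region & eyeball_region & ~noise_mask

h = w = 4
iris_region = np.ones((h, w), dtype=bool)   # iris covers the whole toy image
eyeball_region = np.zeros((h, w), dtype=bool)
eyeball_region[:, :2] = True                # eyeball covers two columns
noise = np.zeros((h, w), dtype=bool)
noise[0, 0] = True                          # one noisy pixel to remove
mask = generate_iris_mask(iris_region, eyeball_region, noise)
```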
 
- iris.nodes.normalization.utils.get_pixel_or_default(image: ndarray, pixel_x: float, pixel_y: float, default: bool | int) bool | int[source]¶
- Get the value of a pixel in the image 2D array.
- Parameters:
- image (np.ndarray) – 2D Array. 
- pixel_x (float) – Pixel x coordinate. 
- pixel_y (float) – Pixel y coordinate. 
- default (Union[bool, int]) – Default value to return when (pixel_x, pixel_y) is out of bounds. 
 
- Returns:
- Pixel value. 
- Return type:
- Union[bool, int] 
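The out-of-bounds guard can be sketched as a plain bounds check; note that NumPy indexes rows first, so the lookup is image[y, x]. A minimal sketch, not the library's code:

```python
import numpy as np

def get_pixel_or_default(image, pixel_x, pixel_y, default):
    """Return image[pixel_y, pixel_x] when the coordinate lies inside the
    2D array, otherwise the supplied default."""
    height, width = image.shape
    x, y = int(pixel_x), int(pixel_y)
    if 0 <= x < width and 0 <= y < height:
        return image[y, x]
    return default

img = np.arange(12).reshape(3, 4)  # 3 rows (y), 4 columns (x)
```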
 
- iris.nodes.normalization.utils.interpolate_pixel_intensity(image: ndarray, pixel_coords: Tuple[float, float]) float[source]¶
- Perform bilinear interpolation to estimate pixel intensity in a given location.
- Parameters:
- image (np.ndarray) – Original, not normalized image. 
- pixel_coords (Tuple[float, float]) – Pixel coordinates. 
 
- Returns:
- Interpolated pixel intensity. 
- Return type:
- float 
 
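Standard bilinear interpolation blends the four neighbouring pixels, weighted by horizontal and vertical proximity. A sketch of that behaviour; the edge handling by clamping is an assumption here:

```python
import numpy as np

def interpolate_pixel_intensity(image, pixel_coords):
    """Bilinear interpolation at a fractional (x, y) location: blend the
    four surrounding pixels by their distances to the sample point."""
    x, y = pixel_coords
    height, width = image.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, width - 1), min(y0 + 1, height - 1)
    dx, dy = x - x0, y - y0
    top = (1 - dx) * image[y0, x0] + dx * image[y0, x1]
    bottom = (1 - dx) * image[y1, x0] + dx * image[y1, x1]
    return (1 - dy) * top + dy * bottom

img = np.array([[0.0, 10.0], [20.0, 30.0]])
```

Sampling at the centre of this 2x2 image averages all four pixels, while sampling exactly on a pixel returns that pixel's value.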
- iris.nodes.normalization.utils.normalize_all(image: ndarray, iris_mask: ndarray, src_points: ndarray) Tuple[ndarray, ndarray][source]¶
- Normalize all points of an image using nearest neighbor.
- Parameters:
- image (np.ndarray) – Original, not normalized image. 
- iris_mask (np.ndarray) – Iris class segmentation mask. 
- src_points (np.ndarray) – Original input image points. 
 
- Returns:
- Tuple with normalized image and mask. 
- Return type:
- Tuple[np.ndarray, np.ndarray]
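Nearest-neighbor sampling of all normalized pixels can be vectorized with NumPy. A sketch assuming src_points holds (x, y) pairs and that out-of-bounds points map to 0 / False; both assumptions are illustrative, not taken from the library:

```python
import numpy as np

def normalize_all(image, iris_mask, src_points):
    """Round each source coordinate to the nearest pixel and gather the
    image and mask values there; out-of-bounds samples become 0 / False."""
    height, width = image.shape
    xs = np.clip(np.round(src_points[..., 0]).astype(int), 0, width - 1)
    ys = np.clip(np.round(src_points[..., 1]).astype(int), 0, height - 1)
    inside = ((src_points[..., 0] >= 0) & (src_points[..., 0] < width)
              & (src_points[..., 1] >= 0) & (src_points[..., 1] < height))
    norm_image = np.where(inside, image[ys, xs], 0)
    norm_mask = np.where(inside, iris_mask[ys, xs], False)
    return norm_image, norm_mask

img = np.arange(16, dtype=float).reshape(4, 4)
mask = img > 5
pts = np.array([[[0.4, 0.4], [2.6, 1.2]],
                [[-1.0, 0.0], [3.0, 3.0]]])
nimg, nmask = normalize_all(img, mask, pts)
```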