
Technical information and specifications

The Neurotechnology Face Verification system provides advanced capabilities for facial recognition applications, including a high-level API for all operations and a face liveness check. There are also certain requirements for the facial image and posture.

The face verification operation can be performed either on the client side through the SDK component or on the server side through the Web Service component.

SDK component specifications

The Face Verification system architecture requires the performed operations to be accounted on the integrator's or end-user's server:

  • Integrators should ensure that an encrypted connection is used for communication with the server.
  • No biometric information is sent to the server during any of the SDK API operations that require communication with the server. The biometric data is kept on the client side; only transaction accounting information is sent to and received from the server.

The following operations are available via the high-level API of the SDK (a usage sketch follows the list):

  • Face template creation – a face is captured from the camera and a face template is extracted for further use in the face verification operation.
    • The server returns proprietary encrypted data as a result of an enrollment transaction that has been completed successfully.
    • Face liveness can be optionally checked during this operation. ICAO compliance check can be optionally used to strengthen the liveness check.
    • Age estimation, glasses and hat detection can be optionally enabled for certain usage scenarios. In such cases Face Verification may generate a warning that can be used to exclude from the onboarding process any pictures that do not conform to the face quality requirements set in the application.
    • The facial template can be converted to a QR code image, which can be used for later verification.
    • A token image of the enrolled face in accordance with ISO 19794-5 criteria can be optionally generated.
    • The template may be saved to any storage (database, file, etc.) together with custom meta-information (such as the person's name). Note that the storage functionality is not part of the Face Verification system, although the programming samples include an example of such an implementation.
  • Face verification – a face captured from the camera, or alternatively a face template captured with the VeriLook SDK, is verified against the face template that was created during the face template creation operation or obtained from a QR code.
    • Face liveness can be optionally checked during this operation. ICAO compliance check can be optionally used to strengthen the liveness check.
  • Template import – a face template that was created with the VeriLook algorithm, or a face image, can be imported into an application based on the Neurotechnology Face Verification system. Later this template can be used for the face verification operation in the same way as the native templates from the face template creation operation.
  • Liveness check – this operation performs only a liveness check of the provided face and returns only the result of the check. See the recommendations for the liveness check below on this page.
    • If the liveness check succeeds, a token image of the enrolled face in accordance with ISO 19794-5 criteria can be optionally generated.
    • ICAO compliance check can be optionally used to strengthen the liveness check.
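
The SDK ships with its own language-specific APIs and samples; the following Python sketch only illustrates how the operations above fit together in an application. All names in it (create_face_template, verify_face, the result fields and the storage object) are hypothetical placeholders, not the actual SDK calls.

    # Illustrative sketch only: create_face_template, verify_face and the result
    # fields are hypothetical placeholders, not the real Face Verification SDK API.

    def onboard_and_verify(camera, storage):
        # Enrollment: capture a face from the camera and extract a template.
        # Liveness and ICAO compliance checks are optional, as described above.
        enrollment = create_face_template(camera,
                                          check_liveness=True,
                                          check_icao_compliance=True)
        if not enrollment.succeeded:
            raise RuntimeError("Enrollment failed: " + enrollment.status)

        # The template may be kept in any storage together with custom
        # meta-information; the storage itself is not part of the system.
        storage.save(person_id="alice",
                     template=enrollment.template,
                     metadata={"name": "Alice Example"})

        # Optionally export the template as a QR code image for later verification.
        enrollment.template.to_qr_code().save("alice_template_qr.png")

        # Verification: capture a new face and compare it against the stored
        # template, again with an optional liveness check.
        stored = storage.load(person_id="alice")
        result = verify_face(camera, reference_template=stored.template,
                             check_liveness=True)
        return result.matched

A template imported from the VeriLook SDK or obtained from a QR code could be passed to the verification step in the same way as a natively enrolled one.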

Web Service component specifications

The Face Verification system architecture requires the performed operations to be accounted on the server side.

The following operations are available via the high-level API of the component (a usage sketch follows the list):

  • Face template creation – a face is captured through a web stream and a face template is extracted for further use in the face verification operation.
    • Face liveness can be optionally checked during this operation. ICAO compliance check can be optionally used to strengthen the liveness check.
    • Age estimation, glasses and hat detection can be optionally enabled for certain usage scenarios. In such cases Face Verification may generate a warning that can be used to exclude from the onboarding process any pictures that do not conform to the face quality requirements set in the application.
    • The facial template can be converted to a QR code image, which can be used for later verification.
    • A token image of the enrolled face in accordance with ISO 19794-5 criteria can be optionally generated.
    • The template is saved on the server together with custom meta-information (such as the person's name).
  • Face verification – a face captured through a web stream, or alternatively a face template captured with the VeriLook SDK, is verified against the face template that was created during the face template creation operation or obtained from a QR code.
    • Face liveness can be optionally checked during this operation. ICAO compliance check can be optionally used to strengthen the liveness check.
  • Template import – a face template that was created with the VeriLook algorithm, or a face image, can be imported into an application based on the Neurotechnology Face Verification system. Later this template can be used for the face verification operation in the same way as the native templates from the face template creation operation.
  • Liveness check – this operation performs only a liveness check of the provided face and returns only the result of the check. See the recommendations for the liveness check below on this page.
    • If the liveness check succeeds, a token image of the enrolled face in accordance with ISO 19794-5 criteria can be optionally generated and stored on the server.
    • ICAO compliance check can be optionally used to strengthen the liveness check.
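
The Web Service exposes these operations over the network; the exact endpoints and payloads are defined in the product documentation. The following Python sketch only illustrates the enrollment-and-verification flow described above against a hypothetical REST-style deployment: the base URL, endpoint paths and field names are assumptions, not the actual Web Service API.

    # Illustrative sketch only: the URL, endpoint paths and field names are
    # hypothetical, not the actual Face Verification Web Service API.
    import requests

    BASE_URL = "https://faceverification.example.com/api"  # assumed deployment URL

    def enroll(person_id, frames):
        # Frames captured from a web stream are sent to the server, which extracts
        # the template and stores it together with custom meta-information.
        response = requests.post(
            f"{BASE_URL}/enroll",
            files=[("frames", frame) for frame in frames],
            data={"personId": person_id,
                  "checkLiveness": "true",
                  "checkIcaoCompliance": "true",
                  "metadata": '{"name": "Alice Example"}'},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()

    def verify(person_id, frames):
        # A face captured through the web stream is verified against the template
        # created during enrollment; liveness can optionally be checked as well.
        response = requests.post(
            f"{BASE_URL}/verify",
            files=[("frames", frame) for frame in frames],
            data={"personId": person_id, "checkLiveness": "true"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json().get("matched", False)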

Basic Recommendations for facial image and posture

The face recognition accuracy heavily depends on the quality of the face image. Image quality during enrollment is important, as it influences the quality of the face template. A simple pre-check sketch based on these recommendations follows the list.

  • 32 pixels is the recommended minimal distance between eyes (IOD) for a face on a video stream to perform face template extraction reliably. 64 pixels or more is recommended for better face recognition results. Note that this distance should be native, not achieved by resizing the video frames.
  • Several face enrollments are recommended for better facial template quality, which improves recognition quality and reliability.
  • Additional enrollments may be needed when the facial hair style changes, especially when a beard or mustache is grown or shaved off.
  • The face recognition engine is intended for use with near-frontal face images and has certain tolerance to face posture:
    • head roll (tilt) – ±15 degrees;
    • head pitch (nod) – ±15 degrees from the frontal position;
    • head yaw (turn) – ±15 degrees from the frontal position.
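
A simple way to act on these recommendations is to pre-check each candidate frame before enrollment. The sketch below encodes the IOD and posture limits listed above; the detection object with eye coordinates and pose angles is a hypothetical stand-in for whatever face detector the application uses.

    # Illustrative pre-check based on the recommendations above; the `detection`
    # object (eye coordinates and pose angles in degrees) is hypothetical.
    import math

    MIN_IOD = 32          # minimal inter-eye distance, in native pixels
    RECOMMENDED_IOD = 64  # recommended for better recognition results
    MAX_ANGLE = 15.0      # tolerance for roll, pitch and yaw, in degrees

    def frame_meets_recommendations(detection):
        iod = math.dist(detection.left_eye, detection.right_eye)
        if iod < MIN_IOD:
            return False, "inter-eye distance below 32 pixels"
        if iod < RECOMMENDED_IOD:
            print("warning: IOD below 64 pixels, recognition quality may suffer")
        for name, angle in (("roll", detection.roll),
                            ("pitch", detection.pitch),
                            ("yaw", detection.yaw)):
            if abs(angle) > MAX_ANGLE:
                return False, f"head {name} exceeds ±15 degrees"
        return True, "ok"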

Face Liveness Detection

Certified algorithm for face liveness check – conformance letter from iBeta available.

The face liveness check algorithm was tested by iBeta and proven to be compliant with ISO 30107-3 Biometric Presentation Attack Detection Standards.

A live video stream from a camera is required for the face liveness check (a stream-validation sketch follows the list):

  • When the liveness check is enabled, it is performed by the face engine before feature extraction. If the face in the stream fails to qualify as "live", the features are not extracted.
  • Only one face should be visible in these frames.
  • At least 1280 x 720 pixels video stream resolution is required for performing the face liveness check in compliance with the ISO 30107-3 Biometric Presentation Attack Detection Standards. Lower resolution video streams can be used if such compliance is not required.
  • 80 pixels is the recommended minimal distance between eyes (IOD) for a face to perform the liveness check reliably. 100 pixels or more is recommended for smoother performance.
  • During the passive liveness check the face should be still and the user has to look directly at the camera, with ±15 degrees tolerance for roll, pitch and yaw, for the best performance.
  • Optionally, ICAO compliance check can be used to strengthen the liveness check.
  • Users can enable these liveness check modes:
    • Active – the engine requests the user to perform certain actions like blinking or moving one's head.
      • 5 frames per second or better frame rate required.
      • This mode can work with both color and grayscale images.
      • This mode requires the user to perform all requested actions to pass the liveness check.
    • Passive – the engine analyzes certain facial features while the user stays still in front of the camera for a short period of time.
      • Color images are required for this mode.
      • 10 frames per second or better frame rate required.
      • A better score is achieved when the user does not move at all.
    • Passive + Blink – the engine analyzes certain facial features while the user stays still in front of the camera for a short period of time and blinks when the engine requests it.
      • Color images are required for this mode.
      • 10 frames per second or higher frame rate required.
    • Passive then active – the engine first tries the passive liveness check and, if it fails, tries the active check. This mode requires color images.
    • Simple – the engine requires the user to turn their head from side to side while looking at the camera.
      • 5 frames per second or better frame rate recommended.
      • This mode can work with both color and grayscale images.
    • Custom – the engine requires the user to turn their head in four directions (up, down, left, right) in a random order.
      • 5 frames per second or better frame rate required.
      • This mode can work with both colored and grayscale images.
      • This mode requires the user to perform all requested actions to pass the liveness check.
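
The mode requirements above can be checked programmatically before starting a liveness session. The sketch below simply encodes the per-mode frame rate and color requirements and the 1280 x 720 resolution needed for an ISO 30107-3 compliant check; the mode names and the validation helper are illustrative, not part of the product API.

    # Illustrative sketch: encodes the per-mode requirements listed above so an
    # application can validate its camera stream before a liveness check.
    # Mode names and this helper are not part of the Face Verification API.

    LIVENESS_MODES = {
        # mode:                (min fps, requires color images)
        "active":              (5,  False),
        "passive":             (10, True),
        "passive_blink":       (10, True),
        "passive_then_active": (10, True),   # passive phase needs 10 fps
        "simple":              (5,  False),  # 5 fps recommended rather than required
        "custom":              (5,  False),
    }

    def stream_supports_mode(mode, fps, is_color, width, height,
                             require_iso_30107_3=True):
        min_fps, needs_color = LIVENESS_MODES[mode]
        if fps < min_fps:
            return False, f"{mode} mode needs at least {min_fps} frames per second"
        if needs_color and not is_color:
            return False, f"{mode} mode requires color images"
        if require_iso_30107_3 and (width < 1280 or height < 720):
            return False, "ISO 30107-3 compliant check needs at least 1280 x 720"
        return True, "ok"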