Prerequisites:
Firebase ML Kit simplifies machine learning by offering pre-trained models that can be used in iOS and Android apps. In this guide we use ML Kit's Face Detection API to identify faces in photos. By the end, we will have an app that recognizes faces in images and displays related information, such as whether a face is smiling or has its eyes closed, behind a user-friendly interface.

Step 1: Create a New Project
Step 2: Connect with ML KIT on Firebase
Step 3: Custom Assets and Gradle

In the app-level build.gradle, add the ML Kit Vision dependency and apply the Google services plugin:

```groovy
dependencies {
    implementation 'com.google.firebase:firebase-ml-vision:17.0.0'
}

apply plugin: 'com.google.gms.google-services'
```


The layout file, fragment_resultdialog.xml (note that id references use `@+id`, not `@ id`):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ScrollView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent">

        <RelativeLayout
            android:id="@+id/relativeLayout"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:layout_marginStart="20dp"
            android:layout_marginEnd="20dp"
            app:layout_constraintEnd_toEndOf="parent"
            app:layout_constraintStart_toStartOf="parent"
            app:layout_constraintTop_toTopOf="parent">

            <!-- Text view to display the result text after reading an image -->
            <TextView
                android:id="@+id/result_text_view"
                android:layout_width="match_parent"
                android:layout_height="wrap_content"
                android:gravity="center"
                android:text="LCOFaceDetection"
                android:textColor="#000000"
                android:textSize="18sp"
                app:layout_constraintEnd_toEndOf="parent"
                app:layout_constraintStart_toStartOf="parent"
                app:layout_constraintTop_toTopOf="parent" />

            <!-- A button with text 'ok' written on it -->
            <Button
                android:id="@+id/result_ok_button"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_below="@id/result_text_view"
                android:layout_centerInParent="true"
                android:layout_marginTop="20dp"
                android:layout_marginBottom="5dp"
                android:background="#75DA8B"
                android:padding="16dp"
                android:text="ok"
                app:layout_constraintEnd_toEndOf="parent"
                app:layout_constraintStart_toStartOf="parent"
                app:layout_constraintTop_toBottomOf="@id/result_text_view" />
        </RelativeLayout>
    </ScrollView>
</androidx.constraintlayout.widget.ConstraintLayout>
```

The Application class, LCOFaceDetection.java, which initializes Firebase at startup:

```java
import android.app.Application;

import com.google.firebase.FirebaseApp;

public class LCOFaceDetection extends Application {
    public final static String RESULT_TEXT = "RESULT_TEXT";
    public final static String RESULT_DIALOG = "RESULT_DIALOG";

    // Initializing our Firebase
    @Override
    public void onCreate() {
        super.onCreate();
        FirebaseApp.initializeApp(this);
    }
}
```

ResultDialog.java, the dialog fragment that displays the detection result text:

```java
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.Button;
import android.widget.TextView;

import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import androidx.fragment.app.DialogFragment;

public class ResultDialog extends DialogFragment {
    Button okBtn;
    TextView resultTextView;

    @Nullable
    @Override
    public View onCreateView(@NonNull LayoutInflater inflater,
                             @Nullable ViewGroup container,
                             @Nullable Bundle savedInstanceState) {
        View view = inflater.inflate(R.layout.fragment_resultdialog, container, false);
        okBtn = view.findViewById(R.id.result_ok_button);
        resultTextView = view.findViewById(R.id.result_text_view);

        // Read the result text passed in by the host activity
        Bundle bundle = getArguments();
        String resultText = bundle.getString(LCOFaceDetection.RESULT_TEXT);
        resultTextView.setText(resultText);

        // Dismiss the dialog when the user taps OK
        okBtn.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                dismiss();
            }
        });
        return view;
    }
}
```

The face detection model can be configured with the following settings:

| Setting | Description |
|---|---|
| Performance mode | Choose between FAST (default) or ACCURATE to prioritize speed or accuracy in face detection. |
| Detect landmarks | Determine whether to identify facial landmarks like eyes, ears, nose, and more. Options include NO_LANDMARKS (default) or ALL_LANDMARKS. |
| Detect contours | Indicate if contours of facial features should be detected, limited to the most prominent face in the image. Choices are NO_CONTOURS (default) or ALL_CONTOURS. |
| Classify faces | Decide whether to categorize faces, such as identifying expressions like "smiling" or "eyes open." Choose between NO_CLASSIFICATIONS (default) or ALL_CLASSIFICATIONS. |
| Minimum face size | Set the minimum size of faces to detect, relative to the image. Default value is 0.1f. |
Enable face tracking to assign IDs to faces for tracking across multiple images.
When contour detection is active, only one face is detected, rendering face tracking ineffective.
Avoid enabling both contour detection and face tracking for improved detection accuracy and speed.
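As a sketch, the settings in the table above map onto the `FirebaseVisionFaceDetectorOptions` builder like this (the specific values chosen here are illustrative, not required):

```java
// Illustrative configuration: accurate mode, all landmarks,
// classification enabled, faces at least 15% of the image width.
FirebaseVisionFaceDetectorOptions options =
        new FirebaseVisionFaceDetectorOptions.Builder()
                .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
                .setLandmarkMode(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
                .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                .setMinFaceSize(0.15f)
                .enableTracking() // omit this if you enable contour detection
                .build();
```

Per the note above, `enableTracking()` and `setContourMode(ALL_CONTOURS)` should not be combined.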
Utilize Firebase ML Vision for face detection in Android applications.
Customize face detection model settings using FirebaseVisionFaceDetectorOptions.
Process captured images to identify facial features like smiles and eye openness.
Display detection results in a dialog box, including attributes of recognized faces.
Integrate Firebase ML Vision functionalities into Android apps for advanced image processing.
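The result text shown in the dialog can be assembled with a small plain-Java helper like the following. The class name and the 0.5 threshold are illustrative assumptions; ML Kit reports each classification attribute as a probability between 0 and 1.

```java
// Hypothetical helper: turns ML Kit classification probabilities
// (each in [0, 1]) into the human-readable text shown in ResultDialog.
public class FaceResultFormatter {
    public static String describeFace(float smileProb,
                                      float leftEyeProb,
                                      float rightEyeProb) {
        StringBuilder sb = new StringBuilder();
        // Treat probabilities above 0.5 as a positive classification.
        sb.append(smileProb > 0.5f ? "Smiling" : "Not smiling");
        sb.append(", left eye ").append(leftEyeProb > 0.5f ? "open" : "closed");
        sb.append(", right eye ").append(rightEyeProb > 0.5f ? "open" : "closed");
        return sb.toString();
    }
}
```

For example, `describeFace(0.9f, 0.8f, 0.7f)` yields "Smiling, left eye open, right eye open".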
Below is a summarized version of the code implementation:
Import necessary libraries and classes for ML Vision and Android functionalities.
Initialize Firebase in the main activity for ML Vision setup.
Implement camera functionality to capture images for face detection.
Configure face detection model options for accurate detection.
Process captured images to detect faces and extract facial attributes.
Handle success and failure scenarios of face detection operations.
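The steps above can be sketched as follows. This is an outline under stated assumptions, not the full implementation: the activity name (`MainActivity`), the request code constant, and the `detectFaces` helper are illustrative, and the no-argument `getVisionFaceDetector()` uses the default detector options.

```java
// Assumed request code for the camera intent.
static final int REQUEST_IMAGE_CAPTURE = 1;

// Launch the device camera to capture an image.
private void dispatchTakePictureIntent() {
    Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    if (takePictureIntent.resolveActivity(getPackageManager()) != null) {
        startActivityForResult(takePictureIntent, REQUEST_IMAGE_CAPTURE);
    }
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK) {
        // The camera intent returns a thumbnail bitmap in the "data" extra.
        Bitmap bitmap = (Bitmap) data.getExtras().get("data");
        detectFaces(bitmap);
    }
}

private void detectFaces(Bitmap bitmap) {
    FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
    FirebaseVisionFaceDetector detector =
            FirebaseVision.getInstance().getVisionFaceDetector();
    detector.detectInImage(image)
            .addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionFace>>() {
                @Override
                public void onSuccess(List<FirebaseVisionFace> faces) {
                    for (FirebaseVisionFace face : faces) {
                        // Probabilities are only meaningful when
                        // classification mode is enabled.
                        float smileProb = face.getSmilingProbability();
                        float leftEyeOpenProb = face.getLeftEyeOpenProbability();
                        // ... build the result text and show ResultDialog ...
                    }
                }
            })
            .addOnFailureListener(new OnFailureListener() {
                @Override
                public void onFailure(@NonNull Exception e) {
                    Toast.makeText(MainActivity.this, "Detection failed",
                            Toast.LENGTH_SHORT).show();
                }
            });
}
```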
The imports used by the main activity:

```java
/* package whatever; do not write package name here */

import android.content.Intent;
import android.graphics.Bitmap;
import android.provider.MediaStore;
import android.widget.Toast;

import androidx.appcompat.app.AppCompatActivity;

import com.google.android.gms.tasks.OnFailureListener;
import com.google.android.gms.tasks.OnSuccessListener;
import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.common.FirebaseVisionPoint;
import com.google.firebase.ml.vision.face.FirebaseVisionFace;
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetector;
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetectorOptions;
import com.google.firebase.ml.vision.face.FirebaseVisionFaceLandmark;

import java.util.List;
```