Yash Agarwal

Face recognition based authentication in Power Apps using Azure Cognitive Services

In this #PowerShot I will show you how to create a face recognition based authentication system in Power Apps using Power Automate and Azure Cognitive Services. We will utilize a canvas app in Power Apps, Power Automate and Face API from Azure Cognitive Services for this setup.


Let's Get Started!


We will look at a canvas app with two screens and implement face recognition based authentication for the second screen. When a user clicks a button to navigate to the second screen, the camera control appears so that the user can take a picture for authentication. Once the picture is taken, it triggers a flow to identify the person. If the face recognition step fails, a button control appears that the user can click to trigger a flow that lets them authenticate via email instead.


Azure Cognitive Services (Face API)


Step 1: Create a "Face" resource in Azure.


Step 2: Create a connection to the Face API connector in Power Automate and pass the Key while creating the connection.


For more details on the Face API resource, follow the documentation here.

 

Power Automate


Flow 1: Create a Person and train the model: In this flow, we will take a look at how to create a person in the Face API, add a face to that person and train the person group so that it can be used to identify a person's face. The pictures of each person are stored in an Azure storage account with relevant metadata.


Step 1: Trigger: Manually trigger the flow.


Step 2: Action: List blobs- List all the profile images stored in the storage account.


Step 3: Action: Create a person group- Create a person group to store the details of the persons to be identified. Provide a value for the Person Group Id and the Name of the group.
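For example, the fields could look like the following (a hypothetical group used throughout this walkthrough; the Face API expects the group ID to be lowercase, using only letters, numbers, '-' and '_'):

Person Group Id: poweraccessgroup

Name: Power Access Group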


Step 4: Control: Apply to each loop- To loop over each image from the storage account.


//Loop Starts


Step 5: Action: Create SAS URI by path- To create a URL for each image to feed into the Face API. Ensure that read permission is selected in the action as shown in the image.

Step 6: Action: Create a person- To create a person in the Face API. Provide the group ID from Step 3 and the name of the person to be created in the Face API. (Here I am providing just one static value)


Step 7: Action: Add person face- To add a face to the person created above. Provide the Person Group Id from Step 3, the Person Id from Step 6 and the Image URL from Step 5.


//Apply to each loop ends

Step 8: Action: HTTP- To train the person group. Provide the endpoint from the Face API resource in Azure and append the URI as shown in the image above to include the group name. Pass the access key for the Face API in the header of the request.
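As a reference, the train request generally looks like the following (the resource name, group ID and key are placeholders for your own values):

Method: POST

URI: https://<your-face-resource>.cognitiveservices.azure.com/face/v1.0/persongroups/<your person group id>/train

Headers: Ocp-Apim-Subscription-Key: <your Face API key>

Body: (empty)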

 

Flow 2: Face recognition: In this flow, we will take a look at adding the image captured in Power Apps to the storage account, detecting the face using the Face API, identifying the person and sending a response back to the Power App.


Step 1: Trigger: From Power Apps- To be triggered from a control in Power Apps.


Step 2: Action: Initialize a variable- To store the data sent from Power Apps (Image, name of image)


Step 3: Action: Initialize a variable- To store the response to be returned to Power Apps.


Step 4: Action: Parse JSON- To extract individual items received as a string from Power Apps.


JSON Schema:


{ "type": "array", "items": { "type": "object", "properties": { "Img": { "type": "string" }, "Name": { "type": "string" } }, "required": [ "Img", "Name" ] } }

Step 5: Action: Create a blob- To create an image file with the image provided from Power Apps in Azure Storage.


Blob name: first(body('Parse_JSON'))?['Name']


Blob content: datauritobinary(first(body('Parse_JSON'))?['Img'])


Step 6: Action: Create SAS URI by path- To create a URL for the image to feed into the Face API. Ensure that read permission is selected in the action as shown in the image.



Step 7: Action: Detect faces- To detect the face from the image sent from Power Apps. Provide the image URL created from the blob storage in the previous actions.


Step 8: Action: HTTP- To invoke the Face API and identify the person from the person group. Provide the API endpoint and append the URI as shown in the screenshot to call the identify operation. Pass the access key in the request header, and the face ID and person group in the request body as shown in the image. Expression for face id: first(body('Detect_faces'))?['faceId']
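As a reference, the identify request generally looks like the following (the resource name, group ID and key are placeholders; the candidate and threshold settings are optional and shown only as an example):

Method: POST

URI: https://<your-face-resource>.cognitiveservices.azure.com/face/v1.0/identify

Headers: Ocp-Apim-Subscription-Key: <your Face API key>, Content-Type: application/json

Body:
{
  "personGroupId": "<your person group id>",
  "faceIds": [ "@{first(body('Detect_faces'))?['faceId']}" ],
  "maxNumOfCandidatesReturned": 1,
  "confidenceThreshold": 0.5
}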


Step 9: Action: Parse JSON 2- To parse the output of the HTTP action.


JSON Schema:


{ "type": "array", "items": { "type": "object", "properties": { "faceId": { "type": "string" }, "candidates": { "type": "array", "items": { "type": "object", "properties": { "personId": { "type": "string" }, "confidence": { "type": "number" } }, "required": [ "personId", "confidence" ] } } }, "required": [ "faceId", "candidates" ] } }

Step 10: Action: Get a Person 2- To get the person from the Face API and confirm their identity. Provide the Person Group Id that was created earlier. Expression for the Person Id: first(body('Parse_JSON_2'))?['candidates'][0]?['personId']


Step 11: Control: Condition- To check if the name of the identified person matches the name received as an input from Power Apps (the image name without its extension). Expression for the image name: first(split(first(body('Parse_JSON'))?['Name'],'.')) - this strips the file extension, and the result is compared with the name returned by the Get a Person 2 action.


Yes,


Action: Append to string variable: To set the success response to be sent to Power Apps.


No,


Action: Append to string variable: To set the failed response to be sent to Power Apps.

Step 12: Action: Response- To send the response variable, set based on the face identification result, back to Power Apps.
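A minimal sketch of this action, assuming the response variable from Step 3 is called varResponse (the canvas app later reads this output as .response, so the body property should be named response):

Body:
{
  "response": "@{variables('varResponse')}"
}

Response Body JSON Schema:
{
  "type": "object",
  "properties": { "response": { "type": "string" } }
}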

 

Flow 3: Email authentication: In this flow, we will take a look at our second option for authentication in case the face recognition fails. We will send an email with options to the user, and only if they approve the login from the email will they be able to authenticate.


Step 1: Trigger: From Power Apps- To be triggered from a control in Power Apps.


Step 2: Action: Initialize a variable- To store the data sent from Power Apps (Email of the user)

Step 3: Action: Send email with options- Send an email with the option to click and authenticate. The selected option is set as a variable and sent to Power Apps.
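As an example, the action could be configured as follows (the canvas app in Screen 1, Step 3 checks for the value "Login", so one option must carry exactly that text; "Deny" is just an illustrative second option):

To: <email of the user sent from Power Apps>

Subject: Login approval requested

User Options: Login, Deny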


Step 4: Action: Respond to a Power App or flow- Send the option selected by the user back to the Power App so it can assess the authentication and act accordingly.
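A minimal sketch of this action, assuming a single text output (the canvas app reads it as .selectedoption, so the output should be named accordingly):

Output name: selectedoption

Output type: Text

Value: the SelectedOption dynamic content from the Send email with options action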

 

Canvas App


Now let's take a look at the canvas app where we will implement the face recognition based authentication mechanism.


Screen 1:

Step 1: In the "On Start" property of the app, use the expression: Set(emailvis, false);Set(camvis, false) to set the visibility of the camera control and the button for email authentication.


Step 2: This is a button control to navigate to Screen 2. The expression on the "OnSelect" property of this control is: Set(camvis, true) to set the visibility of the camera group to true.


Step 3: This is a button control that will be visible if the camera control based face authentication fails. Expression on the "OnSelect" property:


Set(opt, EmailAuth.Run(User().Email).selectedoption);
If(opt="Login", Navigate(Screen2) && Notify("Successfully Logged in", NotificationType.Success), Notify("Unable to log you in. Contact administrator", NotificationType.Error))


Explanation: Run the email authentication flow and collect the user's response in the "opt" variable. Check if opt equals "Login"; if yes, navigate to Screen 2 and show the success notification. Else, show the error notification message.


Step 4: This is a group that includes a label and a camera control. The visibility of this group depends on the press of the button in Step 2. The expression used on the "OnSelect" property of the camera control is:


ClearCollect(image, {Name: Concatenate(User().FullName, ".jpg"), Img: Camera1.Photo});
Set(camauth, 'PowerApp->Initializevariable'.Run(JSON(image, JSONFormat.IncludeBinaryData)).response);
If(camauth="success", Navigate(Screen2) && Notify("Successfully authenticated", NotificationType.Success), Notify("Unable to authenticate. Try other method", NotificationType.Error) && Set(emailvis, true))


Explanation: Collect the captured image into the image collection, using the user's full name as the image name. Pass the collection to the flow using the JSON function. Set the variable camauth based on the response received from the flow; if that response is "success", navigate to Screen 2 and show the success message. Else, show the error message and set the visibility variable to display the email authentication option.


Screen 2:

Step 1: This is a button control to navigate to the home screen. Expression on the "OnSelect" property of this control: Navigate(Screen1);Notify("Logged out",NotificationType.Warning)

 

Setup in Action

 

In this post, we saw how to create a face recognition based authentication system in Power Apps using Power Automate and Azure Cognitive Services. The same setup can be used across apps, on screens that hold sensitive data which only authorized users should access. In such cases this setup serves as a two-factor authentication mechanism.


I hope you found this interesting and it helped you. Thank you for reading!
