I have been interested in Artificial Intelligence (AI) & Robotics for a long time, but never made the time to study the field. Recently I decided it was about time to start investing time in this topic. I’m a big fan of learning by doing, and my first attempt is to create an AI that can tell me apart from other people.
I used 205 photos of people’s faces, 103 of which are of my own face, to train the AI. Another 45 photos (5 of me) were used for testing the model.
The prediction results from the AI model are shown below. The model correctly predicted 40 of the 45 photos, which corresponds to an accuracy of about 89%.
Blue (NOT ME) and Green (ME) are correct predictions.
Red are false predictions.
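The accuracy figure above follows directly from the counts; a minimal sketch of the calculation, using the numbers from the test run:

```python
# Accuracy = correct predictions / total test photos (40 of 45 above).
correct, total = 40, 45
accuracy = correct / total
print(f"Accuracy: {accuracy:.1%}")  # → Accuracy: 88.9%
```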
The fun part was to see whether I could apply the AI to “real world” usage and how well it would perform. To keep things simple I created a small web application that captures an image from the webcam and then runs the prediction on it. I tested the AI in three stages.
- The first stage was a live capture of myself.
- The second stage was to show the AI a photo of me that is one month old and was not used during training.
- The third stage was to show the AI random photos of people from the Internet, both men and women, with and without glasses.
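The post doesn’t show the model’s code, but a binary face classifier like this typically outputs a probability that the photo is “me”, which is then thresholded into the ME / NOT ME labels used above. A minimal sketch of that decision step, where the function name and the 0.5 threshold are illustrative assumptions:

```python
def label_prediction(p_me: float, threshold: float = 0.5) -> str:
    """Map the classifier's 'is me' probability to a label.

    Both `p_me` and the 0.5 threshold are assumptions for
    illustration; the actual model's decision rule is not shown.
    """
    return "ME" if p_me >= threshold else "NOT ME"

print(label_prediction(0.91))  # → ME
print(label_prediction(0.12))  # → NOT ME
```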
In total 26 photos were used for this testing, and the result gives an accuracy of about 80%. Stats and photos from the live testing are shown below.
| Image | Correct Predictions | False Predictions | Total Images |
|-------|---------------------|-------------------|--------------|
During training I noticed some instability, which I had foreseen could appear. Because of this instability I think the AI could in some cases drop to an accuracy of around 50–60%. One reason for the instability may be the dataset used for training: 205 photos is, in my opinion, too few. Ideally there should be at least 500 photos of me and many more of other people.
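One cheap way to grow a small face dataset without taking new photos is simple augmentation, such as horizontal flips (faces are roughly left-right symmetric). A toy sketch on a tiny 2×3 “image” represented as a nested list; real code would use a library such as Pillow or torchvision transforms, and this is an assumed improvement, not what the post actually did:

```python
# Horizontal flip: mirror each pixel row left to right, doubling the
# number of usable training images for (near-)symmetric subjects.
def hflip(image):
    """Return a left-right mirrored copy of an image (list of rows)."""
    return [row[::-1] for row in image]

img = [[1, 2, 3],
       [4, 5, 6]]
print(hflip(img))  # → [[3, 2, 1], [6, 5, 4]]
```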
The next stage is to learn how the AI model could be improved.
The application can be accessed from this link: MyFaceRecognition webapp.