Google Vision is a cloud service for face detection. It is extremely accurate, but the drawback is that it only works over API calls, so real-time face tracking is impossible with it. We therefore considered the remaining libraries, which can be embedded directly inside a Swift project, so the real-time requirement can be fulfilled.
The library that always comes to mind for computer vision is OpenCV. Integrating OpenCV into a Swift project is quite tedious: we have to wrap the C++ API in Objective-C and then call those Objective-C functions from Swift. With its Haar cascade method, OpenCV supports detection of many facial features such as the nose, mouth, and eyes. However, it turned out that the accuracy was very low.
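To make the wrapping approach concrete, here is a minimal sketch of the Swift side. The wrapper class name (`FaceDetectorBridge`) and its method are assumptions for illustration; in practice the class would be implemented in an Objective-C++ (`.mm`) file that calls `cv::CascadeClassifier` with a Haar cascade XML file, and exposed to Swift through the project's bridging header.

```swift
import UIKit

// Assumed Objective-C++ wrapper, declared in the bridging header roughly as:
// @interface FaceDetectorBridge : NSObject
// - (NSArray<NSValue *> *)detectFacesIn:(UIImage *)image;
// @end
// The .mm implementation would run OpenCV's Haar cascade detector.

final class FaceTracker {
    private let bridge = FaceDetectorBridge()  // hypothetical wrapper name

    func faceRects(in image: UIImage) -> [CGRect] {
        // The Objective-C side returns NSValue-boxed CGRects,
        // one per detected face.
        return bridge.detectFaces(in: image).map { $0.cgRectValue }
    }
}
```

Swift never sees C++ directly; all OpenCV types stay behind the Objective-C interface, which is what makes the setup tedious.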
DLib is a very powerful face detection library that also supports facial landmarks. The face shape and the chin curve can be detected with DLib. That is a big bonus, because adding a beard to the face is a feature we really want, and it can be done easily with facial landmark detection. The drawback of DLib is its very complicated setup and heavy library dependencies. We did not choose it for the first prototype due to its complexity.
Swift Core Image is a nice solution for fast development. It is unbelievably easy to achieve face tracking with it, and with higher precision than OpenCV. However, its big disadvantage is that its functionality is very limited: it returns point positions for the mouth and eyes instead of bounding boxes. As a result, when we apply a turban and beard to a face, they are not perfectly aligned. Because of the limited time, we still chose Swift Core Image as the main library for this project.
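A minimal sketch of the Core Image approach, showing exactly the limitation above: `CIDetector` gives the face's bounding box plus eye and mouth *points* (`CGPoint`s), not per-feature rectangles, so overlay sizes for the beard and turban have to be estimated from those points.

```swift
import CoreImage

// Create a face detector with high accuracy and collect any detected faces.
func detectFaces(in image: CIImage) -> [CIFaceFeature] {
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    return detector?.features(in: image) as? [CIFaceFeature] ?? []
}

// Usage sketch:
// for face in detectFaces(in: ciImage) {
//     face.bounds                                   // whole-face bounding box
//     if face.hasMouthPosition { face.mouthPosition } // a CGPoint, not a rect
//     if face.hasLeftEyePosition { face.leftEyePosition }
// }
```

Because only single points come back for the mouth and eyes, the overlay geometry must be derived from the face bounds and the distances between those points, which is where the alignment errors come from.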
I think I overestimated myself a little. This task is quite hard for me. I should have chosen the safer workaround that trades some user experience for better accuracy.