Detecting Face Landmarks with Dlib on iOS

This article walks through an example of detecting face landmarks with Dlib on iOS. It covers compiling the Dlib library, landmark detection on a live video stream, and landmark detection on photos. The result looks like this

demo.gif

1. Introduction to Dlib

Dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real-world problems. For the main project documentation and API reference, see the official site dlib.net or GitHub github.com/davisking/d…

2. Compiling Dlib in Xcode

In this step we compile Dlib into static libraries with Xcode. First, download the Dlib source. (If you use a Dlib library someone else has already built, you can skip straight to the next step.) Completing this step requires the following:

  • X11 (download and install it if you don't have it)
  • Xcode
  • cmake (install it via Homebrew if it's missing)
2.1 Download the source

Download the source from Dlib's GitHub repository github.com/davisking/d…

2.2 Generate the Xcode build project for Dlib. In the root directory of the downloaded Dlib source, run the following commands
cd examples/

mkdir build

cd build

cmake -G Xcode ..

cmake --build . --config Release

The build directory will now contain an examples.xcodeproj and a dlib_build folder, as shown below

001.jpg

Enter the dlib_build directory, open dlib.xcodeproj, and change the dlib project's settings as shown below

002.jpg

003.jpg

Select the dlib target and check its build settings, making sure the target's settings match the dlib project settings shown above, as in the screenshot below

004.jpg

Select the dlib target and build the x86 (simulator) and arm (device) static libraries separately, as shown below

Building the x86 static library

005.jpg

Building the arm static library

006.jpg

In the project navigator, right-click the compiled dlib static library under Products and reveal its containing folder

007.jpg

008.jpg

Go up one directory and you will see the simulator static library folder and the device static library folder

009.jpg

The dlib static libraries we need are now built.

3. Creating the Face Landmark Detection iOS App

3.1 Create an Xcode project named DlibDemo

Create a new folder named Dlib in the project root and copy the compiled dlib static libraries into it: the arm static library goes in a Lib-iphoneos directory and the x86 static library in a Lib-iphonesimulator directory. Also copy the dlib folder from the dlib-master source tree into this directory, then download the shape_predictor_68_face_landmarks.dat model and copy it in as well. The result looks like this

010.jpg

Right-click and choose Add Files to "DlibDemo"… to add this folder to the project.

011.jpg

Then right-click the dlib folder, choose Delete, and select Remove References, which removes it from the project without deleting it from disk. The project will use a header search path to find the dlib directory, so the dlib folder itself must not be a project member; if it is, the build will fail. Remove Lib-iphonesimulator and Lib-iphoneos from the project the same way, since the project will locate the dlib.a static library through a library search path instead. After this, the only thing left under the project's Dlib folder is shape_predictor_68_face_landmarks.dat

012.jpg

3.2 Configure the build settings

Set the header search path HEADER_SEARCH_PATHS to $(PROJECT_DIR)/DlibDemo/Dlib/ so the compiler can find Dlib's headers

013.jpg

Set LIBRARY_SEARCH_PATHS to $(SRCROOT)/DlibDemo/Dlib/Lib$(EFFECTIVE_PLATFORM_NAME). $(EFFECTIVE_PLATFORM_NAME) is a built-in Xcode macro: when building for the simulator it expands to -iphonesimulator, so Lib$(EFFECTIVE_PLATFORM_NAME) becomes Lib-iphonesimulator, the folder holding Dlib's x86 static library; when building for a device it expands to -iphoneos, so the path becomes Lib-iphoneos, the folder holding Dlib's arm static library.

014.jpg

Add -l"dlib" to OTHER_LDFLAGS

015.jpg

Set OTHER_CFLAGS = -DNDEBUG -DDLIB_JPEG_SUPPORT -DDLIB_USE_BLAS -DDLIB_USE_LAPACK -DLAPACK_FORCE_UNDERSCORE. DLIB_JPEG_SUPPORT enables dlib's JPEG I/O (used by save_jpeg below), and the BLAS/LAPACK defines let dlib call into BLAS/LAPACK routines, which the Accelerate framework added in a later step provides.

016.jpg

For a brand-new project, create DlibDemo/DlibDemo-Bridging-Header.h and set SWIFT_OBJC_BRIDGING_HEADER = DlibDemo/DlibDemo-Bridging-Header.h

017.jpg

Set the Debug configuration's optimization level to Fastest, Smallest [-Os]. I was stuck here for a long time: without this setting, detection is extremely slow and takes a very long time. Release builds already default to Fastest, Smallest [-Os], so nothing needs to change there. Once you finish debugging, you can switch the Debug optimization back to None [-O0] so it doesn't interfere with debugging.

018.jpg

Add the framework dependency Accelerate.framework

020.jpg

4. Using Dlib in the Project

4.1 Wrapping Dlib

Since Dlib is written in C++, create DlibWrapper.h and DlibWrapper.mm to wrap it. DlibWrapper.h exposes only the method declarations and must not include any Dlib headers; otherwise every file that imports DlibWrapper.h would have to have its implementation file renamed to .mm. DlibWrapper.mm includes the Dlib headers and implements the methods.

The code for DlibWrapper.h is as follows

#import <Foundation/Foundation.h>
#import <CoreMedia/CoreMedia.h>

@interface DlibWrapper : NSObject

- (instancetype)init;
- (void)prepare;
- (void)doWorkOnSampleBuffer:(CMSampleBufferRef)sampleBuffer inRects:(NSArray<NSValue *> *)rects;
- (void)doWorkOnImagePath:(NSString*)imagePath savePath:(NSString*)savePath;
@end

The code for DlibWrapper.mm is as follows

#import "DlibWrapper.h"
#import <UIKit/UIKit.h>

#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing.h>
#include <dlib/image_io.h>
#include <dlib/image_processing/render_face_detections.h>

@interface DlibWrapper ()

@property (assign) BOOL prepared;

+ (std::vector<dlib::rectangle>)convertCGRectValueArray:(NSArray<NSValue *> *)rects;

@end
@implementation DlibWrapper {
    dlib::shape_predictor sp;
    dlib::frontal_face_detector detector;
}


- (instancetype)init {
    self = [super init];
    if (self) {
        _prepared = NO;
    }
    return self;
}

- (void)prepare {
    NSString *modelFileName = [[NSBundle mainBundle] pathForResource:@"shape_predictor_68_face_landmarks" ofType:@"dat"];
    std::string modelFileNameCString = [modelFileName UTF8String];
    
    // deserializing the landmark model takes noticeable time (the .dat file is large),
    // so it is done once here rather than per frame
    dlib::deserialize(modelFileNameCString) >> sp;
    detector = dlib::get_frontal_face_detector();

    // FIXME: test this stuff for memory leaks (cpp object destruction)
    self.prepared = YES;
}

- (void)doWorkOnSampleBuffer:(CMSampleBufferRef)sampleBuffer inRects:(NSArray<NSValue *> *)rects {
    
    if (!self.prepared) {
        [self prepare];
    }
    
    dlib::array2d<dlib::bgr_pixel> img;
    
    // lock the pixel buffer and get direct access to its memory
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    char *baseBuffer = (char *)CVPixelBufferGetBaseAddress(imageBuffer);
    
    // set_size expects rows, cols format
    img.set_size(height, width);
    
    // copy samplebuffer image data into dlib image format
    img.reset();
    long position = 0;
    while (img.move_next()) {
        dlib::bgr_pixel& pixel = img.element();

        // assuming BGRA format with no row padding (bytesPerRow == width * 4)
        long bufferLocation = position * 4; //(row * width + column) * 4;
        char b = baseBuffer[bufferLocation];
        char g = baseBuffer[bufferLocation + 1];
        char r = baseBuffer[bufferLocation + 2];
        //        we do not need this
        //        char a = baseBuffer[bufferLocation + 3];
        
        dlib::bgr_pixel newpixel(b, g, r);
        pixel = newpixel;
        
        position++;
    }
    
    // unlock buffer again until we need it again
    CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
    
    // convert the face bounds list to dlib format
    std::vector<dlib::rectangle> convertedRectangles = [DlibWrapper convertCGRectValueArray:rects];
    
    // for every detected face
    for (unsigned long j = 0; j < convertedRectangles.size(); ++j)
    {
        dlib::rectangle oneFaceRect = convertedRectangles[j];
        
        // detect all landmarks
        dlib::full_object_detection shape = sp(img, oneFaceRect);
        
        // and draw them into the image (samplebuffer)
        for (unsigned long k = 0; k < shape.num_parts(); k++) {
            dlib::point p = shape.part(k);
            draw_solid_circle(img, p, 2, dlib::rgb_pixel(0, 255, 0));
        }
    }
    
    // lock the buffer again, this time read/write (flag 0), so we can copy the annotated pixels back
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // copy dlib image data back into samplebuffer
    img.reset();
    position = 0;
    while (img.move_next()) {
        dlib::bgr_pixel& pixel = img.element();
        
        // assuming bgra format here
        long bufferLocation = position * 4; //(row * width + column) * 4;
        baseBuffer[bufferLocation] = pixel.blue;
        baseBuffer[bufferLocation + 1] = pixel.green;
        baseBuffer[bufferLocation + 2] = pixel.red;
        //        we do not need this
        //        char a = baseBuffer[bufferLocation + 3];
        
        position++;
    }
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}

- (void)doWorkOnImagePath:(NSString*)imagePath savePath:(NSString*)savePath {
    if (!self.prepared) {
        return;
    }
    
    std::string fileName = [imagePath UTF8String];
    // create a dlib image
    dlib::array2d<dlib::rgb_pixel> img;
    
    // load the image from disk
    dlib::load_image(img, fileName);
    
    // run dlib's face detector
    std::vector<dlib::rectangle> dets = detector(img);
    NSLog(@"Number of faces: %lu", dets.size()); // number of detected faces
    
    for (unsigned long j = 0; j < dets.size(); ++j) {
        dlib::full_object_detection shape = sp(img, dets[j]);
        // and draw the landmarks into the image
        for (unsigned long k = 0; k < shape.num_parts(); k++) {
            dlib::point p = shape.part(k);
            // draw a solid circle at point p with radius 2; the rgb_pixel is the color
            dlib::draw_solid_circle(img, p, 2, dlib::rgb_pixel(0, 255, 0));
        }
    }
    dlib::save_jpeg(img, [savePath UTF8String]);
}

+ (std::vector<dlib::rectangle>)convertCGRectValueArray:(NSArray<NSValue *> *)rects {
    std::vector<dlib::rectangle> myConvertedRects;
    for (NSValue *rectValue in rects) {
        CGRect rect = [rectValue CGRectValue];
        long left = rect.origin.x;
        long top = rect.origin.y;
        long right = left + rect.size.width;
        long bottom = top + rect.size.height;
        dlib::rectangle dlibRect(left, top, right, bottom);

        myConvertedRects.push_back(dlibRect);
    }
    return myConvertedRects;
}
@end

In DlibDemo-Bridging-Header.h, add

#import "DlibWrapper.h"
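
With the bridging header in place, Swift sees the wrapper's Objective-C methods under Swift-style names: doWorkOnSampleBuffer:inRects: is imported as doWork(on:inRects:), and doWorkOnImagePath:savePath: as doWork(onImagePath:savePath:), which is exactly how the Swift code below calls them. A minimal sketch of a call site (detectLandmarks is a hypothetical helper, not part of the demo project):

import CoreMedia

// Hypothetical helper: run landmark detection on one captured frame.
// sampleBuffer and faceRects come from the capture pipeline shown in 4.2.
func detectLandmarks(in sampleBuffer: CMSampleBuffer,
                     faceRects: [NSValue],
                     using wrapper: DlibWrapper) {
    wrapper.doWork(on: sampleBuffer, inRects: faceRects)
}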

4.2 Writing the video-stream face landmark detection code

First, create SessionHandler.swift, which captures the video stream and calls DlibWrapper on every frame to detect face landmarks. The code is as follows

import AVFoundation

class SessionHandler : NSObject, AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureMetadataOutputObjectsDelegate {
    var session = AVCaptureSession()
    let layer = AVSampleBufferDisplayLayer()
    let sampleQueue = DispatchQueue(label: "com.zweigraf.DisplayLiveSamples.sampleQueue", attributes: [])
    let faceQueue = DispatchQueue(label: "com.zweigraf.DisplayLiveSamples.faceQueue", attributes: [])
    let wrapper = DlibWrapper()
    
    var currentMetadata: [AnyObject]
    
    override init() {
        currentMetadata = []
        super.init()
    }
    
    func openSession() {
        guard let device = AVCaptureDevice.devices(for: AVMediaType.video)
            .first(where: { $0.position == .front }) else {
            return
        }
        
        let input = try! AVCaptureDeviceInput(device: device)
        
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: sampleQueue)
        
        let metaOutput = AVCaptureMetadataOutput()
        metaOutput.setMetadataObjectsDelegate(self, queue: faceQueue)
    
        session.beginConfiguration()
        
        if session.canAddInput(input) {
            session.addInput(input)
        }
        if session.canAddOutput(output) {
            session.addOutput(output)
        }
        if session.canAddOutput(metaOutput) {
            session.addOutput(metaOutput)
        }
        
        session.commitConfiguration()
        
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
    
        // availableMetadataObjectTypes changes when the output is added to the session;
        // before it is added, availableMetadataObjectTypes is empty
        metaOutput.metadataObjectTypes = [AVMetadataObject.ObjectType.face]
        
        wrapper?.prepare()
        
        // startRunning() blocks while the capture pipeline starts up;
        // Apple recommends calling it off the main thread
        session.startRunning()
        
        for output in session.outputs {
            for av in output.connections {
                if av.isVideoMirroringSupported {
                    av.videoOrientation = .portrait
                    av.isVideoMirrored = true
                }
            }
        }
        layer.videoGravity = AVLayerVideoGravity.resizeAspectFill

    }
    
    // MARK: AVCaptureVideoDataOutputSampleBufferDelegate
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

        if !currentMetadata.isEmpty {
            let boundsArray = currentMetadata
                .compactMap { $0 as? AVMetadataFaceObject }
                .map { (faceObject) -> NSValue in
                    let convertedObject = output.transformedMetadataObject(for: faceObject, connection: connection)
                    return NSValue(cgRect: convertedObject!.bounds)
            }
            
            wrapper?.doWork(on: sampleBuffer, inRects: boundsArray)
        }

        layer.enqueue(sampleBuffer)
    }
    
    func captureOutput(_ output: AVCaptureOutput, didDrop sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        print("DidDropSampleBuffer")
    }
    
    // MARK: AVCaptureMetadataOutputObjectsDelegate
    
    func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
        currentMetadata = metadataObjects as [AnyObject]
    }
}

Create VideoScanViewController.swift, which uses SessionHandler. The code is as follows

import UIKit

class VideoScanViewController: UIViewController {
    let sessionHandler = SessionHandler()
    
    lazy var preview: UIView = {
        let view = UIView()
        return view
    }()
    
    override func viewDidLoad() {
        super.viewDidLoad()
        self.navigationItem.title = "Video Stream Face Landmarks"
        self.view.backgroundColor = .white
        self.view.addSubview(preview)
        preview.frame = CGRect(x: 0, y: 0, width: self.view.frame.width, height: self.view.frame.height)
    }
    
    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
    
    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        sessionHandler.openSession()
        let layer = sessionHandler.layer
        layer.frame = preview.bounds
        preview.layer.addSublayer(layer)
        view.layoutIfNeeded()
    }
}
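
One thing the demo glosses over: the capture session delivers no frames unless the app has camera permission, so Info.plist needs an NSCameraUsageDescription entry. If you want to request access explicitly before calling openSession(), here is a minimal sketch (this helper is my addition, not part of the original project):

import AVFoundation

// Ask for camera access before starting the capture session.
// NSCameraUsageDescription must be present in Info.plist.
func requestCameraAccess(_ completion: @escaping (Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        completion(true)
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            DispatchQueue.main.async { completion(granted) }
        }
    default:
        completion(false)
    }
}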

4.3 Writing the photo face landmark detection code

Create AlbumViewController.swift, which uses DlibWrapper to detect face landmarks in a photo. The code is as follows

import UIKit

class AlbumViewController: UIViewController {

    lazy var picker = UIImagePickerController()
    
    lazy var imageView: UIImageView = {
        let imageView = UIImageView()
        imageView.contentMode = UIView.ContentMode.scaleAspectFit
        return imageView
    }()
    
    lazy var wrapper = DlibWrapper()

    var filePath = ""
    var filePathWrite = ""

    override func viewDidLoad() {
        super.viewDidLoad()
        self.view.backgroundColor = .white
        self.navigationItem.rightBarButtonItem = UIBarButtonItem.init(title: "Album", style: .plain, target: self, action: #selector(albumClick(_:)))
        self.view.addSubview(imageView)
        imageView.frame = self.view.bounds
        
        let cachePath = NSSearchPathForDirectoriesInDomains(.cachesDirectory, .userDomainMask, true).first!
        filePath = (cachePath as NSString).appendingPathComponent("DlibCacheFileRead.jpg")
        filePathWrite = (cachePath as NSString).appendingPathComponent("DlibCacheFileWrite.jpg")
        wrapper?.prepare()
    }
    
    @objc func albumClick(_ button: UIButton) {
        let sourceType = UIImagePickerController.SourceType.photoLibrary
        picker.delegate = self
        picker.sourceType = sourceType
        self.present(picker, animated: true, completion: nil)
    }
}

extension AlbumViewController: UIImagePickerControllerDelegate,UINavigationControllerDelegate {
    
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        let image = info[UIImagePickerController.InfoKey.originalImage]
        picker.dismiss(animated: true, completion: nil)
        DispatchQueue.main.async { [weak self] in
            if let image = image as? UIImage, let filePath = self?.filePath, let filePathWrite = self?.filePathWrite  {
                let imageData = image.jpegData(compressionQuality: 1.0)
                try? imageData?.write(to: URL(fileURLWithPath: filePath))
                self?.wrapper?.doWork(onImagePath: filePath, savePath: filePathWrite)
                let detectImage = UIImage.init(contentsOfFile: filePathWrite)
                self?.imageView.image = detectImage
            }
        }
    }
    
    func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
        
        picker.dismiss(animated: true, completion: nil)
    }
}
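
A caveat about the picker callback above: the JPEG round-trip and the dlib detection run on the main queue, so a large photo will freeze the UI while dlib works. A sketch of a variant that keeps the heavy work on a background queue, assuming the same filePath/filePathWrite properties (the detect(_:) method is hypothetical, not in the original code):

import UIKit

extension AlbumViewController {
    // Hypothetical variant of the picker-callback body: same flow,
    // but file I/O and dlib detection run off the main queue.
    func detect(_ image: UIImage) {
        DispatchQueue.global(qos: .userInitiated).async { [weak self] in
            guard let self = self,
                  let imageData = image.jpegData(compressionQuality: 1.0) else { return }
            try? imageData.write(to: URL(fileURLWithPath: self.filePath))
            self.wrapper?.doWork(onImagePath: self.filePath, savePath: self.filePathWrite)
            let detected = UIImage(contentsOfFile: self.filePathWrite)
            DispatchQueue.main.async {
                self.imageView.image = detected
            }
        }
    }
}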

5. Results

021.jpg

This code compiles and runs on both the simulator and a real device. The simulator has no camera, though, so the video-stream landmark detection only works on a device.
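
If you want to pick the right demo at compile time, Swift's targetEnvironment condition can guard the camera path. A small sketch (makeRootViewController is a hypothetical factory, not in the original project):

import UIKit

// Hypothetical factory: fall back to the photo demo on the simulator,
// which has no camera for the video-stream demo.
func makeRootViewController() -> UIViewController {
    #if targetEnvironment(simulator)
    return AlbumViewController()
    #else
    return VideoScanViewController()
    #endif
}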

6. Summary

This article walked through compiling the Dlib library and using it in an Xcode project, covering face landmark detection on a video stream and on photos. It touches on building and linking static libraries, mixing Swift with C++, and AVFoundation, so there are quite a few details to get right. For a production app, you would also need to think about model compression, video-stream optimization, general performance, bitcode, and so on. If you have questions, follow my official account and leave a message so we can discuss and improve together. For the source code, reply "iOS" on the account.
