In recent years, as mobile hardware has kept improving, all kinds of beauty-camera apps have appeared, offering skin smoothing, face slimming, stickers, and similar features. Behind all of them lies one key technology: face detection.
The typical detection pipeline
(1) Face detection
Determine whether the image contains any faces, how many there are, and where each face is located.
(2) Facial keypoint detection
Using the face locations from step 1 together with the image data, locate the keypoints of each face.
(3) Processing
Use the keypoints to build whatever feature you need.
A quick primer: what are keypoints?
Take a look at the figure below.
The figure describes the contour of a face with 68 points; those 68 points are the keypoints (landmarks). There are also 5-point layouts and other schemes.
Face detection
Today we will use the system AVFoundation framework on iOS to detect faces in a live video stream and draw the detected rectangles onto it. First, let's see what the result looks like.
Mars may just be too cool to be detected!
Ingredients
AVFoundation
opencv2.framework (download opencv2)
P.S.: some OpenCV builds ship with iOS helper functions (UIImageToMat / MatToUIImage) and some don't; check the version you download. If yours doesn't have them, you can write the conversion helpers yourself. Also, since we mix in C++, your controller's .m file must be renamed to .mm.

#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>
// Header locations vary across OpenCV versions; these match OpenCV 3+.
#import <opencv2/opencv.hpp>
#import <opencv2/imgcodecs/ios.h>
@interface ViewController () <AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureMetadataOutputObjectsDelegate>
@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) UIImageView *cameraView;
@property (nonatomic, strong) dispatch_queue_t sample;
@property (nonatomic, strong) dispatch_queue_t faceQueue;
@property (nonatomic, copy) NSArray *currentMetadata; // latest face metadata
@end
@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    _currentMetadata = [NSMutableArray arrayWithCapacity:0];
    [self.view addSubview:self.cameraView];

    _sample = dispatch_queue_create("sample", NULL);
    _faceQueue = dispatch_queue_create("face", NULL);

    // Pick the front camera.
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    AVCaptureDevice *deviceF;
    for (AVCaptureDevice *device in devices) {
        if (device.position == AVCaptureDevicePositionFront) {
            deviceF = device;
            break;
        }
    }

    AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:deviceF error:nil];
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [output setSampleBufferDelegate:self queue:_sample];

    AVCaptureMetadataOutput *metaout = [[AVCaptureMetadataOutput alloc] init];
    [metaout setMetadataObjectsDelegate:self queue:_faceQueue];

    self.session = [[AVCaptureSession alloc] init];
    [self.session beginConfiguration];
    if ([self.session canAddInput:input]) {
        [self.session addInput:input];
    }
    if ([self.session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
        [self.session setSessionPreset:AVCaptureSessionPreset640x480];
    }
    if ([self.session canAddOutput:output]) {
        [self.session addOutput:output];
    }
    if ([self.session canAddOutput:metaout]) {
        [self.session addOutput:metaout];
    }
    [self.session commitConfiguration];

    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [output setVideoSettings:videoSettings];

    // Ask the metadata output to report faces. Other types (QR codes, etc.)
    // can be added here as well: whenever the stream contains one of the
    // requested types, the second delegate method below fires.
    [metaout setMetadataObjectTypes:@[AVMetadataObjectTypeFace]];

    AVCaptureSession *session = (AVCaptureSession *)self.session;
    // The front camera must be mirrored, otherwise the picture is flipped.
    for (AVCaptureVideoDataOutput *output in session.outputs) {
        for (AVCaptureConnection *av in output.connections) {
            // Check whether this connection supports mirroring.
            if (av.supportsVideoMirroring) {
                // Mirror the picture and keep it in portrait orientation.
                av.videoOrientation = AVCaptureVideoOrientationPortrait;
                av.videoMirrored = YES;
            }
        }
    }
    [self.session startRunning];
}
#pragma mark - AVCaptureSession Delegates -

- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    NSMutableArray *bounds = [NSMutableArray arrayWithCapacity:0];
    // For every frame, look at self.currentMetadata and convert each
    // AVMetadataFaceObject into the output's coordinate space with
    // transformedMetadataObjectForMetadataObject:connection:. The resulting
    // AVMetadataObject's bounds is the face rectangle; collect them all.
    for (AVMetadataFaceObject *faceobject in self.currentMetadata) {
        AVMetadataObject *face = [output transformedMetadataObjectForMetadataObject:faceobject connection:connection];
        [bounds addObject:[NSValue valueWithCGRect:face.bounds]];
    }
}
- (void)captureOutput:(AVCaptureOutput *)output didOutputMetadataObjects:(NSArray<__kindof AVMetadataObject *> *)metadataObjects fromConnection:(AVCaptureConnection *)connection
{
    // Called whenever faces are detected in the stream.
    _currentMetadata = metadataObjects;
}
- (UIImage *)imageFromPixelBuffer:(CMSampleBufferRef)p {
    CVImageBufferRef buffer = CMSampleBufferGetImageBuffer(p);
    CVPixelBufferLockBaseAddress(buffer, 0);

    uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(buffer);
    size_t width = CVPixelBufferGetWidth(buffer);
    size_t height = CVPixelBufferGetHeight(buffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);

    // The buffer is 32BGRA, so draw it little-endian with alpha first.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef cgContext = CGBitmapContextCreate(base, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);

    CGImageRef cgImage = CGBitmapContextCreateImage(cgContext);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(cgContext);
    CVPixelBufferUnlockBaseAddress(buffer, 0);
    return image;
}
- (UIImageView *)cameraView
{
    if (!_cameraView) {
        _cameraView = [[UIImageView alloc] initWithFrame:self.view.bounds];
        // Fill the view while preserving the aspect ratio.
        _cameraView.contentMode = UIViewContentModeScaleAspectFill;
    }
    return _cameraView;
}
Things to note
1. Configure the output (setVideoSettings, etc.) only after it has been added to the session.
2. Info.plist must declare the camera permission: Privacy - Camera Usage Description.
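In source form, the Info.plist entry behind "Privacy - Camera Usage Description" is the NSCameraUsageDescription key; the string value is your own wording:

```xml
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to detect faces in the live preview.</string>
```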
At this point we have the video stream, but nothing is displayed yet. Next we will use OpenCV to draw the face rectangles onto the frames and display the processed images through a UIImageView.
Drawing the face rectangles onto the displayed video stream
(1) Conversion
First we need a way to convert a CMSampleBufferRef into a UIImage; the imageFromPixelBuffer: method above does exactly that. (You could also convert the CMSampleBufferRef directly into a cv::Mat.)
(2) Drawing
We keep processing the stream in the AVCaptureVideoDataOutputSampleBufferDelegate callback; since we already have the face rectangles, we simply draw them onto each frame.

- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    NSMutableArray *bounds = [NSMutableArray arrayWithCapacity:0];
    for (AVMetadataFaceObject *faceobject in self.currentMetadata) {
        AVMetadataObject *face = [output transformedMetadataObjectForMetadataObject:faceobject connection:connection];
        [bounds addObject:[NSValue valueWithCGRect:face.bounds]];
    }

    // Convert the sample buffer to a UIImage...
    UIImage *image = [self imageFromPixelBuffer:sampleBuffer];
    cv::Mat mat;
    // ...and the UIImage to a cv::Mat.
    UIImageToMat(image, mat);

    for (NSValue *rect in bounds) {
        CGRect r = [rect CGRectValue];
        // Draw the face rectangle.
        cv::rectangle(mat, cv::Rect(r.origin.x, r.origin.y, r.size.width, r.size.height), cv::Scalar(255, 0, 0, 1));
    }

    // Performance aside, we just assign the processed image directly here.
    dispatch_async(dispatch_get_main_queue(), ^{
        self.cameraView.image = MatToUIImage(mat);
    });
}
One thing I almost forgot: the linked dependency frameworks. Don't forget to add them, and if this helped, give it a like!