Vision Framework Deep Dive (5) — Photo Stacking with Vision and Metal in iOS (Part 2)

Version History

Version  Date
V1.0  2019.07.23 (Tuesday)

Preface

iOS 11+ and macOS 10.13+ introduced the Vision framework, which provides face detection, object detection, object tracking, and related capabilities. It is built on top of Core ML and can be considered part of Apple's machine-learning stack. Over the next several articles we will examine the Vision framework in detail. If you are interested, see the articles below.
1. Vision Framework Deep Dive (1) — Basic Overview (Part 1)
2. Vision Framework Deep Dive (2) — Vision-Based Face Detection (Part 1)
3. Vision Framework Deep Dive (3) — Vision-Based Face Detection (Part 2)
4. Vision Framework Deep Dive (4) — Photo Stacking with Vision and Metal in iOS (Part 1)

Source Code

1. Swift

First, a look at the project structure.

Next, the contents of the storyboard.

And now the code.

1. RecordButton.swift
import UIKit

@IBDesignable
class RecordButton: UIButton {
  var progress: CGFloat = 0.0 {
    didSet {
      DispatchQueue.main.async {
        self.setNeedsDisplay()
      }
    }
  }
  
  override func draw(_ rect: CGRect) {
    // General Declarations
    let context = UIGraphicsGetCurrentContext()!
    
    // Resize to Target Frame
    context.saveGState()    
    
    context.translateBy(x: bounds.minX, y: bounds.minY)
    context.scaleBy(x: bounds.width / 218, y: bounds.height / 218)

    // Color Declarations
    let red = UIColor(red: 0.949, green: 0.212, blue: 0.227, alpha: 1.000)
    let white = UIColor(red: 0.996, green: 1.000, blue: 1.000, alpha: 1.000)
    
    // Variable Declarations
    let expression: CGFloat = -progress * 360
    
    // Button Drawing
    let buttonPath = UIBezierPath(ovalIn: CGRect(x: 26, y: 26, width: 166, height: 166))
    red.setFill()
    buttonPath.fill()
    
    
    // Ring Background Drawing
    let ringBackgroundPath = UIBezierPath(ovalIn: CGRect(x: 8.5, y: 8.5, width: 200, height: 200))
    white.setStroke()
    ringBackgroundPath.lineWidth = 19
    ringBackgroundPath.lineCapStyle = .round
    ringBackgroundPath.stroke()
    
    
    // Progress Ring Drawing
    let progressRingRect = CGRect(x: 8.5, y: 8.5, width: 200, height: 200)
    let progressRingPath = UIBezierPath()
    progressRingPath.addArc(
      withCenter: CGPoint(x: progressRingRect.midX, y: progressRingRect.midY),
      radius: progressRingRect.width / 2,
      startAngle: -90 * CGFloat.pi / 180,
      endAngle: -(expression + 90) * CGFloat.pi / 180,
      clockwise: true)
    
    red.setStroke()
    progressRingPath.lineWidth = 19
    progressRingPath.lineCapStyle = .round
    progressRingPath.stroke()
    
    context.restoreGState()
  }
  
  func resetProgress() {
    progress = 0.0
  }
}
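The progress ring is drawn from a fixed start angle of -90° (12 o'clock), and `expression` maps `progress` to a clockwise sweep so that `progress == 1.0` closes the full circle. A small sketch of that angle math on scalars (the `endAngleDegrees` helper is hypothetical, for illustration only; the real code works in radians via `CGFloat.pi/180`):

```swift
import Foundation

// Hypothetical helper mirroring the angle math in draw(_:).
// endAngle in degrees: -(expression + 90) where expression = -progress * 360,
// which simplifies to 360 * progress - 90.
func endAngleDegrees(progress: Double) -> Double {
  let expression = -progress * 360
  return -(expression + 90)
}

print(endAngleDegrees(progress: 0.0))  // -90.0 (same as startAngle: no arc)
print(endAngleDegrees(progress: 0.5))  // 90.0 (half circle)
print(endAngleDegrees(progress: 1.0))  // 270.0 (full circle)
```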
2. AverageStacking.metal
#include <metal_stdlib>
using namespace metal;
#include <CoreImage/CoreImage.h>

extern "C" { namespace coreimage {
  float4 avgStacking(sample_t currentStack, sample_t newImage, float stackCount) {
    float4 avg = ((currentStack * stackCount) + newImage) / (stackCount + 1.0);
    avg = float4(avg.rgb, 1);
    return avg;
  }
}}
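The kernel computes an incremental (running) average: `currentStack` holds the mean of `stackCount` frames, and blending in one new frame updates that mean without keeping all frames in memory. A quick Swift sketch of the same math on plain scalars (the `stackAverage` helper is hypothetical, for illustration only; it shows that folding frames this way yields the ordinary mean):

```swift
import Foundation

// Hypothetical helper mirroring the math inside avgStacking:
// newAvg = (currentAvg * count + newValue) / (count + 1)
func stackAverage(current: Double, newValue: Double, count: Double) -> Double {
  return (current * count + newValue) / (count + 1.0)
}

// Folding a sequence of per-pixel values this way reproduces the plain mean.
let values: [Double] = [10, 20, 30, 40]
var avg = values[0]
for (i, v) in values.dropFirst().enumerated() {
  avg = stackAverage(current: avg, newValue: v, count: Double(i + 1))
}
print(avg)  // 25.0 — the mean of 10, 20, 30, 40
```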
3. AverageStackingFilter.swift
import CoreImage

class AverageStackingFilter: CIFilter {
  let kernel: CIBlendKernel
  
  var inputCurrentStack: CIImage?
  var inputNewImage: CIImage?
  var inputStackCount = 1.0
  
  override init() {
    guard let url = Bundle.main.url(forResource: "default", withExtension: "metallib") else {
      fatalError("Check your build settings.")
    }
    
    do {
      let data = try Data(contentsOf: url)
      kernel = try CIBlendKernel(functionName: "avgStacking", fromMetalLibraryData: data)
    } catch {
      print(error.localizedDescription)
      fatalError("Make sure the function names match")
    }
    
    super.init()
  }
    
  required init?(coder aDecoder: NSCoder) {
    fatalError("init(coder:) has not been implemented")
  }

  func outputImage() -> CIImage? {
    guard
      let inputCurrentStack = inputCurrentStack,
      let inputNewImage = inputNewImage
      else {
        return nil
    }
    return kernel.apply(extent: inputCurrentStack.extent, arguments: [inputCurrentStack, inputNewImage, inputStackCount])
  }
}
4. CIImageExtension.swift
import CoreImage

extension CIImage {
  func cgImage() -> CGImage? {
    if cgImage != nil {
      return cgImage
    }
    return CIContext().createCGImage(self, from: extent)
  }
}
5. CameraViewController.swift
import AVFoundation
import UIKit

class CameraViewController: UIViewController {
  @IBOutlet var previewView: UIView!
  @IBOutlet var containerView: UIView!
  @IBOutlet var combinedImageView: UIImageView!
  @IBOutlet var recordButton: RecordButton!
  var previewLayer: AVCaptureVideoPreviewLayer!
  let session = AVCaptureSession()
  var saver: ImageSaver?
  let imageProcessor = ImageProcessor()
  var isRecording = false
  let maxFrameCount = 20
  
  override func viewDidLoad() {
    super.viewDidLoad()
    containerView.isHidden = true
    configureCaptureSession()
    session.startRunning()
  }
}

// MARK: - Configuration Methods

extension CameraViewController {
  func configureCaptureSession() {
    guard let camera = AVCaptureDevice.default(for: .video) else {
      fatalError("No video camera available")
    }
    do {
      let cameraInput = try AVCaptureDeviceInput(device: camera)
      session.addInput(cameraInput)
      try camera.lockForConfiguration()
      camera.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 5)
      camera.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 5)
      camera.unlockForConfiguration()
    } catch {
      fatalError(error.localizedDescription)
    }
    // Define where the video output should go
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "video data queue"))
    //videoOutput.alwaysDiscardsLateVideoFrames = true
    videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    // Add the video output to the capture session
    session.addOutput(videoOutput)
    let videoConnection = videoOutput.connection(with: .video)
    videoConnection?.videoOrientation = .portrait
    // Configure the preview layer
    previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.videoGravity = .resizeAspectFill
    previewLayer.frame = view.bounds
    previewView.layer.addSublayer(previewLayer)
  }
}

// MARK: - UI Methods

extension CameraViewController {
  @IBAction func recordTapped(_ sender: UIButton) {
    recordButton.isEnabled = false
    isRecording = true
    saver = ImageSaver()
  }
  
  @IBAction func closeButtonTapped(_ sender: UIButton) {
    containerView.isHidden = true
    recordButton.isEnabled = true
    session.startRunning()
  }

  func stopRecording() {
    isRecording = false
    recordButton.progress = 0.0
  }
  
  func displayCombinedImage(_ image: CIImage) {
    session.stopRunning()
    combinedImageView.image = UIImage(ciImage: image)
    containerView.isHidden = false
  }
}

// MARK: - Capture Video Data Delegate Methods

extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
  func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if !isRecording {
      return
    }
    guard
      let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
      let cgImage = CIImage(cvImageBuffer: imageBuffer).cgImage()
      else {
        return
    }
    let image = CIImage(cgImage: cgImage)
    imageProcessor.add(image)
    saver?.write(image)
    let currentFrame = recordButton.progress * CGFloat(maxFrameCount)
    recordButton.progress = (currentFrame + 1.0) / CGFloat(maxFrameCount)
    if recordButton.progress >= 1.0 {
      stopRecording()
      imageProcessor.processFrames(completion: displayCombinedImage)
    }
  }
}
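Each delivered frame advances the record button's progress by `1/maxFrameCount`, so with `maxFrameCount = 20` recording stops after exactly 20 frames (at 5 fps, about 4 seconds). A standalone sketch of that counting logic, using `Double` in place of `CGFloat`:

```swift
import Foundation

// Sketch of the frame-counting logic from captureOutput(_:didOutput:from:):
// recover the frame index from progress, bump it, and renormalize.
let maxFrameCount = 20
var progress: Double = 0.0
var framesUntilDone = 0
while progress < 1.0 {
  let currentFrame = progress * Double(maxFrameCount)
  progress = (currentFrame + 1.0) / Double(maxFrameCount)
  framesUntilDone += 1
}
print(framesUntilDone)  // 20
```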
6. ImageProcessor.swift
import CoreImage
import Vision

class ImageProcessor {
  var frameBuffer: [CIImage] = []
  var alignedFrameBuffer: [CIImage] = []
  var completion: ((CIImage) -> Void)?
  var isProcessingFrames = false
  var frameCount: Int {
    return frameBuffer.count
  }
  
  func add(_ frame: CIImage) {
    if isProcessingFrames {
      return
    }
    frameBuffer.append(frame)
  }
  
  func processFrames(completion: ((CIImage) -> Void)?) {
    isProcessingFrames = true
    self.completion = completion
    let firstFrame = frameBuffer.removeFirst()
    alignedFrameBuffer.append(firstFrame)
    for frame in frameBuffer {
      let request = VNTranslationalImageRegistrationRequest(targetedCIImage: frame)
      do {
        let sequenceHandler = VNSequenceRequestHandler()
        try sequenceHandler.perform([request], on: firstFrame)
      } catch {
        print(error.localizedDescription)
      }
      alignImages(request: request, frame: frame)
    }
    combineFrames()
  }
  
  func alignImages(request: VNRequest, frame: CIImage) {
    guard
      let results = request.results as? [VNImageTranslationAlignmentObservation],
      let result = results.first
      else {
        return
    }
    let alignedFrame = frame.transformed(by: result.alignmentTransform)
    alignedFrameBuffer.append(alignedFrame)
  }
  
  func combineFrames() {
    var finalImage = alignedFrameBuffer.removeFirst()
    let filter = AverageStackingFilter()
    for (i, image) in alignedFrameBuffer.enumerated() {
      filter.inputCurrentStack = finalImage
      filter.inputNewImage = image
      filter.inputStackCount = Double(i + 1)
      finalImage = filter.outputImage()!
    }
    cleanup(image: finalImage)
  }
  
  func cleanup(image: CIImage) {
    frameBuffer = []
    alignedFrameBuffer = []
    isProcessingFrames = false
    if let completion = completion {
      DispatchQueue.main.async {
        completion(image)
      }
    }
    completion = nil
  }
}
7. ImageSaver.swift
import CoreImage

struct ImageSaver {
  var count = 0
  let url: URL
  
  init() {
    let uuid = UUID().uuidString
    let urls = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
    url = urls[0].appendingPathComponent(uuid)
    try? FileManager.default.createDirectory(at: url, withIntermediateDirectories: false, attributes: nil)
  }
  
  mutating func write(_ image: CIImage, as name: String? = nil) {
    guard let colorSpace = CGColorSpace(name: CGColorSpace.sRGB) else {
      return
    }
    let context = CIContext()
    let lossyOption = kCGImageDestinationLossyCompressionQuality as CIImageRepresentationOption
    let imgURL: URL
    if let name = name {
      imgURL = url.appendingPathComponent("\(name).jpg")
    } else {
      imgURL = url.appendingPathComponent("\(count).jpg")
    }
    try? context.writeJPEGRepresentation(of: image,
                                         to: imgURL,
                                         colorSpace: colorSpace,
                                         options: [lossyOption: 0.9])
    count += 1
  }
}

Afterword

This article covered photo stacking with Vision and Metal in iOS. If you found it interesting, please leave a like or follow~~~
