Graphics Architecture


This document describes the essential elements of Android's "system-level" graphics architecture, and how it is used by the application framework and multimedia system. The focus is on how buffers of graphical data move through the system. If you've ever wondered why SurfaceView and TextureView behave the way they do, or how Surface and EGLSurface interact, you've come to the right place.


Some familiarity with Android devices and application development is assumed. You don't need detailed knowledge of the app framework, and very few API calls will be mentioned, but the material herein doesn't overlap much with other public documentation. The goal here is to provide a sense for the significant events involved in rendering a frame for output, so that you can make informed choices when designing an application. To achieve this, we work from the bottom up, describing how the UI classes work rather than how they can be used.


We start with an explanation of Android's graphics buffers, describe the composition and display mechanism, and then proceed to the higher-level mechanisms that supply the compositor with data.


This document is chiefly concerned with the system as it exists in Android 4.4 ("KitKat"). Earlier versions of the system worked differently, and future versions will likely be different as well. Version-specific features are called out in a few places.


At various points I will refer to source code from the AOSP sources or from Grafika. Grafika is a Google open-source project for testing; it's more "quick hack" than solid example code, but it will suffice.

BufferQueue and gralloc

To understand how Android's graphics system works, we have to start behind the scenes. At the heart of everything graphical in Android is a class called BufferQueue. Its role is simple enough: connect something that generates buffers of graphical data (the "producer") to something that accepts the data for display or further processing (the "consumer"). The producer and consumer can live in different processes. Nearly everything that moves buffers of graphical data through the system relies on BufferQueue.


The basic usage is straightforward. The producer requests a free buffer (dequeueBuffer()), specifying a set of characteristics including width, height, pixel format, and usage flags. The producer populates the buffer and returns it to the queue (queueBuffer()). Sometime later, the consumer acquires the buffer (acquireBuffer()) and makes use of the buffer contents. When the consumer is done, it returns the buffer to the queue (releaseBuffer()).

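The dequeue/queue/acquire/release cycle can be sketched with a toy, single-process model. The class and method names below mirror the real calls but are purely illustrative; the actual BufferQueue is a C++ class with cross-process plumbing:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of the BufferQueue handoff (not the real Android C++ class):
// the producer dequeues a free buffer, fills it, and queues it; the
// consumer acquires it, uses it, and releases it back to the free list.
class MiniBufferQueue {
    private final Queue<int[]> free = new ArrayDeque<>();
    private final Queue<int[]> queued = new ArrayDeque<>();

    MiniBufferQueue(int bufferCount, int pixelCount) {
        for (int i = 0; i < bufferCount; i++) free.add(new int[pixelCount]);
    }

    // Producer side: dequeueBuffer() / queueBuffer()
    int[] dequeueBuffer() { return free.poll(); }      // null if none free
    void queueBuffer(int[] buf) { queued.add(buf); }

    // Consumer side: acquireBuffer() / releaseBuffer()
    int[] acquireBuffer() { return queued.poll(); }    // null if nothing queued
    void releaseBuffer(int[] buf) { free.add(buf); }

    public static void main(String[] args) {
        MiniBufferQueue q = new MiniBufferQueue(2, 4);
        int[] b = q.dequeueBuffer();   // producer gets a free buffer
        b[0] = 0xFF0000FF;             // ...fills it...
        q.queueBuffer(b);              // ...and hands it to the queue
        int[] c = q.acquireBuffer();   // consumer takes the same buffer
        System.out.println(c == b);    // true: the buffer moved by reference
        q.releaseBuffer(c);            // back on the free list for reuse
    }
}
```

Note that the buffer object itself moves between the lists; nothing is copied, which foreshadows the handle-passing point below.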

Most recent Android devices support the "sync framework". This allows the system to do some nifty things when combined with hardware components that can manipulate graphics data asynchronously. For example, a producer can submit a series of OpenGL ES drawing commands and then enqueue the output buffer before rendering completes. The buffer is accompanied by a fence that signals when the contents are ready. A second fence accompanies the buffer when it is returned to the free list, so that the consumer can release the buffer while the contents are still in use. This approach improves latency and throughput as the buffers move through the system.

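The fence idea can be illustrated with a small, hypothetical model that uses a CompletableFuture as the fence; none of this is the actual sync framework API, it only shows the "hand over early, wait before reading" contract:

```java
import java.util.concurrent.CompletableFuture;

// Toy illustration of a sync-framework fence (names are invented):
// the producer hands over a buffer before rendering completes, attaching
// a fence; the consumer may hold the buffer early but must wait on the
// fence before reading its contents.
class FencedBuffer {
    final int[] pixels;
    final CompletableFuture<Void> fence = new CompletableFuture<>();

    FencedBuffer(int size) { pixels = new int[size]; }

    public static void main(String[] args) throws Exception {
        FencedBuffer buf = new FencedBuffer(4);

        // "GPU" thread: finishes rendering later, then signals the fence.
        Thread gpu = new Thread(() -> {
            buf.pixels[0] = 42;        // late write, still "in flight"
            buf.fence.complete(null);  // contents are now ready
        });
        gpu.start();

        // Consumer: already holds the buffer, but blocks on the fence
        // before touching the contents.
        buf.fence.get();
        System.out.println(buf.pixels[0]); // 42, guaranteed by the fence
        gpu.join();
    }
}
```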

Some characteristics of the queue, such as the maximum number of buffers it can hold, are determined jointly by the producer and the consumer.


The BufferQueue is responsible for allocating buffers as it needs them. Buffers are retained unless the characteristics change; for example, if the producer starts requesting buffers with a different size, the old buffers will be freed and new buffers will be allocated on demand.


The data structure is currently always created and "owned" by the consumer. In Android 4.3 only the producer side was "binderized", i.e. the producer could be in a remote process but the consumer had to live in the process where the queue was created. This evolved a bit in 4.4, moving toward a more general implementation.


Buffer contents are never copied by BufferQueue. Moving that much data around would be very inefficient. Instead, buffers are always passed by handle.


Gralloc HAL

The actual buffer allocations are performed through a memory allocator called "gralloc", which is implemented through a vendor-specific HAL interface (see hardware/libhardware/include/hardware/gralloc.h). The alloc() function takes the arguments you'd expect -- width, height, pixel format -- as well as a set of usage flags. Those flags merit closer attention.


The gralloc allocator is not just another way to allocate memory on the native heap. In some situations, the allocated memory may not be cache-coherent, or could be totally inaccessible from user space. The nature of the allocation is determined by the usage flags, which include attributes like:

• how often the memory will be accessed from software (CPU)
• how often the memory will be accessed from hardware (GPU)
• whether the memory will be used as an OpenGL ES ("GLES") texture
• whether the memory will be used by a video encoder


For example, if your format specifies RGBA 8888 pixels, and you indicate the buffer will be accessed from software -- meaning your application will touch pixels directly -- then the allocator needs to create a buffer with 4 bytes per pixel in R-G-B-A order. If instead you say the buffer will only be accessed from hardware and as a GLES texture, the allocator can do anything the GLES driver wants -- BGRA ordering, non-linear "swizzled" layouts, alternative color formats, etc. Allowing the hardware to use its preferred format can improve performance. Some values cannot be combined on certain platforms. For example, the "video encoder" flag may require YUV pixels, so adding "software access" and specifying RGBA 8888 would fail. The handle returned by the gralloc allocator can be passed between processes through Binder.

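As a hedged sketch, the flag-driven allocation decision might be modeled like this. The Usage enum and canAllocate() are invented for illustration; the real interface is the C gralloc HAL, and which combinations fail is platform-specific:

```java
import java.util.EnumSet;

// Sketch of gralloc-style usage flags. The names loosely echo the HAL's
// GRALLOC_USAGE_* constants but this enum is an illustration, not the
// real interface. The allocator inspects the flags and may reject
// combinations the platform can't satisfy.
class GrallocSketch {
    enum Usage { SW_READ, SW_WRITE, HW_TEXTURE, HW_VIDEO_ENCODER }

    // Pretend this platform's video encoder only accepts YUV buffers.
    static boolean canAllocate(String pixelFormat, EnumSet<Usage> usage) {
        if (usage.contains(Usage.HW_VIDEO_ENCODER)
                && !pixelFormat.startsWith("YUV")) {
            return false; // e.g. RGBA_8888 + video encoder fails here
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(canAllocate("RGBA_8888",
                EnumSet.of(Usage.SW_WRITE, Usage.HW_TEXTURE)));        // true
        System.out.println(canAllocate("RGBA_8888",
                EnumSet.of(Usage.SW_WRITE, Usage.HW_VIDEO_ENCODER)));  // false
        System.out.println(canAllocate("YUV_420_888",
                EnumSet.of(Usage.HW_VIDEO_ENCODER)));                  // true
    }
}
```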

SurfaceFlinger and Hardware Composer

Having buffers of graphical data is wonderful, but life is even better when you get to see them on your device's screen. That's where SurfaceFlinger and the Hardware Composer HAL come in.


SurfaceFlinger's role is to accept buffers of data from multiple sources, composite them, and send them to the display. Once upon a time this was done with software blitting to a hardware framebuffer (e.g. /dev/graphics/fb0), but those days are long gone.


When an app comes to the foreground, the WindowManager service asks SurfaceFlinger for a drawing surface. SurfaceFlinger creates a "layer" - the primary component of which is a BufferQueue - for which SurfaceFlinger acts as the consumer. A Binder object for the producer side is passed through the WindowManager to the app, which can then start sending frames directly to SurfaceFlinger. (Note: The WindowManager uses the term "window" instead of "layer" for this and uses "layer" to mean something else. We're going to use the SurfaceFlinger terminology. It can be argued that SurfaceFlinger should really be called LayerFlinger.)


For most apps, there will be three layers on screen at any time: the "status bar" at the top of the screen, the "navigation bar" at the bottom or side, and the application's UI. Some apps will have more or less, e.g. the default home app has a separate layer for the wallpaper, while a full-screen game might hide the status bar. Each layer can be updated independently. The status and navigation bars are rendered by a system process, while the app layers are rendered by the app, with no coordination between the two.


Device displays refresh at a certain rate, typically 60 frames per second on phones and tablets. If the display contents are updated mid-refresh, "tearing" will be visible; so it's important to update the contents only between cycles. The system receives a signal from the display when it's safe to update the contents. For historical reasons we'll call this the VSYNC signal.


The refresh rate may vary over time, e.g. some mobile devices will range from 58 to 62fps depending on current conditions. For an HDMI-attached television, this could theoretically dip to 24 or 48Hz to match a video. Because we can update the screen only once per refresh cycle, submitting buffers for display at 200fps would be a waste of effort as most of the frames would never be seen. Instead of taking action whenever an app submits a buffer, SurfaceFlinger wakes up when the display is ready for something new.


When the VSYNC signal arrives, SurfaceFlinger walks through its list of layers looking for new buffers. If it finds a new one, it acquires it; if not, it continues to use the previously-acquired buffer. SurfaceFlinger always wants to have something to display, so it will hang on to one buffer. If no buffers have ever been submitted on a layer, the layer is ignored.
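The latching rule just described amounts to a short decision per layer, sketched here with hypothetical names:

```java
// Minimal model of SurfaceFlinger's per-VSYNC latching rule: take a
// newly queued buffer if there is one, otherwise keep showing the
// buffer acquired last time. (Illustrative only.)
class LatchSketch {
    static int[] onVsync(int[] newlyQueued, int[] current) {
        return (newlyQueued != null) ? newlyQueued : current;
    }

    public static void main(String[] args) {
        int[] frameA = {1}, frameB = {2};
        int[] shown = onVsync(frameA, null);   // new buffer: latch it
        shown = onVsync(null, shown);          // nothing new: repeat frameA
        System.out.println(shown == frameA);   // true
        shown = onVsync(frameB, shown);        // next frame arrives
        System.out.println(shown == frameB);   // true
    }
}
```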


Once SurfaceFlinger has collected all of the buffers for visible layers, it asks the Hardware Composer how composition should be performed.


Hardware Composer

The Hardware Composer HAL ("HWC") was first introduced in Android 3.0 ("Honeycomb") and has evolved steadily over the years. Its primary purpose is to determine the most efficient way to composite buffers with the available hardware. As a HAL, its implementation is device-specific and usually implemented by the display hardware OEM.


The value of this approach is easy to recognize when you consider "overlay planes." The purpose of overlay planes is to composite multiple buffers together, but in the display hardware rather than the GPU. For example, suppose you have a typical Android phone in portrait orientation, with the status bar on top and navigation bar at the bottom, and app content everywhere else. The contents for each layer are in separate buffers. You could handle composition by rendering the app content into a scratch buffer, then rendering the status bar over it, then rendering the navigation bar on top of that, and finally passing the scratch buffer to the display hardware. Or, you could pass all three buffers to the display hardware, and tell it to read data from different buffers for different parts of the screen. The latter approach can be significantly more efficient.


As you might expect, the capabilities of different display processors vary significantly. The number of overlays, whether layers can be rotated or blended, and restrictions on positioning and overlap can be difficult to express through an API. So, the HWC works like this:

  1. SurfaceFlinger provides the HWC with a full list of layers, and asks, "how do you want to handle this?"
  2. The HWC responds by marking each layer as "overlay" or "GLES composition."
  3. SurfaceFlinger takes care of any GLES composition, passing the output buffer to HWC, and lets HWC handle the rest.

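A deliberately naive model of steps 1-2, assuming a fixed budget of overlay planes; real HWC implementations apply vendor-specific rules far beyond a simple count:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the HWC "prepare" decision, assuming the only constraint is
// a fixed number of overlay planes: the first N layers get an overlay,
// the rest fall back to GLES composition. Purely illustrative.
class HwcPrepareSketch {
    static final int OVERLAY_PLANES = 4;

    static List<String> prepare(List<String> layers) {
        List<String> decisions = new ArrayList<>();
        for (int i = 0; i < layers.size(); i++) {
            decisions.add(i < OVERLAY_PLANES ? "overlay" : "GLES");
        }
        return decisions;
    }

    public static void main(String[] args) {
        List<String> layers = List.of("StatusBar", "NavigationBar",
                "App UI", "SurfaceView", "Toast");
        System.out.println(prepare(layers));
        // [overlay, overlay, overlay, overlay, GLES]
    }
}
```

With five layers and four planes, the fifth layer spills into GLES composition, which is exactly the power/performance cliff discussed below.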

Since the decision-making code can be custom tailored by the hardware vendor, it's possible to get the best performance out of every device.


Overlay planes may be less efficient than GL composition when nothing on the screen is changing. This is particularly true when the overlay contents have transparent pixels, and overlapping layers are being blended together. In such cases, the HWC can choose to request GLES composition for some or all layers and retain the composited buffer. If SurfaceFlinger comes back again asking to composite the same set of buffers, the HWC can just continue to show the previously-composited scratch buffer. This can improve the battery life of an idle device.


Devices shipping with Android 4.4 ("KitKat") typically support four overlay planes. Attempting to composite more layers than there are overlays will cause the system to use GLES composition for some of them; so the number of layers used by an application can have a measurable impact on power consumption and performance.


You can see exactly what SurfaceFlinger is up to with the command adb shell dumpsys SurfaceFlinger. The output is verbose. The part most relevant to our current discussion is the HWC summary that appears near the bottom of the output:


    type    |          source crop              |           frame           name
        HWC | [    0.0,    0.0,  320.0,  240.0] | [   48,  411, 1032, 1149] SurfaceView
        HWC | [    0.0,   75.0, 1080.0, 1776.0] | [    0,   75, 1080, 1776]
        HWC | [    0.0,    0.0, 1080.0,   75.0] | [    0,    0, 1080,   75] StatusBar
        HWC | [    0.0,    0.0, 1080.0,  144.0] | [    0, 1776, 1080, 1920] NavigationBar
  FB TARGET | [    0.0,    0.0, 1080.0, 1920.0] | [    0,    0, 1080, 1920] HWC_FRAMEBUFFER_TARGET

This tells you what layers are on screen, whether they're being handled with overlays ("HWC") or OpenGL ES composition ("GLES"), and gives you a bunch of other facts you probably won't care about ("handle" and "hints" and "flags" and other stuff that we've trimmed out of the snippet above). The "source crop" and "frame" values will be examined more closely later on.

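To pick the composition type and layer name out of rows like the ones above, a small parser might look like this; the column layout (type | source crop | frame plus name) matches the sample output shown here, though it can vary across Android versions:

```java
// Parses one row of the HWC summary from dumpsys SurfaceFlinger output
// in the format shown above: "type | [source crop] | [frame] name".
// Illustrative; the exact format is not a stable interface.
class HwcDumpParser {
    static String[] parseRow(String row) {
        String[] parts = row.split("\\|");
        String type = parts[0].trim();
        // The last column is "[frame rect] name"; strip the bracketed rect.
        String name = parts[2].replaceAll("^\\s*\\[[^\\]]*\\]", "").trim();
        return new String[] { type, name };
    }

    public static void main(String[] args) {
        String row = "        HWC | [    0.0,    0.0, 1080.0,   75.0] "
                   + "| [    0,    0, 1080,   75] StatusBar";
        String[] parsed = parseRow(row);
        System.out.println(parsed[0] + " -> " + parsed[1]); // HWC -> StatusBar
    }
}
```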

The FB_TARGET layer is where GLES composition output goes. Since all layers shown above are using overlays, FB_TARGET isn't being used for this frame. The layer's name is indicative of its original role: On a device with /dev/graphics/fb0 and no overlays, all composition would be done with GLES, and the output would be written to the framebuffer. On recent devices there generally is no simple framebuffer, so the FB_TARGET layer is a scratch buffer. (Note: This is why screen grabbers written for old versions of Android no longer work: They're trying to read from The Framebuffer, but there is no such thing.)


The overlay planes have another important role: they're the only way to display DRM content. DRM-protected buffers cannot be accessed by SurfaceFlinger or the GLES driver, which means that your video will disappear if HWC switches to GLES composition.


The Need for Triple-Buffering

To avoid tearing on the display, the system needs to be double-buffered: the front buffer is displayed while the back buffer is being prepared. At VSYNC, if the back buffer is ready, you quickly switch them. This works reasonably well in a system where you're drawing directly into the framebuffer, but there's a hitch in the flow when a composition step is added. Because of the way SurfaceFlinger is triggered, our double-buffered pipeline will have a bubble.

Suppose frame N is being displayed, and frame N+1 has been acquired by SurfaceFlinger for display on the next VSYNC. (Assume frame N is composited with an overlay, so we can't alter the buffer contents until the display is done with it.) When VSYNC arrives, HWC flips the buffers. While the app is starting to render frame N+2 into the buffer that used to hold frame N, SurfaceFlinger is scanning the layer list, looking for updates. SurfaceFlinger won't find any new buffers, so it prepares to show frame N+1 again after the next VSYNC. A little while later, the app finishes rendering frame N+2 and queues it for SurfaceFlinger, but it's too late. This has effectively cut our maximum frame rate in half.


We can fix this with triple-buffering. Just before VSYNC, frame N is being displayed, frame N+1 has been composited (or scheduled for an overlay) and is ready to be displayed, and frame N+2 is queued up and ready to be acquired by SurfaceFlinger. When the screen flips, the buffers rotate through the stages with no bubble. The app has just less than a full VSYNC period (16.7ms at 60fps) to do its rendering and queue the buffer. And SurfaceFlinger / HWC has a full VSYNC period to figure out the composition before the next flip. The downside is that it takes at least two VSYNC periods for anything that the app does to appear on the screen. As the latency increases, the device feels less responsive to touch input.

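The bubble, and the way a third buffer removes it, can be checked with a small discrete simulation of the ordering described above: at each VSYNC the display flips to SurfaceFlinger's acquired buffer (if any), SurfaceFlinger then scans for newly queued buffers, and the app's render for that period finishes after the scan. This is a simplification (every stage takes exactly one period), not SurfaceFlinger's actual scheduler:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Discrete per-VSYNC simulation of the pipeline: flip, then SF scan,
// then the app renders into a free buffer (too late for this scan).
class BufferingSim {
    static int newFramesShown(int bufferCount, int vsyncs) {
        Deque<Integer> free = new ArrayDeque<>(), queued = new ArrayDeque<>();
        for (int i = 1; i < bufferCount; i++) free.add(i);
        Integer displayed = 0, acquired = null;  // buffer 0 starts on screen
        int shown = 0;
        for (int t = 0; t < vsyncs; t++) {
            if (acquired != null) {                  // flip at VSYNC
                free.add(displayed);                 // old front is released
                displayed = acquired;
                acquired = null;
                shown++;                             // a new frame appears
            }
            if (!queued.isEmpty()) acquired = queued.poll(); // SF scan
            if (!free.isEmpty()) queued.add(free.poll());    // app renders late
        }
        return shown;
    }

    public static void main(String[] args) {
        System.out.println(newFramesShown(2, 12)); // double buffering: 5
        System.out.println(newFramesShown(3, 12)); // triple buffering: 10
    }
}
```

With two buffers the simulation shows a new frame only every other VSYNC; with three, a new frame lands nearly every VSYNC once the pipeline fills.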

SurfaceFlinger with BufferQueue


Figure 1. SurfaceFlinger + BufferQueue

The diagram above depicts the flow of SurfaceFlinger and BufferQueue. During frame:

1. red buffer fills up, then slides into BufferQueue
2. after red buffer leaves app, blue buffer slides in, replacing it
3. green buffer and systemUI shadow-slide into HWC (showing that SurfaceFlinger still has the buffers, but now HWC has prepared them for display via overlay on the next VSYNC)

The blue buffer is referenced by both the display and the BufferQueue. The app is not allowed to render to it until the associated sync fence signals.


On VSYNC, all of these happen at once:

1. Red buffer leaps into SurfaceFlinger, replacing green buffer
2. Green buffer leaps into Display, replacing blue buffer, and a dotted-line green twin appears in the BufferQueue
3. The blue buffer's fence is signaled, and the blue buffer in App empties
4. Display rect changes from <blue + SystemUI> to <green + SystemUI>


The System UI process is providing the status and nav bars, which for our purposes here aren’t changing, so SurfaceFlinger keeps using the previously-acquired buffer. In practice there would be two separate buffers, one for the status bar at the top, one for the navigation bar at the bottom, and they would be sized to fit their contents. Each would arrive on its own BufferQueue.


The buffer doesn’t actually “empty”; if you submit it without drawing on it you’ll get that same blue again. The emptying is the result of clearing the buffer contents, which the app should do before it starts drawing.


We can reduce the latency by noting that layer composition should not require a full VSYNC period. If composition is performed by overlays, it takes essentially zero CPU and GPU time. But we can't count on that, so we need to allow a little time. If the app starts rendering halfway between VSYNC signals, and SurfaceFlinger defers the HWC setup until a few milliseconds before the signal is due to arrive, we can cut the latency from 2 frames to perhaps 1.5. In theory you could render and composite in a single period, allowing a return to double-buffering; but getting it down that far is difficult on current devices. Minor fluctuations in rendering and composition time, and switching from overlays to GLES composition, can cause us to miss a swap deadline and repeat the previous frame.


SurfaceFlinger's buffer handling demonstrates the fence-based buffer management mentioned earlier. If we're animating at full speed, we need to have an acquired buffer for the display ("front") and an acquired buffer for the next flip ("back"). If we're showing the buffer on an overlay, the contents are being accessed directly by the display and must not be touched. But if you look at an active layer's BufferQueue state in the dumpsys SurfaceFlinger output, you'll see one acquired buffer, one queued buffer, and one free buffer. That's because, when SurfaceFlinger acquires the new "back" buffer, it releases the current "front" buffer to the queue. The "front" buffer is still in use by the display, so anything that dequeues it must wait for the fence to signal before drawing on it. So long as everybody follows the fencing rules, all of the queue-management IPC requests can happen in parallel with the display.


Virtual Displays

SurfaceFlinger supports a "primary" display, i.e. what's built into your phone or tablet, and an "external" display, such as a television connected through HDMI. It also supports a number of "virtual" displays, which make composited output available within the system. Virtual displays can be used to record the screen or send it over a network.


Virtual displays may share the same set of layers as the main display (the "layer stack") or have their own. There is no VSYNC for a virtual display, so the VSYNC for the primary display is used to trigger composition for all displays.


In the past, virtual displays were always composited with GLES. The Hardware Composer managed composition for only the primary display. In Android 4.4, the Hardware Composer gained the ability to participate in virtual display composition.


As you might expect, the frames generated for a virtual display are written to a BufferQueue.


Surface and SurfaceHolder

The Surface class has been part of the public API since 1.0. Its description simply says, "Handle onto a raw buffer that is being managed by the screen compositor." The statement was accurate when initially written but falls well short of the mark on a modern system.


The Surface represents the producer side of a buffer queue that is often (but not always!) consumed by SurfaceFlinger. When you render onto a Surface, the result ends up in a buffer that gets shipped to the consumer. A Surface is not simply a raw chunk of memory you can scribble on.


The BufferQueue for a display Surface is typically configured for triple-buffering; but buffers are allocated on demand. So if the producer generates buffers slowly enough -- maybe it's animating at 30fps on a 60fps display -- there might only be two allocated buffers in the queue. This helps minimize memory consumption. You can see a summary of the buffers associated with every layer in the dumpsys SurfaceFlinger output.


Canvas Rendering

Once upon a time, all rendering was done in software, and you can still do this today. The low-level implementation is provided by the Skia graphics library. If you want to draw a rectangle, you make a library call, and it sets bytes in a buffer appropriately. To ensure that a buffer isn't updated by two clients at once, or written to while being displayed, you have to lock the buffer to access it. lockCanvas() locks the buffer and returns a Canvas to use for drawing, and unlockCanvasAndPost() unlocks the buffer and sends it to the compositor.


As time went on, and devices with general-purpose 3D engines appeared, Android reoriented itself around OpenGL ES. However, it was important to keep the old API working, for apps as well as app framework code, so an effort was made to hardware-accelerate the Canvas API. As you can see from the charts on the Hardware Acceleration page, this was a bit of a bumpy ride. Note in particular that while the Canvas provided to a View's onDraw() method may be hardware-accelerated, the Canvas obtained when an app locks a Surface directly with lockCanvas() never is.


When you lock a Surface for Canvas access, the "CPU renderer" connects to the producer side of the BufferQueue and does not disconnect until the Surface is destroyed. Most other producers (like GLES) can be disconnected and reconnected to a Surface, but the Canvas-based "CPU renderer" cannot. This means you can't draw on a surface with GLES or send it frames from a video decoder if you've ever locked it for a Canvas.


The first time the producer requests a buffer from a BufferQueue, it is allocated and initialized to zeroes. Initialization is necessary to avoid inadvertently sharing data between processes. When you re-use a buffer, however, the previous contents will still be present. If you repeatedly call lockCanvas() and unlockCanvasAndPost() without drawing anything, you'll cycle between previously-rendered frames.

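A toy model of this reuse behavior; lockCanvas() and unlockCanvasAndPost() here are stand-ins for the Android calls, reduced to a ring of pixel arrays:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Models the behavior described above: buffers are zero-filled on first
// allocation, but re-used buffers keep their previous contents, so
// lock/unlock cycles without drawing rotate through old frames.
class CanvasReuseSketch {
    private final Deque<int[]> ring = new ArrayDeque<>();
    private final int capacity;

    CanvasReuseSketch(int capacity) { this.capacity = capacity; }

    int[] lockCanvas() {
        // Allocate (zero-filled) on demand; otherwise reuse the oldest
        // posted buffer, previous contents intact.
        return ring.size() < capacity ? new int[1] : ring.poll();
    }

    void unlockCanvasAndPost(int[] buf) { ring.add(buf); }

    public static void main(String[] args) {
        CanvasReuseSketch s = new CanvasReuseSketch(2);
        int[] a = s.lockCanvas(); a[0] = 111; s.unlockCanvasAndPost(a); // frame 1
        int[] b = s.lockCanvas(); b[0] = 222; s.unlockCanvasAndPost(b); // frame 2
        // Lock/unlock twice more without drawing anything:
        int[] c = s.lockCanvas(); s.unlockCanvasAndPost(c);
        int[] d = s.lockCanvas(); s.unlockCanvasAndPost(d);
        System.out.println(c[0] + " " + d[0]); // 111 222 -- old frames cycle back
    }
}
```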

The Surface lock/unlock code keeps a reference to the previously-rendered buffer. If you specify a dirty region when locking the Surface, it will copy the non-dirty pixels from the previous buffer. There's a fair chance the buffer will be handled by SurfaceFlinger or HWC; but since we need to only read from it, there's no need to wait for exclusive access.


The main non-Canvas way for an application to draw directly on a Surface is through OpenGL ES. That's described in the EGLSurface and OpenGL ES section.



Some things that work with Surfaces want a SurfaceHolder, notably SurfaceView. The original idea was that Surface represented the raw compositor-managed buffer, while SurfaceHolder was managed by the app and kept track of higher-level information like the dimensions and format. The Java-language definition mirrors the underlying native implementation. It's arguably no longer useful to split it this way, but it has long been part of the public API.


Generally speaking, anything having to do with a View will involve a SurfaceHolder. Some other APIs, such as MediaCodec, will operate on the Surface itself. You can easily get the Surface from the SurfaceHolder, so hang on to the latter when you have it.


APIs to get and set Surface parameters, such as the size and format, are implemented through SurfaceHolder.


EGLSurface and OpenGL ES

OpenGL ES defines an API for rendering graphics. It does not define a windowing system. To allow GLES to work on a variety of platforms, it is designed to be combined with a library that knows how to create and access windows through the operating system. The library used for Android is called EGL. If you want to draw textured polygons, you use GLES calls; if you want to put your rendering on the screen, you use EGL calls.


Before you can do anything with GLES, you need to create a GL context. In EGL, this means creating an EGLContext and an EGLSurface. GLES operations apply to the current context, which is accessed through thread-local storage rather than passed around as an argument. This means you have to be careful about which thread your rendering code executes on, and which context is current on that thread.

在使用GLES做任何事之前,你需要创建一个GL上下文。具体到EGL,这意味着创建一个EGLContext和一个EGLSurface。GLES的操作作用于当前上下文,而当前上下文是通过线程局部存储(thread-local storage)访问的,并不作为参数传递。这意味着你必须留意渲染代码运行在哪个线程上,以及该线程上的当前上下文是什么。

The EGLSurface can be an off-screen buffer allocated by EGL (called a "pbuffer") or a window allocated by the operating system. EGL window surfaces are created with the eglCreateWindowSurface() call. It takes a "window object" as an argument, which on Android can be a SurfaceView, a SurfaceTexture, a SurfaceHolder, or a Surface -- all of which have a BufferQueue underneath. When you make this call, EGL creates a new EGLSurface object, and connects it to the producer interface of the window object's BufferQueue. From that point onward, rendering to that EGLSurface results in a buffer being dequeued, rendered into, and queued for use by the consumer. (The term "window" is indicative of the expected use, but bear in mind the output might not be destined to appear on the display.)

EGLSurface可以是一个由EGL分配的离屏缓冲区("pbuffer"),或者一个由操作系统分配的窗口。EGL window surface由eglCreateWindowSurface()函数创建。它接受一个"窗口对象"作为参数,在Android上,这个对象可以是SurfaceView、SurfaceTexture、SurfaceHolder或者Surface---所有这些下面都有一个BufferQueue。当你调用这个函数时,EGL创建一个新的EGLSurface对象,并把它连接到窗口对象的BufferQueue的生产者接口上。从这一刻起,向这个EGLSurface渲染会使一个buffer经历出队、被渲染、入队供消费者使用的过程。("window"一词表明了预期的用途,但要记住,输出不一定会显示在屏幕上。)

EGL does not provide lock/unlock calls. Instead, you issue drawing commands and then call eglSwapBuffers() to submit the current frame. The method name comes from the traditional swap of front and back buffers, but the actual implementation may be very different.
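Put together, the EGL path described above looks roughly like this. This is a minimal sketch, assuming the android.opengl.EGL14 bindings and GLES 2.0; "surface" stands for whichever window object you have (a Surface, a SurfaceView's Surface, a SurfaceTexture...), and all error checking is omitted:

```java
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;

// Sketch only: run on the thread that will own the context.
EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
int[] version = new int[2];
EGL14.eglInitialize(display, version, 0, version, 1);

int[] configAttribs = {
        EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
        EGL14.EGL_NONE };
EGLConfig[] configs = new EGLConfig[1];
int[] numConfigs = new int[1];
EGL14.eglChooseConfig(display, configAttribs, 0, configs, 0, 1, numConfigs, 0);

int[] contextAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
EGLContext context = EGL14.eglCreateContext(display, configs[0],
        EGL14.EGL_NO_CONTEXT, contextAttribs, 0);

// 'surface' is the window object -- EGL connects to the producer
// side of its BufferQueue here.
EGLSurface eglSurface = EGL14.eglCreateWindowSurface(display, configs[0],
        surface, new int[] { EGL14.EGL_NONE }, 0);
EGL14.eglMakeCurrent(display, eglSurface, eglSurface, context);

// ... issue GLES drawing calls ...
EGL14.eglSwapBuffers(display, eglSurface);  // queue the frame to the consumer
```

The eglSwapBuffers() call at the end is what actually queues the rendered buffer for the consumer.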


Only one EGLSurface can be associated with a Surface at a time -- you can have only one producer connected to a BufferQueue -- but if you destroy the EGLSurface it will disconnect from the BufferQueue and allow something else to connect.


A given thread can switch between multiple EGLSurfaces by changing what's "current." An EGLSurface must be current on only one thread at a time.


The most common mistake when thinking about EGLSurface is assuming that it is just another aspect of Surface (like SurfaceHolder). It's a related but independent concept. You can draw on an EGLSurface that isn't backed by a Surface, and you can use a Surface without EGL. EGLSurface just gives GLES a place to draw.



The public Surface class is implemented in the Java programming language. The equivalent in C/C++ is the ANativeWindow class, semi-exposed by the Android NDK. You can get the ANativeWindow from a Surface with the ANativeWindow_fromSurface() call. Just like its Java-language cousin, you can lock it, render in software, and unlock-and-post.

公共的Surface类是用Java语言实现的。C/C++中与之对应的是ANativeWindow类,它在Android NDK中半公开地暴露出来。你可以通过ANativeWindow_fromSurface()从Surface获得ANativeWindow。就像它的Java表亲一样,你可以对它lock、用软件渲染,然后unlock-and-post。

To create an EGL window surface from native code, you pass an instance of EGLNativeWindowType to eglCreateWindowSurface(). EGLNativeWindowType is just a synonym for ANativeWindow, so you can freely cast one to the other.

为了从本地代码中创建一个EGL window surface,你需要给eglCreateWindowSurface()方法传递一个EGLNativeWindowType实例。EGLNativeWindowType等同于ANativeWindow,所以你可以在二者之间自由的转换。

The fact that the basic "native window" type just wraps the producer side of a BufferQueue should not come as a surprise.

事实上,native window的本质不过是BufferQueue在生产者一侧的包装罢了。

SurfaceView and GLSurfaceView

Now that we've explored the lower-level components, it's time to see how they fit into the higher-level components that apps are built from.


The Android app framework UI is based on a hierarchy of objects that start with View. Most of the details don't matter for this discussion, but it's helpful to understand that UI elements go through a complicated measurement and layout process that fits them into a rectangular area. All visible View objects are rendered to a SurfaceFlinger-created Surface that was set up by the WindowManager when the app was brought to the foreground. The layout and rendering is performed on the app's UI thread.

Android app framework的UI基于以View为起点的对象层次结构。大多数细节与本文讨论无关,但有必要了解:UI元素要经过复杂的测量(measure)和布局(layout)过程,最终被放进一个矩形区域。所有可见的View对象都被渲染到同一个Surface上,这个Surface是app转到前台时由WindowManager设置、由SurfaceFlinger创建的。布局和渲染都在app的UI线程上执行。

Regardless of how many Layouts and Views you have, everything gets rendered into a single buffer. This is true whether or not the Views are hardware-accelerated.


A SurfaceView takes the same sorts of parameters as other views, so you can give it a position and size, and fit other elements around it. When it comes time to render, however, the contents are completely transparent. The View part of a SurfaceView is just a see-through placeholder.


When the SurfaceView's View component is about to become visible, the framework asks the WindowManager to ask SurfaceFlinger to create a new Surface. (This doesn't happen synchronously, which is why you should provide a callback that notifies you when the Surface creation finishes.) By default, the new Surface is placed behind the app UI Surface, but the default "Z-ordering" can be overridden to put the Surface on top.

当SurfaceView的View组件即将变为可见时,framework要求WindowManager请求SurfaceFlinger创建一个新的Surface。(这个过程是异步的,所以你应该提供一个回调,在Surface创建完成时得到通知。)缺省情况下,新的Surface位于app UI Surface的后面,但可以覆盖默认的Z轴顺序,把这个Surface放到前面。

Whatever you render onto this Surface will be composited by SurfaceFlinger, not by the app. This is the real power of SurfaceView: the Surface you get can be rendered by a separate thread or a separate process, isolated from any rendering performed by the app UI, and the buffers go directly to SurfaceFlinger. You can't totally ignore the UI thread -- you still have to coordinate with the Activity lifecycle, and you may need to adjust something if the size or position of the View changes -- but you have a whole Surface all to yourself, and blending with the app UI and other layers is handled by the Hardware Composer.

渲染在这个Surface(SurfaceView的surface)上的内容将由Surfaceflinger来混合,而不是由app。这才是SurfaceView的真正作用:这个surface可以被一个独立的线程或者进程来渲染,和app UI上其他的渲染工作分开,这些缓冲区数据将直接传递给Surfaceflinger。当然你不能完全忽略UI线程---你依然要和activity的生命周期保持一致,并且一旦view的大小或者位置发生了改变,你可能也需要做些调整—但是你拥有了一个完整的Surface,并且这个Surface和app UI以及其他layer的混合工作将由HWC来完成。

It's worth taking a moment to note that this new Surface is the producer side of a BufferQueue whose consumer is a SurfaceFlinger layer. You can update the Surface with any mechanism that can feed a BufferQueue. You can: use the Surface-supplied Canvas functions, attach an EGLSurface and draw on it with GLES, and configure a MediaCodec video decoder to write to it.

值得一提的是,新创建的surface实际上是生产者端,而消费者端则是一个Surfaceflinger的layer。你可以通过任何可以填充BufferQueue的途径来更新这个surface。你可以:使用Surface提供的Canvas相关的函数,附加一个EGLSurface然后使用GLES在上面绘制,配置一个MediaCodec 视频解码器直接在上面写数据。
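As a concrete example of the first of those options, the Canvas path might look like this. This is a sketch; "holder" is assumed to be the SurfaceHolder obtained from SurfaceView#getHolder(), and is only valid between the surfaceCreated() and surfaceDestroyed() callbacks:

```java
// Software drawing onto a SurfaceView's Surface via Canvas.
Canvas canvas = holder.lockCanvas();        // dequeues a buffer from the BufferQueue
if (canvas != null) {                       // null if the Surface isn't ready
    try {
        canvas.drawColor(Color.BLACK);
        // ... draw the frame ...
    } finally {
        holder.unlockCanvasAndPost(canvas); // queues the buffer to SurfaceFlinger
    }
}
```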

Composition and the Hardware Scaler

Now that we have a bit more context, it's useful to go back and look at a couple of fields from dumpsys SurfaceFlinger that we skipped over earlier on. Back in the Hardware Composer discussion, we looked at some output like this:

我们现在有了更多的上下文知识,所以让我们回去看看前面讲到dumpsys SurfaceFlinger时我们忽略的几个字段。我们来看看如下的几个输出:

    type    |          source crop              |           frame           name
        HWC | [    0.0,    0.0,  320.0,  240.0] | [   48,  411, 1032, 1149] SurfaceView
        HWC | [    0.0,   75.0, 1080.0, 1776.0] | [    0,   75, 1080, 1776]
        HWC | [    0.0,    0.0, 1080.0,   75.0] | [    0,    0, 1080,   75] StatusBar
        HWC | [    0.0,    0.0, 1080.0,  144.0] | [    0, 1776, 1080, 1920] NavigationBar
  FB TARGET | [    0.0,    0.0, 1080.0, 1920.0] | [    0,    0, 1080, 1920] HWC_FRAMEBUFFER_TARGET

This was taken while playing a movie in Grafika's "Play video (SurfaceView)" activity, on a Nexus 5 in portrait orientation. Note that the list is ordered from back to front: the SurfaceView's Surface is in the back, the app UI layer sits on top of that, followed by the status and navigation bars that are above everything else. The video is QVGA (320x240).

这是在Nexus 5上以竖屏模式运行Grafika的"Play video (SurfaceView)" activity播放视频时抓取的。注意列表是按从后到前的顺序排列的:SurfaceView的Surface在最后面,app UI layer在它上面,再往前是状态栏和导航栏。视频是QVGA(320x240)的。

The "source crop" indicates the portion of the Surface's buffer that SurfaceFlinger is going to display. The app UI was given a Surface equal to the full size of the display (1080x1920), but there's no point rendering and compositing pixels that will be obscured by the status and navigation bars, so the source is cropped to a rectangle that starts 75 pixels from the top, and ends 144 pixels from the bottom. The status and navigation bars have smaller Surfaces, and the source crop describes a rectangle that begins at the top left (0,0) and spans their content.

"source crop"指示了Surface的buffer中将被SurfaceFlinger显示的部分。App UI拿到的Surface和整个显示一样大(1080x1920),但渲染并合成那些会被状态栏和导航栏遮住的像素毫无意义,所以source被裁剪成一个矩形:从顶部往下75个像素开始,到底部往上144个像素结束。状态栏和导航栏的Surface较小,它们的source crop是从左上角(0,0)开始、覆盖各自内容的矩形。

The "frame" is the rectangle where the pixels end up on the display. For the app UI layer, the frame matches the source crop, because we're copying (or overlaying) a portion of a display-sized layer to the same location in another display-sized layer. For the status and navigation bars, the size of the frame rectangle is the same, but the position is adjusted so that the navigation bar appears at the bottom of the screen.

Frame一栏是指这些像素最终落在屏幕上的矩形。对于app UI layer,frame和source crop相同,因为我们是把一个显示大小的layer的一部分拷贝(或overlay)到另一个显示大小的layer的相同位置。对于状态栏和导航栏,frame矩形的大小和source crop相同,但位置做了调整,使导航栏出现在屏幕底部。

Now consider the layer labeled "SurfaceView", which holds our video content. The source crop matches the video size, which SurfaceFlinger knows because the MediaCodec decoder (the buffer producer) is dequeuing buffers that size. The frame rectangle has a completely different size -- 984x738.

现在我们来看下SurfaceView这个layer,这个layer里面是视频的内容。source crop一列和视频的大小一致,SurfaceFlinger之所以知道这个信息是因为MediaCodec解码器申请的出队的buffer的大小就是这么大。而Frame则有一个完全不同的大小:984*738.

SurfaceFlinger handles size differences by scaling the buffer contents to fill the frame rectangle, upscaling or downscaling as needed. This particular size was chosen because it has the same aspect ratio as the video (4:3), and is as wide as possible given the constraints of the View layout (which includes some padding at the edges of the screen for aesthetic reasons).

SurfaceFlinger通过缩放缓冲区内容来填满frame矩形,从而处理这种大小差异,按需要放大或缩小。之所以选择这个尺寸,是因为它和视频有相同的宽高比(4:3),并且在View layout的限制下尽可能宽(出于美观考虑,屏幕边缘留了一些padding)。

If you started playing a different video on the same Surface, the underlying BufferQueue would reallocate buffers to the new size automatically, and SurfaceFlinger would adjust the source crop. If the aspect ratio of the new video is different, the app would need to force a re-layout of the View to match it, which causes the WindowManager to tell SurfaceFlinger to update the frame rectangle.

如果在同一个Surface上播放另一个视频,底层的BufferQueue会自动按新尺寸重新分配缓冲区,SurfaceFlinger也会调整source crop。如果新视频的宽高比不同,app需要强制View重新layout来匹配,这会让WindowManager通知SurfaceFlinger更新frame矩形。

If you're rendering on the Surface through some other means, perhaps GLES, you can set the Surface size using the SurfaceHolder#setFixedSize() call. You could, for example, configure a game to always render at 1280x720, which would significantly reduce the number of pixels that must be touched to fill the screen on a 2560x1440 tablet or 4K television. The display processor handles the scaling. If you don't want to letter- or pillar-box your game, you could adjust the game's aspect ratio by setting the size so that the narrow dimension is 720 pixels, but the long dimension is set to maintain the aspect ratio of the physical display (e.g. 1152x720 to match a 2560x1600 display). You can see an example of this approach in Grafika's "Hardware scaler exerciser" activity.

如果你通过其他方式(比如GLES)在Surface上渲染,可以调用SurfaceHolder#setFixedSize()来设置Surface的大小。例如,可以把一个游戏配置为始终以1280x720渲染,这样在2560x1440的平板或4K电视上,需要填充的像素数会显著减少,缩放由显示处理器完成。如果不想让游戏出现上下或左右黑边,可以调整游戏的宽高比:把窄边设为720像素,长边按物理屏幕的宽高比来设置(例如对2560x1600的屏幕用1152x720)。Grafika的"Hardware scaler exerciser" activity演示了这种做法。
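The arithmetic behind that last suggestion is straightforward. Here is a hypothetical helper (the name and signature are ours, not an Android API) that picks a render size for SurfaceHolder#setFixedSize(), keeping the narrow dimension fixed while matching the physical display's aspect ratio:

```java
// Hypothetical helper: choose a render size whose narrow dimension is
// 'narrow' pixels and whose aspect ratio matches the physical display,
// so the hardware scaler can fill the screen without letter/pillar-boxing.
public class FixedSize {
    public static int[] choose(int displayW, int displayH, int narrow) {
        int longSide = Math.max(displayW, displayH);
        int shortSide = Math.min(displayW, displayH);
        // Scale the long side by the same factor applied to the short side.
        int scaledLong = Math.round(narrow * (float) longSide / shortSide);
        return displayW >= displayH
                ? new int[] { scaledLong, narrow }   // landscape display
                : new int[] { narrow, scaledLong };  // portrait display
    }
}
```

For a 2560x1600 display this yields 1152x720, matching the example in the text; the result is what you would pass to setFixedSize().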


The GLSurfaceView class provides some helper classes that help manage EGL contexts, inter-thread communication, and interaction with the Activity lifecycle. That's it. You do not need to use a GLSurfaceView to use GLES.


For example, GLSurfaceView creates a thread for rendering and configures an EGL context there. The state is cleaned up automatically when the activity pauses. Most apps won't need to know anything about EGL to use GLES with GLSurfaceView.

举例来说,GLSurfaceView创建了一个用来渲染和管理EGL上下文的线程。当activity pause时,它会自动清空所有的状态。大多数应用通过GLSurfaceView来使用GLES时,不需要了解任何和EGL有关的事情。

In most cases, GLSurfaceView is very helpful and can make working with GLES easier. In some situations, it can get in the way. Use it if it helps, don't if it doesn't.



The SurfaceTexture class is a relative newcomer, added in Android 3.0 ("Honeycomb"). Just as SurfaceView is the combination of a Surface and a View, SurfaceTexture is the combination of a Surface and a GLES texture. Sort of.

SurfaceTexture类是从Android 3.0开始引入的。就像是SurfaceView是一个Surface和view的组合一样,SurfaceTexture某种程度上是一个Surface和GLES材质的组合。

When you create a SurfaceTexture, you are creating a BufferQueue for which your app is the consumer. When a new buffer is queued by the producer, your app is notified via callback (onFrameAvailable()). Your app calls updateTexImage(), which releases the previously-held buffer, acquires the new buffer from the queue, and makes some EGL calls to make the buffer available to GLES as an "external" texture.
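The consumer-side flow just described might be wired up like this. This is a sketch: "texId" is assumed to come from glGenTextures(), "glHandler" is a hypothetical Handler bound to the thread that owns the EGL context, and "texMatrix" is a float[16] you allocate:

```java
// App-side consumer flow with SurfaceTexture.  updateTexImage() must run
// on the thread whose EGL context owns the texture, so the callback only
// signals that thread instead of doing the work itself.
SurfaceTexture surfaceTexture = new SurfaceTexture(texId);
surfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        // Called on an arbitrary thread.
        glHandler.post(() -> {
            st.updateTexImage();               // release old buffer, acquire new one
            st.getTransformMatrix(texMatrix);  // per-buffer texture transform
            long timestampNs = st.getTimestamp();
            // ... render using the GL_TEXTURE_EXTERNAL_OES texture 'texId' ...
        });
    }
});
```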


External textures (GL_TEXTURE_EXTERNAL_OES) are not quite the same as textures created by GLES (GL_TEXTURE_2D). You have to configure your renderer a bit differently, and there are things you can't do with them. But the key point is this: You can render textured polygons directly from the data received by your BufferQueue.


You may be wondering how we can guarantee the format of the data in the buffer is something GLES can recognize -- gralloc supports a wide variety of formats. When SurfaceTexture created the BufferQueue, it set the consumer's usage flags to GRALLOC_USAGE_HW_TEXTURE, ensuring that any buffer created by gralloc would be usable by GLES.

你可能会好奇,我们如何保证buffer中的数据格式是GLES能识别的---要知道gralloc支持各种各样的格式。SurfaceTexture在创建BufferQueue时,把消费者的usage flags设成了GRALLOC_USAGE_HW_TEXTURE,以保证gralloc创建的任何buffer都能被GLES使用。

Because SurfaceTexture interacts with an EGL context, you have to be careful to call its methods from the correct thread. This is spelled out in the class documentation.

因为SurfaceTexture要和EGL上下文交互,因此务必保证调用的方法来自正确的线程,这个在类的说明文档中已经指出(SurfaceTexture objects may be created on any thread. updateTexImage() may only be called on the thread with the OpenGL ES context that contains the texture object. The frame-available callback is called on an arbitrary thread, so unless special care is taken updateTexImage() should not be called directly from the callback.)。

If you look deeper into the class documentation, you will see a couple of odd calls. One retrieves a timestamp, the other a transformation matrix, the value of each having been set by the previous call to updateTexImage(). It turns out that BufferQueue passes more than just a buffer handle to the consumer. Each buffer is accompanied by a timestamp and transformation parameters.


The transformation is provided for efficiency. In some cases, the source data might be in the "wrong" orientation for the consumer; but instead of rotating the data before sending it, we can send the data in its current orientation with a transform that corrects it. The transformation matrix can be merged with other transformations at the point the data is used, minimizing overhead.

Transformation参数是出于效率考虑而提供的。在一些情况下,源数据对消费者来说方向可能是"错误"的;与其在发送前旋转数据,不如按当前方向发送数据,并附带一个可以纠正方向的transform。变换矩阵可以在数据被使用的那一刻与其他变换合并,把开销降到最低。

The timestamp is useful for certain buffer sources. For example, suppose you connect the producer interface to the output of the camera (with setPreviewTexture()). If you want to create a video, you need to set the presentation time stamp for each frame; but you want to base that on the time when the frame was captured, not the time when the buffer was received by your app. The timestamp provided with the buffer is set by the camera code, resulting in a more consistent series of timestamps.


SurfaceTexture and Surface

If you look closely at the API you'll see the only way for an application to create a plain Surface is through a constructor that takes a SurfaceTexture as the sole argument. (Prior to API 11, there was no public constructor for Surface at all.) This might seem a bit backward if you view SurfaceTexture as a combination of a Surface and a texture.


Under the hood, SurfaceTexture is called GLConsumer, which more accurately reflects its role as the owner and consumer of a BufferQueue. When you create a Surface from a SurfaceTexture, what you're doing is creating an object that represents the producer side of the SurfaceTexture's BufferQueue.



Figure 2. Grafika's continuous capture activity

In the diagram above, the arrows show the propagation of the data from the camera. BufferQueues are in color (purple producer, cyan consumer). Note “Camera” actually lives in the mediaserver process.

Encoded H.264 video goes to a circular buffer in RAM in the app process, and is written to an MP4 file on disk using the MediaMuxer class when the “capture” button is hit.
如上图所示,箭头显示了数据从相机开始的传输路径。图中BufferQueue用颜色标出(紫色是生产者,青色是消费者)。注意Camera实际上位于mediaserver进程中。编码后的H.264视频进入app进程内存中的一个环形缓冲区,当按下"capture"按钮时,再通过MediaMuxer类写入磁盘上的MP4文件。

All three of the BufferQueues are handled with a single EGL context in the app, and the GLES operations are performed on the UI thread. Doing the SurfaceView rendering on the UI thread is generally discouraged, but since we're doing simple operations that are handled asynchronously by the GLES driver we should be fine. (If the video encoder locks up and we block trying to dequeue a buffer, the app will become unresponsive. But at that point, we're probably failing anyway.) The handling of the encoded data -- managing the circular buffer and writing it to disk -- is performed on a separate thread.


The bulk of the configuration happens in the SurfaceView's surfaceCreated() callback. The EGLContext is created, and EGLSurfaces are created for the display and for the video encoder. When a new frame arrives, we tell SurfaceTexture to acquire it and make it available as a GLES texture, then render it with GLES commands on each EGLSurface (forwarding the transform and timestamp from SurfaceTexture). The encoder thread pulls the encoded output from MediaCodec and stashes it in memory.

大量的配置工作发生在SurfaceView的surfaceCreated()回调里。EGLContext在这里被创建,用于显示和视频编码器的EGLSurface也在这里创建。当新的一帧到来时,我们让SurfaceTexture去acquire它并使之可以作为GLES纹理使用,然后用GLES命令在每个EGLSurface上渲染(转发来自SurfaceTexture的transform和timestamp)。编码器线程从MediaCodec取出编码后的输出并暂存在内存里。


The TextureView class was introduced in Android 4.0 ("Ice Cream Sandwich"). It's the most complex of the View objects discussed here, combining a View with a SurfaceTexture.

TextureView类是由Android 4.0引入的。它是我们目前介绍过的最复杂的类对象,由一个View混合了一个SurfaceTexture而成。

Recall that the SurfaceTexture is a "GL consumer", consuming buffers of graphics data and making them available as textures. TextureView wraps a SurfaceTexture, taking over the responsibility of responding to the callbacks and acquiring new buffers. The arrival of new buffers causes TextureView to issue a View invalidate request. When asked to draw, the TextureView uses the contents of the most recently received buffer as its data source, rendering wherever and however the View state indicates it should.

回忆一下,SurfaceTexture是一个"GL consumer",消费图形数据缓冲区并使其可以作为纹理使用。TextureView包装了一个SurfaceTexture,接管了响应回调和acquire新缓冲区的工作。新缓冲区的到来会使TextureView发出View重绘(invalidate)请求。当被要求绘制时,TextureView把最近收到的缓冲区内容作为数据源,按照View状态的要求在相应的位置以相应的方式渲染。

You can render on a TextureView with GLES just as you would SurfaceView. Just pass the SurfaceTexture to the EGL window creation call. However, doing so exposes a potential problem.

你可以在TextureView上使用GLES渲染,就像你在SurfaceView上做的一样。只需要在EGL window创建时,把SurfaceTexture传递过去。但是,这样做会暴露一个潜在的问题。

In most of what we've looked at, the BufferQueues have passed buffers between different processes. When rendering to a TextureView with GLES, both producer and consumer are in the same process, and they might even be handled on a single thread. Suppose we submit several buffers in quick succession from the UI thread. The EGL buffer swap call will need to dequeue a buffer from the BufferQueue, and it will stall until one is available. There won't be any available until the consumer acquires one for rendering, but that also happens on the UI thread… so we're stuck.


The solution is to have BufferQueue ensure there is always a buffer available to be dequeued, so the buffer swap never stalls. One way to guarantee this is to have BufferQueue discard the contents of the previously-queued buffer when a new buffer is queued, and to place restrictions on minimum buffer counts and maximum acquired buffer counts. (If your queue has three buffers, and all three buffers are acquired by the consumer, then there's nothing to dequeue and the buffer swap call must hang or fail. So we need to prevent the consumer from acquiring more than two buffers at once.) Dropping buffers is usually undesirable, so it's only enabled in specific situations, such as when the producer and consumer are in the same process.
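The drop-on-queue policy can be illustrated with a toy model. This is not the real BufferQueue implementation, just a sketch of the idea: queue() replaces a still-pending buffer instead of stacking a second one behind it, so the producer always finds a free buffer to dequeue:

```java
import java.util.ArrayDeque;

// Toy model of a BufferQueue in frame-dropping mode.  Not the real code:
// it only shows why replacing a stale pending buffer on queue() keeps a
// slot free, so the producer's next dequeue never stalls.
public class DroppingQueue {
    private final ArrayDeque<Integer> free = new ArrayDeque<>();
    private Integer pending;  // at most one buffer waiting for the consumer

    public DroppingQueue(int bufferCount) {
        for (int i = 0; i < bufferCount; i++) free.add(i);
    }

    public Integer dequeue() {       // producer grabs an empty buffer
        return free.poll();          // null would mean "stall"
    }

    public void queue(int buf) {     // producer submits a filled buffer
        if (pending != null) free.add(pending);  // drop the stale frame
        pending = buf;
    }

    public Integer acquire() {       // consumer takes the latest frame
        Integer b = pending;
        pending = null;
        return b;
    }

    public void release(int buf) {   // consumer is done; buffer is free again
        free.add(buf);
    }
}
```

With only two buffers, queueing twice in a row drops the first frame back onto the free list, and the producer can immediately dequeue again instead of blocking.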



We hope this page has provided useful insights into the way Android handles graphics at the system level.

