The Android Drawing Mechanism


1. Preface

I have always been interested in Android's drawing mechanism. Books and blog posts had given me a rough idea of SurfaceFlinger, but I never dug any deeper. Recently a project of mine happened to involve some OpenGL-related work, so I took the opportunity to study how Android actually draws (based on the 7.0 source code).

2. Surface: The App's Drawing Board

Think back to drawing lessons as a kid: the teacher would always hand out a sheet of paper for us to draw on. Likewise, if an app wants to draw anything, it has to be given a drawing board of its own. In Android, that drawing board is the Surface.

So the question is: when are we handed this drawing board? In other words, when is an app's Surface created? Start from the basics: the most fundamental building blocks of Android development are the four components, and the one responsible for UI is the Activity. An Activity's UI is built from Views, and all the Views in an Activity form a view tree. Every tree has a root, and the root of an Activity's view hierarchy is ViewRootImpl.

ViewRootImpl has a member called mSurface, which plays the role of the Java-layer Surface. I deliberately say "Java-layer" here, because the upper-layer Surface is merely a puppet: the real Surface lives in C++, and the Java-layer Surface only stores a pointer to the C++ Surface. So how does the Java-layer Surface get connected to its C++ counterpart?

When ViewRootImpl performs relayoutWindow(), it calls mWindowSession.relayout() and passes in its own mSurface. That call ends up in WindowManagerService.relayoutWindow(), which eventually reaches WindowManagerService.createSurfaceControl():

    private int createSurfaceControl(Surface outSurface, int result, WindowState win,
            WindowStateAnimator winAnimator) {
        ......
        WindowSurfaceController surfaceController = winAnimator.createSurfaceLocked();
        if (surfaceController != null) {
            surfaceController.getSurface(outSurface);
        } else {
            outSurface.release();
        }
        return result;
    }

Here a WindowSurfaceController is created first, and then WindowSurfaceController.getSurface(outSurface) is called:

    void getSurface(Surface outSurface) {
        outSurface.copyFrom(mSurfaceControl);
    }

The outSurface here is exactly the mSurface in ViewRootImpl, and the key lies in Surface.copyFrom():

    public void copyFrom(SurfaceControl other) {
        ......
        long surfaceControlPtr = other.mNativeObject;
        ......
        long newNativeObject = nativeCreateFromSurfaceControl(surfaceControlPtr);
        synchronized (mLock) {
            ......
            setNativeObjectLocked(newNativeObject);
        }
    }

copyFrom() calls nativeCreateFromSurfaceControl(surfaceControlPtr) to obtain the pointer to the C++-layer Surface:

//frameworks/base/core/jni/android_view_Surface.cpp
static jlong nativeCreateFromSurfaceControl(JNIEnv* env, jclass clazz,
        jlong surfaceControlNativeObj) {
    sp<SurfaceControl> ctrl(reinterpret_cast<SurfaceControl *>(surfaceControlNativeObj));
    sp<Surface> surface(ctrl->getSurface());
    if (surface != NULL) {
        surface->incStrong(&sRefBaseOwner);
    }
    return reinterpret_cast<jlong>(surface.get());
}

As you can see, the C++ layer calls SurfaceControl::getSurface() to obtain the Surface object:

//frameworks/native/libs/gui/SurfaceControl.cpp
sp<Surface> SurfaceControl::getSurface() const
{
    Mutex::Autolock _l(mLock);
    if (mSurfaceData == 0) {
        mSurfaceData = new Surface(mGraphicBufferProducer, false);
    }
    return mSurfaceData;
}

At this point the origin of the Activity's Surface is finally accounted for, and we can at last draw onto this drawing board.

3. Surface and Buffers

Above we compared app drawing to painting on a drawing board, but unlike painting, drawing UI really means writing the UI's pixel data. That data is eventually rendered to the screen, where each pixel displays the value written at its coordinates. So how do we write data to the screen? Remember that Android is based on Linux, and Linux has a framebuffer: as long as we write data into that framebuffer, the system will, at an appropriate moment, take the data out and render it on the screen. So can an app simply write its UI data straight into the framebuffer?
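
To make the idea concrete, here is a hedged sketch of what writing pixels straight into the Linux framebuffer device looks like (plain /dev/fb0 usage; the device path and the 32-bit pixel format are assumptions, and nothing in this snippet is specific to this article's code):

#include <fcntl.h>
#include <linux/fb.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    int fd = open("/dev/fb0", O_RDWR);            // framebuffer device node
    if (fd < 0) return 1;

    fb_var_screeninfo vinfo;                      // resolution, bits per pixel
    fb_fix_screeninfo finfo;                      // bytes per scanline
    ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);
    ioctl(fd, FBIOGET_FSCREENINFO, &finfo);

    size_t size = finfo.line_length * vinfo.yres;
    uint8_t* fb = (uint8_t*)mmap(nullptr, size, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { close(fd); return 1; }

    // Paint the top 100 rows white, assuming 32 bits per pixel.
    for (uint32_t y = 0; y < 100 && y < vinfo.yres; ++y)
        for (uint32_t x = 0; x < vinfo.xres; ++x)
            *(uint32_t*)(fb + y * finfo.line_length + x * 4) = 0xFFFFFFFF;

    munmap(fb, size);
    close(fd);
    return 0;
}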

In practice this is not workable: an Android system has many apps installed but only one screen, and if everyone wrote to the screen directly at the same time, chaos would ensue. So Android introduced SurfaceFlinger to take charge of this. The screen is like the gallery wall at a school, and SurfaceFlinger is the teacher in charge of it: the teacher hands each student a sheet of paper, tells everyone to paint on their own sheet and hand it back, and then arranges all the paintings and pins them onto the gallery wall (the screen) in a certain order.

In the painting analogy the teacher hands every student a sheet of paper, so SurfaceFlinger likewise allocates buffers to each Activity for drawing its UI. These buffers are called graphic buffers, while the framebuffer mentioned above is the frame buffer. But a school has many classes (apps), and if the gallery teacher (SurfaceFlinger) personally handed out paper (buffers) to every student (Activity) in every class, it would be exhausting. So each class (app) also has a homeroom teacher responsible for handing out and collecting the paper, and that homeroom teacher is the BufferQueue.

BufferQueue manages all of an app's buffers; internally it allocates an array of 64 BufferSlots:

//frameworks/native/include/gui/BufferQueueDefs.h
enum { NUM_BUFFER_SLOTS = 64 };
typedef BufferSlot SlotsType[NUM_BUFFER_SLOTS];

Each BufferSlot holds a smart pointer to a buffer, a GraphicBuffer. In other words, an app can have at most 64 graphic buffers.

BufferQueue follows the producer-consumer pattern. The producer is BufferQueueProducer: it calls dequeueBuffer() to obtain a free buffer, fills it with the graphics data to be drawn, and then calls queueBuffer() to hand the buffer back to the BufferQueue. The consumer is BufferQueueConsumer, which calls acquireBuffer() to take a filled buffer out of the BufferQueue and consume it.
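
To make the hand-off concrete, here is a minimal, self-contained model of the pattern (toy names of my own, not the real BufferQueue API): the producer dequeues a free slot, fills it and queues it, while the consumer acquires and releases it.

#include <array>
#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

// One slot of the toy queue: a pixel buffer plus a "free" flag.
struct ToySlot { std::vector<uint8_t> pixels; bool free = true; };

class ToyBufferQueue {
public:
    // Producer side (dequeueBuffer): find a free slot, or -1 if none is left.
    int dequeue() {
        std::lock_guard<std::mutex> lock(mMutex);
        for (int i = 0; i < (int)mSlots.size(); ++i)
            if (mSlots[i].free) { mSlots[i].free = false; return i; }
        return -1;
    }
    // Producer side (queueBuffer): hand the filled slot to the consumer.
    void queue(int slot) {
        std::lock_guard<std::mutex> lock(mMutex);
        mQueued.push_back(slot);          // the real code also fires onFrameAvailable
    }
    // Consumer side (acquireBuffer): take the oldest queued slot, or -1.
    int acquire() {
        std::lock_guard<std::mutex> lock(mMutex);
        if (mQueued.empty()) return -1;
        int slot = mQueued.front();
        mQueued.pop_front();
        return slot;
    }
    // Consumer side (releaseBuffer): return the slot to the free pool.
    void release(int slot) {
        std::lock_guard<std::mutex> lock(mMutex);
        mSlots[slot].free = true;
    }
private:
    std::mutex mMutex;
    std::array<ToySlot, 64> mSlots;       // mirrors NUM_BUFFER_SLOTS = 64
    std::deque<int> mQueued;
};

The real BufferQueueProducer and BufferQueueConsumer add fences, per-slot states and Binder plumbing on top of this, but the ownership hand-off is the same idea.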

Surface does not care about the consumer side; that is not its job. Its responsibility is to obtain free buffers and draw its own UI data into them, so what it needs is the producer of free buffers, namely the BufferQueueProducer. So when does Surface get hold of this BufferQueueProducer? Recall the creation of the C++-layer Surface shown above:

//frameworks/native/libs/gui/SurfaceControl.cpp
mSurfaceData = new Surface(mGraphicBufferProducer, false);

So a producer is handed to the Surface the moment it is created. The question then becomes: where does this mGraphicBufferProducer come from? The Surface is created inside SurfaceControl, and mGraphicBufferProducer is a smart pointer held by SurfaceControl, which assigns it in its own constructor:

//frameworks/native/libs/gui/SurfaceControl.cpp
SurfaceControl::SurfaceControl(
        const sp<SurfaceComposerClient>& client,
        const sp<IBinder>& handle,
        const sp<IGraphicBufferProducer>& gbp)
    : mClient(client), mHandle(handle), mGraphicBufferProducer(gbp)
{
}

Now the question becomes when this SurfaceControl is created. Remember that the Java layer created a WindowSurfaceController before creating the Surface? WindowSurfaceController creates a SurfaceControl in its constructor, and when that Java SurfaceControl is created it likewise creates a C++-layer SurfaceControl:

mNativeObject = nativeCreate(session, name, w, h, format, flags);

Let's see what this native method does:

//frameworks/base/core/jni/android_view_SurfaceControl.cpp
static jlong nativeCreate(JNIEnv* env, jclass clazz, jobject sessionObj,
        jstring nameStr, jint w, jint h, jint format, jint flags) {
    ScopedUtfChars name(env, nameStr);
    sp<SurfaceComposerClient> client(android_view_SurfaceSession_getClient(env, sessionObj));
    sp<SurfaceControl> surface = client->createSurface(
            String8(name.c_str()), w, h, format, flags);
    surface->incStrong((void *)nativeCreate);
    return reinterpret_cast<jlong>(surface.get());
}

The C++-layer SurfaceControl object is created via a SurfaceComposerClient's createSurface() method:

//frameworks/native/libs/gui/SurfaceComposerClient.cpp
sp<SurfaceControl> SurfaceComposerClient::createSurface(
        const String8& name,
        uint32_t w,
        uint32_t h,
        PixelFormat format,
        uint32_t flags)
{
    sp<SurfaceControl> sur;
    if (mStatus == NO_ERROR) {
        sp<IBinder> handle;
        sp<IGraphicBufferProducer> gbp;
        status_t err = mClient->createSurface(name, w, h, format, flags,
                &handle, &gbp);
        ALOGE_IF(err, "SurfaceComposerClient::createSurface error %s", strerror(-err));
        if (err == NO_ERROR) {
            sur = new SurfaceControl(this, handle, gbp);
        }
    }
    return sur;
}

Here mClient->createSurface(name, w, h, format, flags, &handle, &gbp) is called. mClient is a smart pointer member of SurfaceComposerClient, but what does it point to?

SurfaceComposerClient inherits from RefBase, so the first time it is strongly referenced, its onFirstRef() method is triggered:

//frameworks/native/libs/gui/SurfaceComposerClient.cpp
void SurfaceComposerClient::onFirstRef() {
    sp<ISurfaceComposer> sm(ComposerService::getComposerService());
    if (sm != 0) {
        sp<ISurfaceComposerClient> conn = sm->createConnection();
        if (conn != 0) {
            mClient = conn;
            mStatus = NO_ERROR;
        }
    }
}
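
A quick aside: the onFirstRef() hook comes from RefBase. A minimal sketch of that behavior, assuming the AOSP libutils headers are on the include path, looks like this:

#include <utils/RefBase.h>
#include <stdio.h>

// onFirstRef() fires exactly once, the first time the object is held
// by a strong pointer sp<>.
struct MyClient : public android::RefBase {
    void onFirstRef() override { printf("first strong reference taken\n"); }
};

int main() {
    android::sp<MyClient> client = new MyClient();  // triggers onFirstRef()
    return 0;
}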

Back in onFirstRef(): mClient points to whatever sm->createConnection() returns, while sm is the result of ComposerService::getComposerService():

//frameworks/native/libs/gui/SurfaceComposerClient.cpp
sp<ISurfaceComposer> ComposerService::getComposerService() {
    ComposerService& instance = ComposerService::getInstance();
    Mutex::Autolock _l(instance.mLock);
    if (instance.mComposerService == NULL) {
        ComposerService::getInstance().connectLocked();
        assert(instance.mComposerService != NULL);
        ALOGD("ComposerService reconnected");
    }
    return instance.mComposerService;
}

Here ComposerService::getInstance().connectLocked() is called first:

//frameworks/native/libs/gui/SurfaceComposerClient.cpp
void ComposerService::connectLocked() {
    const String16 name("SurfaceFlinger");
    while (getService(name, &mComposerService) != NO_ERROR) {
        usleep(250000);
    }
}

connectLocked() obtains SurfaceFlinger's Binder proxy through the ServiceManager and assigns it to ComposerService's mComposerService. So what ComposerService::getComposerService() returns is SurfaceFlinger's proxy in the client process.

Back in SurfaceComposerClient::onFirstRef(), sm->createConnection() is in fact a cross-process call to SurfaceFlinger::createConnection():

//frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
sp<ISurfaceComposerClient> SurfaceFlinger::createConnection()
{
    sp<ISurfaceComposerClient> bclient;
    sp<Client> client(new Client(this));
    status_t err = client->initCheck();
    if (err == NO_ERROR) {
        bclient = client;
    }
    return bclient;
}

This returns a Client object, a helper of SurfaceFlinger. SurfaceComposerClient::onFirstRef() receives this Client object and assigns it to mClient, so now we know where mClient comes from.

Going back to where we left off, mClient->createSurface(name, w, h, format, flags, &handle, &gbp):

//frameworks/native/services/surfaceflinger/Client.cpp
status_t Client::createSurface(
        const String8& name,
        uint32_t w, uint32_t h, PixelFormat format, uint32_t flags,
        sp<IBinder>* handle,
        sp<IGraphicBufferProducer>* gbp)
{
    /*
     * createSurface must be called from the GL thread so that it can
     * have access to the GL context.
     */

    class MessageCreateLayer : public MessageBase {
        SurfaceFlinger* flinger;
        Client* client;
        sp<IBinder>* handle;
        sp<IGraphicBufferProducer>* gbp;
        status_t result;
        const String8& name;
        uint32_t w, h;
        PixelFormat format;
        uint32_t flags;
    public:
        MessageCreateLayer(SurfaceFlinger* flinger,
                const String8& name, Client* client,
                uint32_t w, uint32_t h, PixelFormat format, uint32_t flags,
                sp<IBinder>* handle,
                sp<IGraphicBufferProducer>* gbp)
            : flinger(flinger), client(client),
              handle(handle), gbp(gbp), result(NO_ERROR),
              name(name), w(w), h(h), format(format), flags(flags) {
        }
        status_t getResult() const { return result; }
        virtual bool handler() {
            result = flinger->createLayer(name, client, w, h, format, flags,
                    handle, gbp);
            return true;
        }
    };

    sp<MessageBase> msg = new MessageCreateLayer(mFlinger.get(),
            name, this, w, h, format, flags, handle, gbp);
    mFlinger->postMessageSync(msg);
    return static_cast<MessageCreateLayer*>( msg.get() )->getResult();
}

Here mFlinger->postMessageSync(msg) is called to hand the work to SurfaceFlinger. The key part is MessageCreateLayer's handler(), which calls SurfaceFlinger::createLayer(); that eventually reaches SurfaceFlinger::createNormalLayer(), implemented in SurfaceFlinger_hwc1.cpp:

status_t SurfaceFlinger::createNormalLayer(const sp<Client>& client,
        const String8& name, uint32_t w, uint32_t h, uint32_t flags, PixelFormat& format,
        sp<IBinder>* handle, sp<IGraphicBufferProducer>* gbp, sp<Layer>* outLayer)
{
    ......
    *outLayer = new Layer(this, client, name, w, h, flags);
    status_t err = (*outLayer)->setBuffers(w, h, format, flags);
    if (err == NO_ERROR) {
        *handle = (*outLayer)->getHandle();
        *gbp = (*outLayer)->getProducer();
    }

    return err;
}

Here gbp finally gets assigned. I hope you have not been lost along the way; remember that this gbp is what ends up as the Surface's mGraphicBufferProducer.

Layer::getProducer() simply returns its mProducer member, whose type is MonitoredProducer. But we said above that the BufferQueue's producer is BufferQueueProducer, so where does MonitoredProducer come from? Hold on and keep reading.

When Layer's onFirstRef() is triggered, the MonitoredProducer is created:

//frameworks/native/services/surfaceflinger/Layer.cpp
void Layer::onFirstRef() {
    // Creates a custom BufferQueue for SurfaceFlingerConsumer to use
    sp<IGraphicBufferProducer> producer;
    sp<IGraphicBufferConsumer> consumer;
    BufferQueue::createBufferQueue(&producer, &consumer);
    mProducer = new MonitoredProducer(producer, mFlinger);
    mSurfaceFlingerConsumer = new SurfaceFlingerConsumer(consumer, mTextureName);
    ......
}

BufferQueue::createBufferQueue() creates the familiar producer, a BufferQueueProducer, and passes it into MonitoredProducer's constructor. So MonitoredProducer is really just a proxy; the real producer is still BufferQueueProducer.
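
The wrapping can be sketched with a couple of toy types (hypothetical names, not the real MonitoredProducer interface): the wrapper exposes the same calls and simply forwards them to the producer it holds, adding a bit of bookkeeping for SurfaceFlinger.

// Toy producer interface and a forwarding proxy around it.
struct IToyProducer {
    virtual ~IToyProducer() = default;
    virtual int dequeueBuffer() = 0;
    virtual void queueBuffer(int slot) = 0;
};

class ToyMonitoredProducer : public IToyProducer {
public:
    explicit ToyMonitoredProducer(IToyProducer* real) : mProducer(real) {}
    // Every call is simply forwarded to the wrapped producer.
    int dequeueBuffer() override { return mProducer->dequeueBuffer(); }
    void queueBuffer(int slot) override { mProducer->queueBuffer(slot); }
private:
    IToyProducer* mProducer;  // stands in for the wrapped BufferQueueProducer
};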

4. The Drawing Process

Once the Surface has its mGraphicBufferProducer, it can take a free buffer from the BufferQueue and write its UI data into it. As for how the UI is turned into a byte array, that is done through OpenGL or the Skia library. You can think of it roughly like this: when the upper layer asks the lower layer to draw a rectangle, OpenGL or Skia converts that rectangle into the color value of every pixel that makes it up, and the collection of those color values is exactly a byte array. The device can then light up each pixel of its screen according to that array.
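
As a toy illustration of "a rectangle becomes per-pixel color values" (a hypothetical helper, not how OpenGL or Skia is actually implemented), filling an ARGB_8888 rectangle into a raw pixel buffer looks roughly like this:

#include <cstdint>
#include <vector>

// Write one 32-bit color value per pixel inside the rectangle.
void fillRect(std::vector<uint32_t>& pixels, int stride,
              int left, int top, int right, int bottom, uint32_t color) {
    for (int y = top; y < bottom; ++y)
        for (int x = left; x < right; ++x)
            pixels[y * stride + x] = color;
}

int main() {
    const int width = 1080, height = 1920;
    std::vector<uint32_t> buffer(width * height, 0xFF000000);  // opaque black
    fillRect(buffer, width, 100, 100, 300, 200, 0xFFFF0000);   // a red rectangle
    return 0;
}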

In Android, a View's draw() method ultimately draws through a Canvas. For example, to draw a rectangle we call Canvas.drawRect(), and drawRect() simply calls native_drawRect():

//frameworks/base/core/jni/android_graphics_Canvas.cpp
static void drawRect(JNIEnv* env, jobject, jlong canvasHandle, jfloat left, jfloat top,
                     jfloat right, jfloat bottom, jlong paintHandle) {
    const Paint* paint = reinterpret_cast<Paint*>(paintHandle);
    get_canvas(canvasHandle)->drawRect(left, top, right, bottom, *paint);
}

This calls SkiaCanvas::drawRect():

//frameworks/base/libs/hwui/SkiaCanvas.cpp
void SkiaCanvas::drawRect(float left, float top, float right, float bottom,
        const SkPaint& paint) {
    mCanvas->drawRectCoords(left, top, right, bottom, paint);
}

Here mCanvas is an SkCanvas. The SkCanvas holds an SkBitmap, and that SkBitmap in turn holds a buffer dequeued from the BufferQueue, which is how SkCanvas can write content into the buffer. So when is the buffer held by the SkBitmap handed to it?
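
Before answering that, here is a hedged Skia sketch of what "an SkCanvas backed by externally owned pixels" looks like (API names as in current open-source Skia; include paths and exact signatures may differ in the AOSP copy):

#include "SkBitmap.h"
#include "SkCanvas.h"
#include "SkImageInfo.h"
#include "SkPaint.h"
#include "SkRect.h"
#include <cstdint>
#include <vector>

int main() {
    const int width = 640, height = 480;
    std::vector<uint32_t> pixels(width * height);  // stands in for outBuffer.bits

    SkBitmap bitmap;
    SkImageInfo info = SkImageInfo::MakeN32Premul(width, height);
    bitmap.installPixels(info, pixels.data(), width * 4);  // no copy: the bitmap points at our memory

    SkCanvas canvas(bitmap);
    SkPaint paint;
    paint.setColor(SK_ColorRED);
    canvas.drawRect(SkRect::MakeXYWH(10, 10, 100, 50), paint);
    // pixels[] now holds the rasterized rectangle.
    return 0;
}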

Anyone who has drawn with SurfaceView and Canvas knows that to get a Canvas you must go through SurfaceHolder.lockCanvas(), which ultimately ends up in the Java-layer Surface.lockCanvas(). We said the Java-layer Surface is only a puppet, and this method calls the JNI method nativeLockCanvas():

//frameworks/base/core/jni/android_view_Surface.cpp
static jlong nativeLockCanvas(JNIEnv* env, jclass clazz,
        jlong nativeObject, jobject canvasObj, jobject dirtyRectObj) {
    sp<Surface> surface(reinterpret_cast<Surface *>(nativeObject));
    ......
    ANativeWindow_Buffer outBuffer;
    status_t err = surface->lock(&outBuffer, dirtyRectPtr);
    ......
    SkBitmap bitmap;
    ssize_t bpr = outBuffer.stride * bytesPerPixel(outBuffer.format);
    bitmap.setInfo(info, bpr);
    if (outBuffer.width > 0 && outBuffer.height > 0) {
        bitmap.setPixels(outBuffer.bits);
    } else {
        // be safe with an empty bitmap.
        bitmap.setPixels(NULL);
    }

    Canvas* nativeCanvas = GraphicsJNI::getNativeCanvas(env, canvasObj);
    nativeCanvas->setBitmap(bitmap);
    ......
    return (jlong) lockedSurface.get();
}

Above, surface->lock(&outBuffer, dirtyRectPtr) obtains an ANativeWindow_Buffer, and bitmap.setPixels(outBuffer.bits) then hands the bits of that ANativeWindow_Buffer to the SkBitmap. bits is the start address of the graphic buffer, so from then on SkCanvas can output its UI data through this address. The key question now is: how is outBuffer.bits obtained? Let's trace where the ANativeWindow_Buffer comes from, i.e. surface->lock(&outBuffer, dirtyRectPtr):

//frameworks/native/libs/gui/Surface.cpp
status_t Surface::lock(
        ANativeWindow_Buffer* outBuffer, ARect* inOutDirtyBounds)
{
    ......
    ANativeWindowBuffer* out;
    int fenceFd = -1;
    // obtain a buffer
    status_t err = dequeueBuffer(&out, &fenceFd);
    if (err == NO_ERROR) {
        sp<GraphicBuffer> backBuffer(GraphicBuffer::getSelf(out));
        ......
        void* vaddr;
        status_t res = backBuffer->lockAsync(
                GRALLOC_USAGE_SW_READ_OFTEN | GRALLOC_USAGE_SW_WRITE_OFTEN,
                newDirtyRegion.bounds(), &vaddr, fenceFd);
        if (res != 0) {
            err = INVALID_OPERATION;
        } else {
            mLockedBuffer = backBuffer;
            outBuffer->width  = backBuffer->width;
            outBuffer->height = backBuffer->height;
            outBuffer->stride = backBuffer->stride;
            outBuffer->format = backBuffer->format;
            outBuffer->bits   = vaddr;
        }
    }
    return err;
}

So lock() also obtains its buffer via dequeueBuffer(), which ultimately calls BufferQueueProducer::dequeueBuffer(). The pointer vaddr, the start address of the buffer, is saved into outBuffer's bits. As for how that start address is obtained: on the surface it is GraphicBuffer::lockAsync() that is called, but the call eventually reaches GraphicBufferMapper::lock():

//frameworks/native/libs/ui/GraphicBufferMapper.cpp
status_t GraphicBufferMapper::lock(buffer_handle_t handle,
        uint32_t usage, const Rect& bounds, void** vaddr)
{
    ......
    err = mAllocMod->lock(mAllocMod, handle, static_cast<int>(usage),
            bounds.left, bounds.top, bounds.width(), bounds.height(),
            vaddr);

    return err;
}

mAllocMod is the gralloc_module_t, and a handle is passed in here. mAllocMod->lock() is ultimately implemented in hardware/libhardware/modules/gralloc/mapper.cpp:

//hardware/libhardware/modules/gralloc/mapper.cpp:
int gralloc_lock(gralloc_module_t const* /*module*/,
        buffer_handle_t handle, int /*usage*/,
        int /*l*/, int /*t*/, int /*w*/, int /*h*/,
        void** vaddr)
{
    if (private_handle_t::validate(handle) < 0)
        return -EINVAL;

    private_handle_t* hnd = (private_handle_t*)handle;
    *vaddr = (void*)hnd->base;
    return 0;
}

Here hnd->base is returned directly as the start address of the buffer. In fact, when we ask the gralloc module to allocate a buffer there is also a handle involved; that handle is kept inside the GraphicBuffer all along, so given this handle the gralloc module can locate the previously allocated buffer. The handle here works just like a handle in a Binder transaction.
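
In simplified, hypothetical form (the real private_handle_t in the default gralloc module carries more bookkeeping than this), the handle holds roughly the following:

#include <cstddef>     // size_t
#include <cstdint>     // uintptr_t
#include <sys/types.h> // off_t

// A stripped-down stand-in for the gralloc buffer handle: everything
// needed to find and re-map one ashmem-backed buffer.
struct ToyBufferHandle {
    int       fd;      // ashmem file descriptor backing the buffer
    size_t    size;    // buffer size in bytes
    off_t     offset;  // offset of the pixel data within the region
    uintptr_t base;    // where the buffer is mapped in *this* process
};

Because the fd is what actually travels across processes (over Binder), each process can mmap() it independently and fill in its own base, which is exactly what the buffer-mapping section below walks through.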

At this point SkCanvas finally has the start address of the graphic buffer in hand and can begin drawing.

Once drawing is finished, SurfaceHolder.unlockCanvasAndPost(canvas) is called, which ends up in Surface::unlockAndPost():

//frameworks/native/libs/gui/Surface.cpp
status_t Surface::unlockAndPost()
{
    ......
    int fd = -1;
    status_t err = mLockedBuffer->unlockAsync(&fd);

    err = queueBuffer(mLockedBuffer.get(), fd);

    mPostedBuffer = mLockedBuffer;
    mLockedBuffer = 0;
    return err;
}

Drawing is done by this point, so queueBuffer() needs to be called to put the buffer back into the BufferQueue. This queueBuffer() ultimately calls BufferQueueProducer::queueBuffer(), and after enqueueing the buffer that method calls:

frameAvailableListener->onFrameAvailable(item);

The real identity behind frameAvailableListener is the Layer. Remember that when SurfaceFlinger created the Surface it also created a Layer? It turns out this Layer also has the job of listening for queueBuffer(). In onFrameAvailable(), the Layer calls SurfaceFlinger::signalLayerUpdate():

//frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
void SurfaceFlinger::signalLayerUpdate() {
    mEventQueue.invalidate();
}

mEventQueue is a MessageQueue:

//frameworks/native/services/surfaceflinger/MessageQueue.cpp
void MessageQueue::invalidate() {
#if INVALIDATE_ON_VSYNC
    mEvents->requestNextVsync();
#else
    mHandler->dispatchInvalidate();
#endif
}

Whichever branch is taken, the end result is that SurfaceFlinger is told to refresh: with INVALIDATE_ON_VSYNC it asks for the next vsync and handles the invalidate then, otherwise dispatchInvalidate() posts the invalidate message directly.

5. Allocating the Graphic Buffer

Earlier we saw that SkCanvas ultimately obtains the buffer's start address through the handle in GraphicBuffer. But when exactly is that start address assigned to handle->base? We mentioned that a handle also exists when we request a buffer allocation; is base assigned at allocation time?

Graphic buffer allocation starts in BufferQueueProducer::allocateBuffers(). The function is long, so only its core is pasted below:

    //frameworks/native/libs/gui/BufferQueueProducer.cpp
    void BufferQueueProducer::allocateBuffers(uint32_t width, uint32_t height,PixelFormat format, uint32_t usage) {
        size_t newBufferCount = 0;
        newBufferCount = mCore->mFreeSlots.size();
        Vector<sp<GraphicBuffer>> buffers;
        for (size_t i = 0; i <  newBufferCount; ++i) {
            status_t result = NO_ERROR;
            sp<GraphicBuffer> graphicBuffer(mCore->mAllocator->createGraphicBuffer(allocWidth, allocHeight, allocFormat, allocUsage, &result));
            buffers.push_back(graphicBuffer);
        }
    }

This part mainly works out how many free slots are left in the BufferQueueProducer to hold buffers, and the loop then creates new buffers to fill those free slots. The buffer is created by:

mCore->mAllocator->createGraphicBuffer(allocWidth, allocHeight, allocFormat, allocUsage, &result)

The mAllocator here is a GraphicBufferAlloc:

//frameworks/native/libs/gui/GraphicBufferAlloc.cpp
sp<GraphicBuffer> GraphicBufferAlloc::createGraphicBuffer(uint32_t width,
        uint32_t height, PixelFormat format, uint32_t usage, status_t* error) {
    sp<GraphicBuffer> graphicBuffer(
            new GraphicBuffer(width, height, format, usage));
    return graphicBuffer;
}

This simply creates a new GraphicBuffer:

//frameworks/native/libs/ui/GraphicBuffer.cpp
GraphicBuffer::GraphicBuffer(uint32_t inWidth, uint32_t inHeight,
        PixelFormat inFormat, uint32_t inUsage)
    : BASE(), mOwner(ownData), mBufferMapper(GraphicBufferMapper::get()),
      mInitCheck(NO_ERROR), mId(getUniqueId()), mGenerationNumber(0)
{
    width  =
    height =
    stride =
    format =
    usage  = 0;
    handle = NULL;
    mInitCheck = initSize(inWidth, inHeight, inFormat, inUsage);
}

As you can see, when the GraphicBuffer is constructed its buffer-related fields (width, height, stride, format, usage and handle) are all zeroed out; the real work is done by the initSize() call at the end:

//frameworks/native/libs/ui/GraphicBuffer.cpp
status_t GraphicBuffer::initSize(uint32_t inWidth, uint32_t inHeight,
        PixelFormat inFormat, uint32_t inUsage)
{
    GraphicBufferAllocator& allocator = GraphicBufferAllocator::get();
    uint32_t outStride = 0;
    status_t err = allocator.alloc(inWidth, inHeight, inFormat, inUsage,
            &handle, &outStride);
    if (err == NO_ERROR) {
        width = static_cast<int>(inWidth);
        height = static_cast<int>(inHeight);
        format = inFormat;
        usage = static_cast<int>(inUsage);
        stride = static_cast<int>(outStride);
    }
    return err;
}

This is where the real initialization and buffer allocation happen, and the key is this line:

status_t err = allocator.alloc(inWidth, inHeight, inFormat, inUsage, &handle, &outStride);

This calls GraphicBufferAllocator::alloc() to do the allocation, passing in a number of parameters, among them the GraphicBuffer's handle. GraphicBufferAllocator in turn ends up in the gralloc HAL's alloc() (implemented as gralloc_alloc() in gralloc.cpp), which decides what kind of buffer to allocate based on the usage flags:

//hardware/libhardware/modules/gralloc/gralloc.cpp
if (usage & GRALLOC_USAGE_HW_FB) {
    err = gralloc_alloc_framebuffer(dev, size, usage, pHandle);
} else {
    err = gralloc_alloc_buffer(dev, size, usage, pHandle);
}

Note that what we are discussing here is allocating a graphic buffer, not the frame buffer. The frame buffer is allocated by gralloc_alloc_framebuffer(), while a graphic buffer is allocated by gralloc_alloc_buffer():

//hardware/libhardware/modules/gralloc/gralloc.cpp
static int gralloc_alloc_buffer(alloc_device_t* dev,
        size_t size, int /*usage*/, buffer_handle_t* pHandle)
{
    int err = 0;
    int fd = -1;

    size = roundUpToPageSize(size);
    
    fd = ashmem_create_region("gralloc-buffer", size);
  
    if (err == 0) {
        private_handle_t* hnd = new private_handle_t(fd, size, 0);
        gralloc_module_t* module = reinterpret_cast<gralloc_module_t*>(
                dev->common.module);
        err = mapBuffer(module, hnd);
        if (err == 0) {
            *pHandle = hnd;
        }
    }

    return err;
}

This creates a block of anonymous shared memory and gets back a file descriptor fd. A private_handle_t object is then created with the fd and size of that shared memory block and finally assigned to pHandle, which is the handle kept inside the GraphicBuffer. The key point here is mapBuffer():

//hardware/libhardware/modules/gralloc/mapper.cpp
int mapBuffer(gralloc_module_t const* module,
        private_handle_t* hnd)
{
    void* vaddr;
    return gralloc_map(module, hnd, &vaddr);
}

mapBuffer() directly calls gralloc_map():

//hardware/libhardware/modules/gralloc/mapper.cpp
static int gralloc_map(gralloc_module_t const* /*module*/,
        buffer_handle_t handle,
        void** vaddr)
{
    private_handle_t* hnd = (private_handle_t*)handle;
    if (!(hnd->flags & private_handle_t::PRIV_FLAGS_FRAMEBUFFER)) {
        size_t size = hnd->size;
        void* mappedAddress = mmap(0, size,
                PROT_READ|PROT_WRITE, MAP_SHARED, hnd->fd, 0);
        hnd->base = uintptr_t(mappedAddress) + hnd->offset;
    }
    *vaddr = (void*)hnd->base;
    return 0;
}

Here mmap() maps the newly created anonymous shared memory into the address mappedAddress in the SurfaceFlinger process, and that address is cast to an integer (uintptr_t) and assigned to hnd->base (hnd->offset is 0 at this point). Sure enough, the address is assigned to handle->base at allocation time.

6. Mapping the Buffer

As described above, after the SurfaceFlinger process asks Gralloc to allocate a buffer, mmap() maps the buffer into SurfaceFlinger's own address space. But OpenGL or Skia writes into the buffer in the app process, and since every process has its own address space, the address at which the anonymous-shared-memory buffer is mapped in the SurfaceFlinger process cannot be used directly on the app side. Yet above, in the app process, we handed the GraphicBuffer handle's base (the buffer's start address) directly to the Skia graphics library. How does that work?

SurfaceFlinger is not a god, and it gets no special treatment here. What we need to know is that the GraphicBuffer has to be transferred across processes from SurfaceFlinger to the app process, and for a C++-layer object to support cross-process transfer it must implement the Flattenable interface so that it can be serialized.

When the GraphicBuffer arrives in the app process, its unflatten() method is called. unflatten() calls GraphicBufferMapper::registerBuffer(handle), which finally calls gralloc_register_buffer():

//hardware/libhardware/modules/gralloc/mapper.cpp:
int gralloc_register_buffer(gralloc_module_t const* module,
        buffer_handle_t handle)
{
    private_handle_t* hnd = (private_handle_t*)handle;
    void *vaddr;
    return gralloc_map(module, handle, &vaddr);
}

Here our familiar gralloc_map() shows up again. Remember that when the graphic buffer was allocated, gralloc_map() mapped it into the SurfaceFlinger process? It is called once more here, and using the buffer's file descriptor fd it maps the buffer again, this time into the app process.
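
The sharing mechanism itself can be demonstrated with a hedged sketch (it assumes an Android build environment where <cutils/ashmem.h> is available, and uses fork() merely as a stand-in for passing the fd over Binder): both sides mmap() the same fd with MAP_SHARED, so they end up looking at the same memory.

#include <cutils/ashmem.h>  // ashmem_create_region (Android-specific)
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    const size_t size = 4096;
    int fd = ashmem_create_region("toy-gralloc-buffer", size);
    if (fd < 0) return 1;

    pid_t pid = fork();                // stands in for Binder passing the fd
    void* addr = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) return 1;

    if (pid == 0) {                    // "app" side: write a pixel value
        static_cast<uint32_t*>(addr)[0] = 0xFFFF0000;
        _exit(0);
    }
    waitpid(pid, nullptr, 0);          // "SurfaceFlinger" side: read it back
    printf("first pixel: 0x%08X\n", static_cast<uint32_t*>(addr)[0]);
    munmap(addr, size);
    close(fd);
    return 0;
}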

Summary:

To sum up, the process boils down to the following steps:

1. The app asks the BufferQueue in the SurfaceFlinger process to allocate a graphic buffer.

2. The BufferQueue creates a block of anonymous shared memory through Gralloc and passes its file descriptor fd to the app process via the GraphicBuffer.

3. The app process maps the buffer into its own address space through that fd and hands the start address to the graphics library.

4. The graphics library draws using that start address.

5. When drawing is finished, the app hands the buffer back to the BufferQueue.

6. On queueBuffer(), the Layer notifies SurfaceFlinger to refresh.

There is much more to Android's drawing mechanism and plenty of material about it out there, but the most important thing is still to read the source code yourself and work through the logic.