OkHttp Source Code Analysis Series (7): Initiating the Request and Reading the Response


Hi, I'm N0tExpectErr0r, an Android developer who loves technology.

My personal blog: blog.N0tExpectErr0r.cn

Table of contents for the OkHttp source code analysis series:

OkHttp Source Code Analysis Series (1): Initiating a Request and an Overview of the Interceptor Mechanism

OkHttp Source Code Analysis Series (2): The Overall Interceptor Flow

OkHttp Source Code Analysis Series (3): The Caching Mechanism

OkHttp Source Code Analysis Series (4): An Overview of Connection Establishment

OkHttp Source Code Analysis Series (5): Proxy and Route Selection

OkHttp Source Code Analysis Series (6): Connection Reuse and Connection Establishment

OkHttp Source Code Analysis Series (7): Initiating the Request and Reading the Response

We have finally reached the last part of OkHttp: actually initiating the request. Let's first review the overall flow of CallServerInterceptor:

  1. Call exchange.writeRequestHeaders to write the request headers
  2. Call exchange.createRequestBody to obtain a Sink
  3. Call RequestBody.writeTo to write the request body
  4. Call exchange.readResponseHeaders to read the response headers
  5. Call exchange.openResponseBody to read the response body

As we already know, Exchange ultimately delegates to the corresponding methods of ExchangeCodec, and ExchangeCodec has two implementations: Http1ExchangeCodec and Http2ExchangeCodec.

They are created during connection setup, in RealConnection.newCodec:

ExchangeCodec newCodec(OkHttpClient client, Interceptor.Chain chain) throws SocketException {
    if (http2Connection != null) {
        return new Http2ExchangeCodec(client, this, chain, http2Connection);
    } else {
        socket.setSoTimeout(chain.readTimeoutMillis());
        source.timeout().timeout(chain.readTimeoutMillis(), MILLISECONDS);
        sink.timeout().timeout(chain.writeTimeoutMillis(), MILLISECONDS);
        return new Http1ExchangeCodec(client, this, source, sink);
    }
}

So the choice simply depends on whether http2Connection is null.

Below we analyze the HTTP/1.x and the HTTP/2 handling separately.

HTTP/1.x

writeRequestHeaders

@Override
public void writeRequestHeaders(Request request) throws IOException {
    String requestLine = RequestLine.get(
            request, realConnection.route().proxy().type());
    writeRequest(request.headers(), requestLine);
}

Here RequestLine.get is called first to build the requestLine String, which is then written out via writeRequest.

Let's look at RequestLine.get first:

/**
 * Returns the request status line, like "GET / HTTP/1.1". This is exposed to the application by
 * {@link HttpURLConnection#getHeaderFields}, so it needs to be set even if the transport is
 * HTTP/2.
 */
public static String get(Request request, Proxy.Type proxyType) {
    StringBuilder result = new StringBuilder();
    result.append(request.method());
    result.append(' ');
    if (includeAuthorityInRequestLine(request, proxyType)) {
        result.append(request.url());
    } else {
        result.append(requestPath(request.url()));
    }
    result.append(" HTTP/1.1");
    return result.toString();
}

This builds the first line of the HTTP request, containing the request method, the URL (or just its path), and the HTTP version.

Next, let's look at writeRequest:

/**
 * Returns bytes of a request header for sending on an HTTP transport.
 */
public void writeRequest(Headers headers, String requestLine) throws IOException {
    if (state != STATE_IDLE) throw new IllegalStateException("state: " + state);
    sink.writeUtf8(requestLine).writeUtf8("\r\n");
    for (int i = 0, size = headers.size(); i < size; i++) {
        sink.writeUtf8(headers.name(i))
                .writeUtf8(": ")
                .writeUtf8(headers.value(i))
                .writeUtf8("\r\n");
    }
    sink.writeUtf8("\r\n");
    state = STATE_OPEN_REQUEST_BODY;
}

Here the request line is written first, then each header is written in name: value form, and finally a blank line is written. With that, the request headers have been written in full (compare this with the HTTP/1.x request format).
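As a concrete illustration (my own sketch using okio, not OkHttp's code; the host, path and header values are made up), the following snippet serializes a hypothetical GET request exactly the way writeRequest lays it out:

import okio.Buffer;

public class RequestWireFormatDemo {
    public static void main(String[] args) throws Exception {
        Buffer sink = new Buffer();
        // Request line, as produced by RequestLine.get
        sink.writeUtf8("GET /index.html HTTP/1.1").writeUtf8("\r\n");
        // Each header as "name: value" followed by CRLF
        sink.writeUtf8("Host").writeUtf8(": ").writeUtf8("example.com").writeUtf8("\r\n");
        sink.writeUtf8("Accept").writeUtf8(": ").writeUtf8("*/*").writeUtf8("\r\n");
        // A blank line terminates the header section
        sink.writeUtf8("\r\n");
        System.out.print(sink.readUtf8());
        // Prints:
        // GET /index.html HTTP/1.1
        // Host: example.com
        // Accept: */*
    }
}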

createRequestBody

@Override
public Sink createRequestBody(Request request, long contentLength) throws IOException {
    if (request.body() != null && request.body().isDuplex()) {
        throw new ProtocolException("Duplex connections are not supported for HTTP/1");
    }
    if ("chunked".equalsIgnoreCase(request.header("Transfer-Encoding"))) {
        // Stream a request body of unknown length.
        return newChunkedSink();
    }
    if (contentLength != -1L) {
        // Stream a request body of a known length.
        return newKnownLengthSink();
    }
    throw new IllegalStateException(
            "Cannot stream a request body without chunked encoding or a known content length!");
}

Here the Transfer-Encoding: chunked case is handled first by returning the result of newChunkedSink; otherwise, if contentLength is known, the result of newKnownLengthSink is returned; if neither applies, an IllegalStateException is thrown.

Let's look at these two methods in turn.

newChunkedSink

private Sink newChunkedSink() {
    if (state != STATE_OPEN_REQUEST_BODY) throw new IllegalStateException("state: " + state);
    state = STATE_WRITING_REQUEST_BODY;
    return new ChunkedSink();
}

This simply builds and returns a ChunkedSink, an implementation of Sink. Let's look at its write method:

@Override
public void write(Buffer source, long byteCount) throws IOException {
    if (closed) throw new IllegalStateException("closed");
    if (byteCount == 0) return;
    sink.writeHexadecimalUnsignedLong(byteCount);
    sink.writeUtf8("\r\n");
    sink.write(source, byteCount);
    sink.writeUtf8("\r\n");
}

It first writes the size of the data in hexadecimal, then the data itself, each line terminated by CRLF.
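To make the chunked framing concrete, here is a standalone sketch (my own example with okio, not OkHttp code) that frames two made-up chunks the same way ChunkedSink.write does, plus the terminating zero-size chunk that OkHttp writes when the ChunkedSink is closed:

import java.nio.charset.StandardCharsets;
import okio.Buffer;

public class ChunkedFramingDemo {
    // One chunk: hex size, CRLF, data, CRLF
    static void writeChunk(Buffer sink, String data) {
        byte[] bytes = data.getBytes(StandardCharsets.UTF_8);
        sink.writeHexadecimalUnsignedLong(bytes.length);
        sink.writeUtf8("\r\n");
        sink.write(bytes);
        sink.writeUtf8("\r\n");
    }

    public static void main(String[] args) throws Exception {
        Buffer sink = new Buffer();
        writeChunk(sink, "Hello");
        writeChunk(sink, "OkHttp");
        sink.writeUtf8("0\r\n\r\n"); // zero-size chunk ends the body
        System.out.print(sink.readUtf8());
        // Prints "5", "Hello", "6", "OkHttp", "0", each on its own line
    }
}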

newKnownLengthSink

private Sink newKnownLengthSink() {
    if (state != STATE_OPEN_REQUEST_BODY) throw new IllegalStateException("state: " + state);
    state = STATE_WRITING_REQUEST_BODY;
    return new KnownLengthSink();
}

This likewise builds and returns a KnownLengthSink, another implementation of Sink. Its write method:

@Override
public void write(Buffer source, long byteCount) throws IOException {
    if (closed) throw new IllegalStateException("closed");
    checkOffsetAndCount(source.size(), 0, byteCount);
    sink.write(source, byteCount);
}

As you can see, it simply writes the data straight through after a bounds check; nothing special here.

readResponseHeaders

@Override
public Response.Builder readResponseHeaders(boolean expectContinue) throws IOException {
    if (state != STATE_OPEN_REQUEST_BODY && state != STATE_READ_RESPONSE_HEADERS) {
        throw new IllegalStateException("state: " + state);
    }
    try {
        // Read the status line
        StatusLine statusLine = StatusLine.parse(readHeaderLine());
        // Build the Response.Builder (status info plus headers)
        Response.Builder responseBuilder = new Response.Builder()
                .protocol(statusLine.protocol)
                .code(statusLine.code)
                .message(statusLine.message)
                .headers(readHeaders());
        if (expectContinue && statusLine.code == HTTP_CONTINUE) {
            return null;
        } else if (statusLine.code == HTTP_CONTINUE) {
            state = STATE_READ_RESPONSE_HEADERS;
            return responseBuilder;
        }
        state = STATE_OPEN_RESPONSE_BODY;
        return responseBuilder;
    } catch (EOFException e) {
        // Provide more context if the server ends the stream before sending a response.
        String address = "unknown";
        if (realConnection != null) {
            address = realConnection.route().address().url().redact();
        }
        throw new IOException("unexpected end of stream on "
                + address, e);
    }
}

As you can see, two things happen here:

  1. readHeaderLine is called to read the first line, and StatusLine.parse builds a StatusLine object from it
  2. readHeaders is called to read the response headers, and a Response.Builder is built from both

Let's look at readHeaderLine first:

private String readHeaderLine() throws IOException {
    String line = source.readUtf8LineStrict(headerLimit);
    headerLimit -= line.length();
    return line;
}

It simply reads one line of the data sent by the server, deducting its length from headerLimit, the remaining byte budget for headers. Next, let's look at StatusLine.parse:

public static StatusLine parse(String statusLine) throws IOException {
    // H T T P / 1 . 1   2 0 0   T e m p o r a r y   R e d i r e c t
    // 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0
    // Parse protocol like "HTTP/1.1" followed by a space.
    int codeStart;
    Protocol protocol;
    // If it's HTTP/1.x
    if (statusLine.startsWith("HTTP/1.")) {
        if (statusLine.length() < 9 || statusLine.charAt(8) != ' ') {
            throw new ProtocolException("Unexpected status line: " + statusLine);
        }
        int httpMinorVersion = statusLine.charAt(7) - '0';
        codeStart = 9;
        // Fill in HTTP/1.0 or HTTP/1.1 based on the minor version
        if (httpMinorVersion == 0) {
            protocol = Protocol.HTTP_1_0;
        } else if (httpMinorVersion == 1) {
            protocol = Protocol.HTTP_1_1;
        } else {
            throw new ProtocolException("Unexpected status line: " + statusLine);
        }
    } else if (statusLine.startsWith("ICY ")) {
        // An "ICY " prefix indicates HTTP/1.0
        protocol = Protocol.HTTP_1_0;
        codeStart = 4;
    } else {
        throw new ProtocolException("Unexpected status line: " + statusLine);
    }
    // Parse response code like "200". Always 3 digits.
    if (statusLine.length() < codeStart + 3) {
        throw new ProtocolException("Unexpected status line: " + statusLine);
    }
    int code;
    try {
        // Extract the response code
        code = Integer.parseInt(statusLine.substring(codeStart, codeStart + 3));
    } catch (NumberFormatException e) {
        throw new ProtocolException("Unexpected status line: " + statusLine);
    }
    // Parse an optional response message like "OK" or "Not Modified". If it
    // exists, it is separated from the response code by a space.
    String message = "";
    if (statusLine.length() > codeStart + 3) {
        if (statusLine.charAt(codeStart + 3) != ' ') {
            throw new ProtocolException("Unexpected status line: " + statusLine);
        }
        // Extract the message
        message = statusLine.substring(codeStart + 4);
    }
    return new StatusLine(protocol, code, message);
}

This mainly does three things (a small worked example follows the list):

  1. Determine the protocol from the prefix of the status line (either HTTP/1.0 or HTTP/1.1)
  2. Parse the three-digit response code
  3. Parse the optional message
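For example, parsing the line "HTTP/1.1 404 Not Found" yields protocol HTTP_1_1, code 404 and message "Not Found". A stripped-down standalone sketch of the same three steps (illustration only, without the error handling shown above):

public class StatusLineParseDemo {
    public static void main(String[] args) {
        String statusLine = "HTTP/1.1 404 Not Found";
        // 1. Protocol: "HTTP/1." plus the minor-version digit, or "ICY " meaning HTTP/1.0
        String protocol = statusLine.startsWith("HTTP/1.")
                ? (statusLine.charAt(7) == '0' ? "HTTP_1_0" : "HTTP_1_1")
                : "HTTP_1_0";
        // 2. Response code: the three digits right after the protocol and a space
        int codeStart = statusLine.startsWith("ICY ") ? 4 : 9;
        int code = Integer.parseInt(statusLine.substring(codeStart, codeStart + 3));
        // 3. Optional message: everything after the code and another space
        String message = statusLine.length() > codeStart + 3
                ? statusLine.substring(codeStart + 4) : "";
        System.out.println(protocol + " " + code + " \"" + message + "\"");
        // Prints: HTTP_1_1 404 "Not Found"
    }
}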

openResponseBodySource

@Override
public Source openResponseBodySource(Response response) {
    if (!HttpHeaders.hasBody(response)) {
        return newFixedLengthSource(0);
    }
    if ("chunked".equalsIgnoreCase(response.header("Transfer-Encoding"))) {
        return newChunkedSource(response.request().url());
    }
    long contentLength = HttpHeaders.contentLength(response);
    if (contentLength != -1) {
        return newFixedLengthSource(contentLength);
    }
    return newUnknownLengthSource();
}

Depending on whether the body length is known and whether the response is chunked, this returns a FixedLengthSource, a ChunkedSource, or, when neither applies, an UnknownLengthSource (for bodies whose end is signalled by closing the connection).

newFixedLengthSource

private Source newFixedLengthSource(long length) {
    if (state != STATE_OPEN_RESPONSE_BODY) throw new IllegalStateException("state: " + state);
    state = STATE_READING_RESPONSE_BODY;
    return new FixedLengthSource(length);
}

This builds a FixedLengthSource, which extends AbstractSource. Let's look at its read method:

@Override
public long read(Buffer sink, long byteCount) throws IOException {
    if (byteCount < 0) throw new IllegalArgumentException("byteCount < 0: " + byteCount);
    if (closed) throw new IllegalStateException("closed");
    if (bytesRemaining == 0) return -1;
    long read = super.read(sink, Math.min(bytesRemaining, byteCount));
    if (read == -1) {
        realConnection.noNewExchanges(); // The server didn't supply the promised content length.
        ProtocolException e = new ProtocolException("unexpected end of stream");
        responseBodyComplete();
        throw e;
    }
    bytesRemaining -= read;
    if (bytesRemaining == 0) {
        responseBodyComplete();
    }
    return read;
}

It delegates to the parent class's read, keeps track of how many bytes remain, marks the response body complete once bytesRemaining reaches zero, and treats a premature end of stream as a protocol error.

newChunkedSource

private Source newChunkedSource(HttpUrl url) {
    if (state != STATE_OPEN_RESPONSE_BODY) throw new IllegalStateException("state: " + state);
    state = STATE_READING_RESPONSE_BODY;
    return new ChunkedSource(url);
}

This likewise builds a ChunkedSource, which also extends AbstractSource. Its read method:

@Override
public long read(Buffer sink, long byteCount) throws IOException {
    if (byteCount < 0) throw new IllegalArgumentException("byteCount < 0: " + byteCount);
    if (closed) throw new IllegalStateException("closed");
    if (!hasMoreChunks) return -1;
    if (bytesRemainingInChunk == 0 || bytesRemainingInChunk == NO_CHUNK_YET) {
        readChunkSize();
        if (!hasMoreChunks) return -1;
    }
    long read = super.read(sink, Math.min(byteCount, bytesRemainingInChunk));
    if (read == -1) {
        realConnection.noNewExchanges(); // The server didn't supply the promised chunk length.
        ProtocolException e = new ProtocolException("unexpected end of stream");
        responseBodyComplete();
        throw e;
    }
    bytesRemainingInChunk -= read;
    return read;
}

When the current chunk is exhausted, it first calls readChunkSize to read the size of the next chunk, then delegates to the parent's read to read at most that many bytes of chunk data.

Let's look at readChunkSize:

private void readChunkSize() throws IOException {
    // Read the suffix of the previous chunk.
    if (bytesRemainingInChunk != NO_CHUNK_YET) {
        source.readUtf8LineStrict();
    }
    try {
        bytesRemainingInChunk = source.readHexadecimalUnsignedLong();
        String extensions = source.readUtf8LineStrict().trim();
        if (bytesRemainingInChunk < 0 || (!extensions.isEmpty() && !extensions.startsWith(";"))) {
            throw new ProtocolException("expected chunk size and optional extensions but was \""
                    + bytesRemainingInChunk + extensions + "\"");
        }
    } catch (NumberFormatException e) {
        throw new ProtocolException(e.getMessage());
    }
    if (bytesRemainingInChunk == 0L) {
        hasMoreChunks = false;
        trailers = readHeaders();
        HttpHeaders.receiveHeaders(client.cookieJar(), url, trailers);
        responseBodyComplete();
    }
}

It first reads a hexadecimal number, which is the size of the next chunk. If that size is 0, there are no more chunks: the trailers are read, the response body is marked complete, and subsequent reads return -1 (EOF).
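Here is a matching standalone sketch (again my own example, not OkHttp code) that decodes the chunked body framed earlier, using the same okio calls readChunkSize relies on:

import okio.Buffer;

public class ChunkedParsingDemo {
    public static void main(String[] args) throws Exception {
        Buffer source = new Buffer().writeUtf8("5\r\nHello\r\n6\r\nOkHttp\r\n0\r\n\r\n");
        Buffer body = new Buffer();
        while (true) {
            // The chunk size is a hexadecimal number terminated by CRLF
            long chunkSize = source.readHexadecimalUnsignedLong();
            source.readUtf8LineStrict();  // consume the rest of the size line
            if (chunkSize == 0) break;    // a zero-size chunk ends the body (trailers would follow)
            body.write(source, chunkSize);
            source.readUtf8LineStrict();  // consume the CRLF after the chunk data
        }
        System.out.println(body.readUtf8()); // Prints: HelloOkHttp
    }
}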

With that, the HTTP/1.x write and read paths are fully covered.

HTTP/2

HTTP/2 Flow Control

Before diving into the code, let's look at HTTP/2's flow-control mechanism, because the source we are about to read depends on it.

HTTP/2 has a flow-control mechanism whose goal is to allow a variety of flow-control algorithms to be used without changing the protocol.

It has the following characteristics (taken from the reference material):

  1. Flow control is specific to a connection; every kind of flow control operates between the two endpoints of a single hop.
  2. Flow control is based on WINDOW_UPDATE frames: the receiver advertises how many bytes it is prepared to receive on each stream and on the connection as a whole.
  3. Flow control is directional and fully controlled by the receiver. The receiver may set any window size for each stream and for the whole connection, and the sender must respect the limits the receiver imposes.
  4. The initial value of the flow-control window is 65,535 bytes, both for new streams and for the connection as a whole.
  5. The frame type determines whether flow control applies. Only DATA frames are subject to flow control; all other frame types consume no window space, so important control frames are never blocked by flow control.
  6. Flow control cannot be disabled.
  7. HTTP/2 only defines the format and semantics of the WINDOW_UPDATE frame; it does not specify when the receiver sends it or what value it carries, so an implementation may choose any algorithm that suits its needs.

HTTP/2 uses the WINDOW_UPDATE frame to tell the peer to enlarge the window: the frame carries the window-size increment, signalling that the sender of the frame has enough capacity to handle more data.

OkHttp implements this flow-control mechanism: it caps how much data may be sent at a time, with a default window of 65,535 bytes, and when the window is exhausted the writer blocks until space frees up. I think this design exists because HTTP/2 multiplexes requests: several requests share one connection and run concurrently, so letting too many of them flood the connection at once would cause congestion. To keep the connection healthy, each connection gets this window mechanism, which limits the amount of data it may have in flight.
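A minimal sketch of that send-side bookkeeping (my own illustration of the pattern used by Http2Connection and Http2Stream below, not an OkHttp class):

// Illustrative only: a sender-side flow-control window.
public class SendWindow {
    private long bytesLeftInWriteWindow = 65535; // default initial window size

    // Block until the window has room, then reserve up to byteCount bytes.
    public synchronized long take(long byteCount) throws InterruptedException {
        while (bytesLeftInWriteWindow <= 0) {
            wait(); // woken up when a WINDOW_UPDATE arrives
        }
        long toWrite = Math.min(byteCount, bytesLeftInWriteWindow);
        bytesLeftInWriteWindow -= toWrite;
        return toWrite;
    }

    // Called when the peer sends a WINDOW_UPDATE carrying this increment.
    public synchronized void windowUpdate(long increment) {
        bytesLeftInWriteWindow += increment;
        notifyAll();
    }
}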

writeRequestHeaders

Let's now turn to the code of Http2ExchangeCodec.

@Override
public void writeRequestHeaders(Request request) throws IOException {
    if (stream != null) return;
    boolean hasRequestBody = request.body() != null;
    List<Header> requestHeaders = http2HeadersList(request);
    stream = connection.newStream(requestHeaders, hasRequestBody);
    // We may have been asked to cancel while creating the new stream and sending the request
    // headers, but there was still no stream to close.
    if (canceled) {
        stream.closeLater(ErrorCode.CANCEL);
        throw new IOException("Canceled");
    }
    stream.readTimeout().timeout(chain.readTimeoutMillis(), TimeUnit.MILLISECONDS);
    stream.writeTimeout().timeout(chain.writeTimeoutMillis(), TimeUnit.MILLISECONDS);
}

First, http2HeadersList is called to build the list of headers, and connection.newStream is then called to create and initialize the Http2Stream for this request.

Let's see what http2HeadersList does first:

public static List<Header> http2HeadersList(Request request) {
    Headers headers = request.headers();
    List<Header> result = new ArrayList<>(headers.size() + 4);
    result.add(new Header(TARGET_METHOD, request.method()));
    result.add(new Header(TARGET_PATH, RequestLine.requestPath(request.url())));
    String host = request.header("Host");
    if (host != null) {
        result.add(new Header(TARGET_AUTHORITY, host)); // Optional.
    }
    result.add(new Header(TARGET_SCHEME, request.url().scheme()));
    for (int i = 0, size = headers.size(); i < size; i++) {
        // header names must be lowercase.
        String name = headers.name(i).toLowerCase(Locale.US);
        if (!HTTP_2_SKIPPED_REQUEST_HEADERS.contains(name)
                || name.equals(TE) && headers.value(i).equals("trailers")) {
            result.add(new Header(name, headers.value(i)));
        }
    }
    return result;
}

This converts request.headers into a List<Header>, prepending the HTTP/2 pseudo-headers :method, :path, :authority and :scheme and skipping the headers that must not be sent over HTTP/2.
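For instance, a hypothetical GET https://example.com/index.html whose Host header has already been filled in (normally done earlier by BridgeInterceptor) would end up roughly as the list printed below; the values are made up for illustration:

import java.util.LinkedHashMap;
import java.util.Map;

public class PseudoHeaderDemo {
    public static void main(String[] args) {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put(":method", "GET");
        headers.put(":path", "/index.html");
        headers.put(":authority", "example.com"); // taken from the Host header, when present
        headers.put(":scheme", "https");
        headers.put("accept", "*/*");             // ordinary header names are lower-cased
        headers.forEach((name, value) -> System.out.println(name + ": " + value));
    }
}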

Next, let's look at connection.newStream:

private Http2Stream newStream(
        int associatedStreamId, List<Header> requestHeaders, boolean out) throws IOException {
    boolean outFinished = !out;
    boolean inFinished = false;
    boolean flushHeaders;
    Http2Stream stream;
    int streamId;
    synchronized (writer) {
        synchronized (this) {
            // Compute the id for this stream
            if (nextStreamId > Integer.MAX_VALUE / 2) {
                shutdown(REFUSED_STREAM);
            }
            if (shutdown) {
                throw new ConnectionShutdownException();
            }
            streamId = nextStreamId;
            nextStreamId += 2;	
            stream = new Http2Stream(streamId, this, outFinished, inFinished, null);
            // ...
        }
        if (associatedStreamId == 0) {
            writer.headers(outFinished, streamId, requestHeaders);
        } else if (client) {
            throw new IllegalArgumentException("client streams shouldn't have associated stream IDs");
        } else { // HTTP/2 has a PUSH_PROMISE frame.
            writer.pushPromise(associatedStreamId, streamId, requestHeaders);
        }
    }
    if (flushHeaders) {
        writer.flush();
    }
    return stream;
}

As you can see, the id of the stream for this request is computed first (stream ids are handed out in steps of 2, so client-initiated streams keep odd ids), and an Http2Stream is built with it. For the headers we just prepared, associatedStreamId is 0, so writer.headers is called to write the header block. If associatedStreamId were non-zero, writer.pushPromise would be called instead to write a PUSH_PROMISE frame; see the blog post PUSH_PROMISE帧 in the references for details.

Let's see what writer.headers actually does:

public synchronized void headers(
        boolean outFinished, int streamId, List<Header> headerBlock) throws IOException {
    if (closed) throw new IOException("closed");
    hpackWriter.writeHeaders(headerBlock);
    long byteCount = hpackBuffer.size();
    int length = (int) Math.min(maxFrameSize, byteCount);
    byte type = TYPE_HEADERS;
    byte flags = byteCount == length ? FLAG_END_HEADERS : 0;
    if (outFinished) flags |= FLAG_END_STREAM;
    frameHeader(streamId, length, type, flags);
    sink.write(hpackBuffer, length);
    if (byteCount > length) writeContinuationFrames(streamId, byteCount - length);
}

Here hpackWriter.writeHeaders compresses the headers with HPACK, frameHeader writes the frame header, and the HPACK-encoded header block is then written to the sink; anything that does not fit into one frame goes out in CONTINUATION frames. (In HTTP/2, header data is always HPACK-compressed.)

createRequestBody

@Override
public Sink createRequestBody(Request request, long contentLength) {
    return stream.getSink();
}

This returns the Http2Stream's sink, a FramingSink (an implementation of Sink). Let's look at its write method:

@Override
public void write(Buffer source, long byteCount) throws IOException {
    assert (!Thread.holdsLock(Http2Stream.this));
    sendBuffer.write(source, byteCount);
    while (sendBuffer.size() >= EMIT_BUFFER_SIZE) {
        emitFrame(false);
    }
}

The data is first written into sendBuffer; then, whenever the buffer holds at least EMIT_BUFFER_SIZE bytes, emitFrame is called to flush it out as DATA frames. Let's see what emitFrame does:

/**
 * Emit a single data frame to the connection. The frame's size will be limited by this stream's
 * write window. This method will block until the write window is nonempty.
 */
private void emitFrame(boolean outFinishedOnLastFrame) throws IOException {
    long toWrite;
    synchronized (Http2Stream.this) {
        writeTimeout.enter();
        try {
            while (bytesLeftInWriteWindow <= 0 && !finished && !closed && errorCode == null) {
                waitForIo(); // Wait until we receive a WINDOW_UPDATE for this stream.
            }
        } finally {
            writeTimeout.exitAndThrowIfTimedOut();
        }
        checkOutNotClosed(); // Kick out if the stream was reset or closed while waiting.
        toWrite = Math.min(bytesLeftInWriteWindow, sendBuffer.size());
        bytesLeftInWriteWindow -= toWrite;
    }
    writeTimeout.enter();
    try {
        boolean outFinished = outFinishedOnLastFrame && toWrite == sendBuffer.size();
        connection.writeData(id, outFinished, sendBuffer, toWrite);
    } finally {
        writeTimeout.exitAndThrowIfTimedOut();
    }
}

As you can see, this first blocks until bytesLeftInWriteWindow has room, then calls connection.writeData to write at most that many bytes (the smaller of the remaining window and the buffered data). This is the flow-control mechanism described above at work.

Let's see what connection.writeData does:

public void writeData(int streamId, boolean outFinished, Buffer buffer, long byteCount)
        throws IOException {
    if (byteCount == 0) { // Empty data frames are not flow-controlled.
        writer.data(outFinished, streamId, buffer, 0);
        return;
    }
    while (byteCount > 0) {
        int toWrite;
        synchronized (Http2Connection.this) {
            try {
                while (bytesLeftInWriteWindow <= 0) {
                    // Before blocking, confirm that the stream we're writing is still open. It's possible
                    // that the stream has since been closed (such as if this write timed out.)
                    if (!streams.containsKey(streamId)) {
                        throw new IOException("stream closed");
                    }
                    Http2Connection.this.wait(); // Wait until we receive a WINDOW_UPDATE.
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // Retain interrupted status.
                throw new InterruptedIOException();
            }
            toWrite = (int) Math.min(byteCount, bytesLeftInWriteWindow);
            toWrite = Math.min(toWrite, writer.maxDataLength());
            bytesLeftInWriteWindow -= toWrite;
        }
        byteCount -= toWrite;
        writer.data(outFinished && byteCount == 0, streamId, buffer, toWrite);
    }
}

As you can see, it first computes how much can be written: no more than the remaining connection-level window and no more than the configured per-frame limit (0x4000, i.e. 16,384 bytes, by default), blocking until a WINDOW_UPDATE arrives whenever the window is empty.
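For example, with a full 65,535-byte window, a 40,000-byte body is split into DATA frames of 16,384, 16,384 and 7,232 bytes. A tiny sketch of that splitting (illustration only):

public class FrameSplitDemo {
    public static void main(String[] args) {
        long bytesLeftInWriteWindow = 65535; // connection-level send window
        long maxFrameSize = 16384;           // default maximum frame size
        long byteCount = 40000;              // body size to send
        while (byteCount > 0) {
            long toWrite = Math.min(byteCount, bytesLeftInWriteWindow);
            toWrite = Math.min(toWrite, maxFrameSize);
            bytesLeftInWriteWindow -= toWrite;
            byteCount -= toWrite;
            System.out.println("DATA frame of " + toWrite + " bytes");
        }
    }
}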

It then calls writer.data to write the DATA frame:

public synchronized void data(boolean outFinished, int streamId, Buffer source, int byteCount)
        throws IOException {
    if (closed) throw new IOException("closed");
    byte flags = FLAG_NONE;
    if (outFinished) flags |= FLAG_END_STREAM;
    dataFrame(streamId, flags, source, byteCount);
}

最后调用到了 dataFrame 方法写入了一个数据帧:

void dataFrame(int streamId, byte flags, Buffer buffer, int byteCount) throws IOException {
    byte type = TYPE_DATA;
    frameHeader(streamId, byteCount, type, flags);
    if (byteCount > 0) {
        sink.write(buffer, byteCount);
    }
}

This writes a frame header first, followed by the corresponding payload.

readResponseHeaders

@Override
public Response.Builder readResponseHeaders(boolean expectContinue) throws IOException {
    Headers headers = stream.takeHeaders();
    Response.Builder responseBuilder = readHttp2HeadersList(headers, protocol);
    if (expectContinue && Internal.instance.code(responseBuilder) == HTTP_CONTINUE) {
        return null;
    }
    return responseBuilder;
}

Here stream.takeHeaders is called first to obtain the response headers, and readHttp2HeadersList then builds the Response.Builder from them.

Let's look at stream.takeHeaders first:

/**
 * Removes and returns the stream's received response headers, blocking if necessary until headers
 * have been received. If the returned list contains multiple blocks of headers the blocks will be
 * delimited by 'null'.
 */
public synchronized Headers takeHeaders() throws IOException {
    readTimeout.enter();
    try {
        while (headersQueue.isEmpty() && errorCode == null) {
            waitForIo();
        }
    } finally {
        readTimeout.exitAndThrowIfTimedOut();
    }
    if (!headersQueue.isEmpty()) {
        return headersQueue.removeFirst();
    }
    throw errorException != null ? errorException : new StreamResetException(errorCode);
}

This waits until a new block of headers shows up in headersQueue, so the real reading clearly does not happen here: takeHeaders merely blocks on the queue while some other code receives the data and wakes it up. We will track down that place later.
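A minimal standalone sketch of that blocking hand-off (my own illustration of the wait/notifyAll pattern that takeHeaders and the reader thread use, not OkHttp's class):

import java.util.ArrayDeque;
import java.util.Deque;

public class HeaderHandOff {
    private final Deque<String> headersQueue = new ArrayDeque<>();

    // Called on the caller's thread: block until the reader thread delivers headers.
    public synchronized String takeHeaders() throws InterruptedException {
        while (headersQueue.isEmpty()) {
            wait();
        }
        return headersQueue.removeFirst();
    }

    // Called on the reader thread once a HEADERS frame has been decoded.
    public synchronized void receiveHeaders(String headers) {
        headersQueue.add(headers);
        notifyAll();
    }
}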

Once the headers are available, readHttp2HeadersList builds the Response.Builder:

/**
 * Returns headers for a name value block containing an HTTP/2 response.
 */
public static Response.Builder readHttp2HeadersList(Headers headerBlock,
                                                    Protocol protocol) throws IOException {
    StatusLine statusLine = null;
    Headers.Builder headersBuilder = new Headers.Builder();
    for (int i = 0, size = headerBlock.size(); i < size; i++) {
        String name = headerBlock.name(i);
        String value = headerBlock.value(i);
        if (name.equals(RESPONSE_STATUS_UTF8)) {
            statusLine = StatusLine.parse("HTTP/1.1 " + value);
        } else if (!HTTP_2_SKIPPED_RESPONSE_HEADERS.contains(name)) {
            Internal.instance.addLenient(headersBuilder, name, value);
        }
    }
    if (statusLine == null) throw new ProtocolException("Expected ':status' header not present");
    return new Response.Builder()
            .protocol(protocol)
            .code(statusLine.code)
            .message(statusLine.message)
            .headers(headersBuilder.build());
}

This walks through the header block, turning the special :status pseudo-header into a StatusLine (by parsing "HTTP/1.1 " plus its value), dropping the headers that must not appear in an HTTP/2 response, and setting everything on a new Response.Builder.

openResponseBodySource

@Override
public Source openResponseBodySource(Response response) {
    return stream.getSource();
}

This directly returns the stream's source, which is actually a FramingSource. Let's look at its read method:

@Override
public long read(Buffer sink, long byteCount) throws IOException {
    if (byteCount < 0) throw new IllegalArgumentException("byteCount < 0: " + byteCount);
    while (true) {
        long readBytesDelivered = -1;
        IOException errorExceptionToDeliver = null;
        // 1. Decide what to do in a synchronized block.
        synchronized (Http2Stream.this) {
            readTimeout.enter();
            try {
                // ...
                if (closed) {
                    throw new IOException("stream closed");
                } else if (readBuffer.size() > 0) {
                    // Read data out of readBuffer
                    readBytesDelivered = readBuffer.read(sink, Math.min(byteCount, readBuffer.size()));
                    unacknowledgedBytesRead += readBytesDelivered;
                    if (errorExceptionToDeliver == null
                            && unacknowledgedBytesRead
                            >= connection.okHttpSettings.getInitialWindowSize() / 2) {
                        // Tell the peer to enlarge the window
                        connection.writeWindowUpdateLater(id, unacknowledgedBytesRead);
                        unacknowledgedBytesRead = 0;
                    }
                } else if (!finished && errorExceptionToDeliver == null) {
                    // Wait for more data to arrive
                    waitForIo();
                    continue;
                }
            } finally {
                readTimeout.exitAndThrowIfTimedOut();
            }
        }
        if (readBytesDelivered != -1) {
            // Update connection.unacknowledgedBytesRead outside the synchronized block.
            updateConnectionFlowControl(readBytesDelivered);
            return readBytesDelivered;
        }
        // ...
        return -1; // This source is exhausted.
    }
}

If no data has arrived yet and the stream is not finished, read blocks in waitForIo until the reader thread has deposited data into readBuffer, and only then copies it into the caller's buffer. (Again, this shows that the actual reading from the network happens elsewhere.)

Once the amount of data we have consumed but not yet acknowledged reaches half of the initial window size, connection.writeWindowUpdateLater is called to tell the peer to enlarge the window by exactly that amount; in other words, we advertise that we have spare capacity to handle more traffic. Let's look at connection.writeWindowUpdateLater:


void writeWindowUpdateLater(final int streamId, final long unacknowledgedBytesRead) {
    try {
        writerExecutor.execute(
                new NamedRunnable("OkHttp Window Update %s stream %d", connectionName, streamId) {
                    @Override
                    public void execute() {
                        try {
                            writer.windowUpdate(streamId, unacknowledgedBytesRead);
                        } catch (IOException e) {
                            failConnection(e);
                        }
                    }
                });
    } catch (RejectedExecutionException ignored) {
        // This connection has been closed.
    }
}

As you can see, this works by sending the peer a WINDOW_UPDATE frame. Since that involves I/O, it is submitted to writerExecutor and performed asynchronously.
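Putting the read side together, the receiver-side accounting looks roughly like this (a standalone illustration of the pattern in FramingSource.read, not OkHttp code):

// Illustrative only: track consumed bytes and decide when to send WINDOW_UPDATE.
public class ReceiveWindow {
    private static final long INITIAL_WINDOW_SIZE = 65535;
    private long unacknowledgedBytesRead = 0;

    // Called after handing byteCount bytes of body data to the application.
    // Returns the WINDOW_UPDATE increment to send, or 0 if none is due yet.
    public synchronized long onBytesConsumed(long byteCount) {
        unacknowledgedBytesRead += byteCount;
        if (unacknowledgedBytesRead >= INITIAL_WINDOW_SIZE / 2) {
            long increment = unacknowledgedBytesRead; // advertise exactly what we consumed
            unacknowledgedBytesRead = 0;
            return increment;
        }
        return 0;
    }
}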

Where the data is actually read

So where is our data actually read? We can look for the places that add new headers to headersQueue, and we find that receiveHeaders is the one that does it. That must be the method we are looking for:

/**
 * Accept headers from the network and store them until the client calls {@link #takeHeaders}, or
 * {@link FramingSource#read} them.
 */
void receiveHeaders(Headers headers, boolean inFinished) {
    assert (!Thread.holdsLock(Http2Stream.this));
    boolean open;
    synchronized (this) {
        if (!hasResponseHeaders || !inFinished) {
            hasResponseHeaders = true;
            headersQueue.add(headers);
        } else {
            this.source.trailers = headers;
        }
        if (inFinished) {
            this.source.finished = true;
        }
        open = isOpen();
        notifyAll();
    }
    if (!open) {
        connection.removeStream(id);
    }
}

So when is receiveHeaders called? It is invoked from ReaderRunnable.headers, which in turn is called by Http2Reader.readHeaders, and following the chain upwards we arrive at Http2Reader.nextFrame:

public boolean nextFrame(boolean requireSettings, Handler handler) throws IOException {
    try {
        source.require(9); // Frame header size
    } catch (EOFException e) {
        return false; // This might be a normal socket close.
    }
    int length = readMedium(source);
    if (length < 0 || length > INITIAL_MAX_FRAME_SIZE) {
        throw ioException("FRAME_SIZE_ERROR: %s", length);
    }
    byte type = (byte) (source.readByte() & 0xff);
    if (requireSettings && type != TYPE_SETTINGS) {
        throw ioException("Expected a SETTINGS frame but was %s", type);
    }
    byte flags = (byte) (source.readByte() & 0xff);
    int streamId = (source.readInt() & 0x7fffffff); // Ignore reserved bit.
    if (logger.isLoggable(FINE)) logger.fine(frameLog(true, streamId, length, type, flags));
    switch (type) {
        case TYPE_DATA:
            readData(handler, length, flags, streamId);
            break;
        case TYPE_HEADERS:
            readHeaders(handler, length, flags, streamId);
            break;
        case TYPE_PRIORITY:
            readPriority(handler, length, flags, streamId);
            break;
        case TYPE_RST_STREAM:
            readRstStream(handler, length, flags, streamId);
            break;
        case TYPE_SETTINGS:
            readSettings(handler, length, flags, streamId);
            break;
        case TYPE_PUSH_PROMISE:
            readPushPromise(handler, length, flags, streamId);
            break;
        case TYPE_PING:
            readPing(handler, length, flags, streamId);
            break;
        case TYPE_GOAWAY:
            readGoAway(handler, length, flags, streamId);
            break;
        case TYPE_WINDOW_UPDATE:
            readWindowUpdate(handler, length, flags, streamId);
            break;
        default:
            // Implementations MUST discard frames that have unknown or unsupported types.
            source.skip(length);
    }
    return true;
}

As you can see, this method reads a single frame: it parses the fixed 9-byte frame header (its layout is sketched below), switches on the frame type, and dispatches to a different read method for each type.
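Every HTTP/2 frame starts with the same fixed 9-byte header: a 24-bit payload length, an 8-bit type, an 8-bit flags field and a 31-bit stream id (the top bit is reserved). A standalone sketch of decoding it, mirroring the readMedium/readByte/readInt calls above (the sample bytes are made up):

public class FrameHeaderDemo {
    public static void main(String[] args) {
        // A made-up frame header: length = 16, type = 0x1 (HEADERS),
        // flags = 0x4 (END_HEADERS), stream id = 3.
        byte[] h = {0x00, 0x00, 0x10, 0x01, 0x04, 0x00, 0x00, 0x00, 0x03};

        int length = ((h[0] & 0xff) << 16) | ((h[1] & 0xff) << 8) | (h[2] & 0xff);
        int type = h[3] & 0xff;
        int flags = h[4] & 0xff;
        int streamId = ((h[5] & 0x7f) << 24) | ((h[6] & 0xff) << 16)
                | ((h[7] & 0xff) << 8) | (h[8] & 0xff); // ignore the reserved bit
        System.out.println("length=" + length + " type=" + type
                + " flags=" + flags + " streamId=" + streamId);
        // Prints: length=16 type=1 flags=4 streamId=3
    }
}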

So where is nextFrame itself called? Eventually we find ReaderRunnable.execute:

@Override
protected void execute() {
    ErrorCode connectionErrorCode = ErrorCode.INTERNAL_ERROR;
    ErrorCode streamErrorCode = ErrorCode.INTERNAL_ERROR;
    IOException errorException = null;
    try {
        reader.readConnectionPreface(this);
        while (reader.nextFrame(false, this)) {
        }
        connectionErrorCode = ErrorCode.NO_ERROR;
        streamErrorCode = ErrorCode.CANCEL;
    } catch (IOException e) {
        errorException = e;
        connectionErrorCode = ErrorCode.PROTOCOL_ERROR;
        streamErrorCode = ErrorCode.PROTOCOL_ERROR;
    } finally {
        close(connectionErrorCode, streamErrorCode, errorException);
        Util.closeQuietly(reader);
    }
}

It keeps calling reader.nextFrame to read frame after frame, which suggests that somewhere a dedicated thread is started to continuously read the peer's data. That thread is started in Http2Connection.start:

/**
 * @param sendConnectionPreface true to send connection preface frames. This should always be true
 *                              except for in tests that don't check for a connection preface.
 */
void start(boolean sendConnectionPreface) throws IOException {
    if (sendConnectionPreface) {
        writer.connectionPreface();
        writer.settings(okHttpSettings);
        int windowSize = okHttpSettings.getInitialWindowSize();
        if (windowSize != Settings.DEFAULT_INITIAL_WINDOW_SIZE) {
            writer.windowUpdate(0, windowSize - Settings.DEFAULT_INITIAL_WINDOW_SIZE);
        }
    }
    new Thread(readerRunnable).start(); // Not a daemon thread.
}

So when an HTTP/2 connection starts, a new thread is created that keeps reading incoming data and then dispatches it to the individual streams, i.e. to the responses of the corresponding requests.

Seen this way, when a WINDOW_UPDATE frame arrives it is handed to our Http2Connection, which then enlarges our write window. HTTP/2 therefore feels much more like a stream-oriented design, with both endpoints continuously pulling the data they need out of that stream.

Summary

That completes the analysis of OkHttp's whole request flow. For HTTP/1.x, the main work is handling ordinary requests and responses on the one hand and their chunked variants on the other.

For HTTP/2, OkHttp gives solid support to HTTP/2's flow-control mechanism: the window size limits how much data may be written, and a block/wake-up scheme schedules the I/O work against the data processing. When an HTTP/2 connection is opened, a reader thread is started that keeps pulling frames off the TCP connection and dispatching them to the individual streams. Working through this code makes the difference between HTTP/2 and HTTP/1.x very tangible: although both are stateless request/response protocols, HTTP/2's multiplexing turns the connection into a more complex but far more effective application-layer protocol.

Reading this far, I can't help admiring how well OkHttp is designed. It is exactly these small, careful decisions that add up to such a large yet easily extensible networking framework: at almost every step of a request the final say is handed to the user, which greatly improves extensibility. The interceptor mechanism is equally elegant: users can hook into the process both around the initiation of the request and around the actual I/O, without affecting the other interceptors.

Although we have now walked through the overall implementation of OkHttp, there are still plenty of small details left to dig into, and there is one core library used by OkHttp that we have not analyzed yet: Okio. Interested readers can look forward to future posts.

References

PUSH_PROMISE帧

理解HTTP/2流量控制(一)