okhttp source code analysis (interceptors and design patterns)


Preface

In the earlier article, okhttp源码解析(执行流程), we explored how okhttp executes a request. This article analyzes okhttp's interceptors and the design patterns it uses. Without further ado, let's get started!

Interceptors

What interceptors do

Before digging into the interceptors we first need to know what an interceptor is for. Quoting the okhttp website:

Interceptors are a powerful mechanism that can monitor, rewrite, and retry calls.

In other words, interceptors are a powerful mechanism for monitoring, rewriting, and retrying calls. The official documentation also provides an example that logs both the request and the response.

class LoggingInterceptor implements Interceptor {
  @Override public Response intercept(Interceptor.Chain chain) throws IOException {
    Request request = chain.request();

    long t1 = System.nanoTime();
    logger.info(String.format("Sending request %s on %s%n%s",
        request.url(), chain.connection(), request.headers()));

    Response response = chain.proceed(request);

    long t2 = System.nanoTime();
    logger.info(String.format("Received response for %s in %.1fms%n%s",
        response.request().url(), (t2 - t1) / 1e6d, response.headers()));

    return response;
  }
}

Interceptor categories

The okhttp documentation provides a diagram of the interceptor chain (application interceptors sit between the application and OkHttp's core; network interceptors sit between the core and the network).

From that diagram we can see that interceptors fall into two categories, Application Interceptors and Network Interceptors. Here is how the official documentation describes them.

Application Interceptors (application-layer interceptors):
Don't need to worry about intermediate responses like redirects and retries.
Are always invoked once, even if the HTTP response is served from the cache.
Observe the application's original intent. Unconcerned with OkHttp-injected headers like If-None-Match.
Permitted to short-circuit and not call Chain.proceed() (a short-circuit sketch follows this list).
Permitted to retry and make multiple calls to Chain.proceed().
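As promised, a minimal sketch of the short-circuit case: an application interceptor that answers a hypothetical /ping path itself and never calls chain.proceed() for it. ShortCircuitInterceptor and the /ping path are invented for illustration, not part of OkHttp.

import java.io.IOException;

import okhttp3.Interceptor;
import okhttp3.MediaType;
import okhttp3.Protocol;
import okhttp3.Request;
import okhttp3.Response;
import okhttp3.ResponseBody;

class ShortCircuitInterceptor implements Interceptor {
  @Override public Response intercept(Interceptor.Chain chain) throws IOException {
    Request request = chain.request();
    // Serve a canned response for "/ping"; everything else goes down the chain as usual.
    if (request.url().encodedPath().equals("/ping")) {
      return new Response.Builder()
          .request(request)
          .protocol(Protocol.HTTP_1_1)
          .code(200)
          .message("OK")
          .body(ResponseBody.create(MediaType.parse("text/plain"), "pong"))
          .build();
    }
    return chain.proceed(request);
  }
}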

Network Interceptors (network-layer interceptors):
Able to operate on intermediate responses like redirects and retries.
Not invoked for cached responses that short-circuit the network.
Observe the data just as it will be transmitted over the network.
Access to the Connection that carries the request.

The Interceptor interface

From the official example we can see that a custom interceptor must implement the Interceptor interface.

public interface Interceptor {
  Response intercept(Chain chain) throws IOException;

  interface Chain {
    Request request();

    Response proceed(Request request) throws IOException;

    @Nullable Connection connection();

    Call call();

    int connectTimeoutMillis();

    Chain withConnectTimeout(int timeout, TimeUnit unit);

    int readTimeoutMillis();

    Chain withReadTimeout(int timeout, TimeUnit unit);

    int writeTimeoutMillis();

    Chain withWriteTimeout(int timeout, TimeUnit unit);
  }
}

The interface declares a single method, intercept, so when we examine each built-in interceptor we can go straight to its intercept method.

RetryAndFollowUpInterceptor

/**
 * This interceptor recovers from failures and follows redirects as necessary. It may throw an
 * {@link IOException} if the call was canceled.
 */
public final class RetryAndFollowUpInterceptor implements Interceptor {
  /**
   * How many redirects and auth challenges should we attempt? Chrome follows 21 redirects; Firefox,
   * curl, and wget follow 20; Safari follows 16; and HTTP/1.0 recommends 5.
   */
  private static final int MAX_FOLLOW_UPS = 20; // 1. Maximum number of follow-ups (retries and redirects)
  ...
  @Override public Response intercept(Chain chain) throws IOException {
    Request request = chain.request();
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    Call call = realChain.call();
    EventListener eventListener = realChain.eventListener();

    // 2. Create a StreamAllocation object (we will come back to this object later)
    StreamAllocation streamAllocation = new StreamAllocation(client.connectionPool(),
        createAddress(request.url()), call, eventListener, callStackTrace);
    this.streamAllocation = streamAllocation;

    int followUpCount = 0;
    Response priorResponse = null;
    while (true) { // 3. Start an unbounded loop
      if (canceled) { // 4. If the call has been canceled, release resources via streamAllocation and bail out
        streamAllocation.release();
        throw new IOException("Canceled");
      }
      
      Response response;
      boolean releaseConnection = true;
      try {
        // 5. Run the rest of the interceptor chain to obtain a response
        response = realChain.proceed(request, streamAllocation, null, null);
        releaseConnection = false; 
      } catch (RouteException e) {
        // The attempt to connect via a route failed. The request will not have been sent.
        if (!recover(e.getLastConnectException(), streamAllocation, false, request)) {
          throw e.getFirstConnectException();
        }
        releaseConnection = false;
        continue;
      } catch (IOException e) {
        // An attempt to communicate with a server failed. The request may have been sent.
        boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
        if (!recover(e, streamAllocation, requestSendStarted, request)) throw e;
        releaseConnection = false;
        continue;
      } finally {
        // We're throwing an unchecked exception. Release any resources.
        if (releaseConnection) {
          streamAllocation.streamFailed(null);
          streamAllocation.release();
        }
      }
      
      // 6. Attach the prior response, if any (with its body stripped)
      if (priorResponse != null) {
        response = response.newBuilder()
            .priorResponse(priorResponse.newBuilder()
                    .body(null)
                    .build())
            .build();
      }
      
      Request followUp;
      try {
        // 7. Decide from the status code and request method whether a follow-up (redirect/retry) is needed
        followUp = followUpRequest(response, streamAllocation.route());
      } catch (IOException e) {
        streamAllocation.release();
        throw e;
      }
      
      // 8. No follow-up needed: release resources and return the response
      if (followUp == null) {
        streamAllocation.release();
        return response;
      }
      
      closeQuietly(response.body());
      
      // 9. Bail out if the follow-up count exceeds the limit
      if (++followUpCount > MAX_FOLLOW_UPS) {
        streamAllocation.release();
        throw new ProtocolException("Too many follow-up requests: " + followUpCount);
      }
      
      if (followUp.body() instanceof UnrepeatableRequestBody) {
        streamAllocation.release();
        throw new HttpRetryException("Cannot retry streamed HTTP body", response.code());
      }
      
      if (!sameConnection(response, followUp.url())) {
        streamAllocation.release();
        streamAllocation = new StreamAllocation(client.connectionPool(),
            createAddress(followUp.url()), call, eventListener, callStackTrace);
        this.streamAllocation = streamAllocation;
      } else if (streamAllocation.codec() != null) {
        throw new IllegalStateException("Closing the body of " + response
            + " didn't close its backing stream. Bad interceptor?");
      }

      request = followUp;
      priorResponse = response;
    }
  }
}

As the class-level comment says, this interceptor recovers from failures and follows redirects as necessary. The key points are annotated in the code above; here are the important operations RetryAndFollowUpInterceptor performs:

1. Create a StreamAllocation object.
2. Enter an unbounded loop and call proceed on the interceptor chain to obtain a response.
3. If a RouteException or IOException is thrown while obtaining the response, loop again and retry the request (failure recovery); a simplified sketch of this loop-on-proceed idea follows the list.
4. Once a response is obtained, call followUpRequest to determine whether a follow-up request (for example a redirect) is needed.
5. If no follow-up is needed, release the connection and return the response.
6. If a follow-up is needed, check the follow-up count: if it exceeds the maximum, release resources and throw; otherwise continue the loop with the follow-up request.
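Here is the promised sketch of the loop-on-proceed idea. This is not OkHttp's internal code: it is an ordinary application interceptor (SimpleRetryInterceptor and maxRetries are invented names) that simply retries on IOException a fixed number of times.

import java.io.IOException;

import okhttp3.Interceptor;
import okhttp3.Request;
import okhttp3.Response;

class SimpleRetryInterceptor implements Interceptor {
  private final int maxRetries;

  SimpleRetryInterceptor(int maxRetries) {
    this.maxRetries = maxRetries;
  }

  @Override public Response intercept(Interceptor.Chain chain) throws IOException {
    Request request = chain.request();
    IOException lastFailure = null;
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        return chain.proceed(request); // hand the request to the rest of the chain
      } catch (IOException e) {
        lastFailure = e; // remember the failure and try again
      }
    }
    throw lastFailure; // every attempt failed
  }
}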

Of the steps above, the most important is the call to followUpRequest, which works out the follow-up (redirect) request.

  private Request followUpRequest(Response userResponse, Route route) throws IOException {
    if (userResponse == null) throw new IllegalStateException();
    int responseCode = userResponse.code();

    final String method = userResponse.request().method();
    switch (responseCode) {
      case HTTP_PROXY_AUTH: // 407 Proxy Authentication Required
        Proxy selectedProxy = route != null
            ? route.proxy()
            : client.proxy();
        if (selectedProxy.type() != Proxy.Type.HTTP) {
          throw new ProtocolException("Received HTTP_PROXY_AUTH (407) code while not using proxy");
        }
        return client.proxyAuthenticator().authenticate(route, userResponse);

      case HTTP_UNAUTHORIZED: // 401 Unauthorized
        return client.authenticator().authenticate(route, userResponse);
        
      case HTTP_PERM_REDIRECT: // 308 Permanent Redirect
      case HTTP_TEMP_REDIRECT: // 307 Temporary Redirect
        // "If the 307 or 308 status code is received in response to a request other than GET
        // or HEAD, the user agent MUST NOT automatically redirect the request"
        if (!method.equals("GET") && !method.equals("HEAD")) {
          return null;
        }
        // fall-through
      case HTTP_MULT_CHOICE: // 300
      case HTTP_MOVED_PERM: // 301
      case HTTP_MOVED_TEMP: // 302
      case HTTP_SEE_OTHER: // 303
        // Does the client allow redirects?
        if (!client.followRedirects()) return null;

        String location = userResponse.header("Location");
        if (location == null) return null;
        HttpUrl url = userResponse.request().url().resolve(location);

        // Don't follow redirects to unsupported protocols.
        if (url == null) return null;

        // If configured, don't follow redirects between SSL and non-SSL.
        boolean sameScheme = url.scheme().equals(userResponse.request().url().scheme());
        if (!sameScheme && !client.followSslRedirects()) return null;
        
        // Most redirects don't include a request body.
        Request.Builder requestBuilder = userResponse.request().newBuilder();
        if (HttpMethod.permitsRequestBody(method)) {
          final boolean maintainBody = HttpMethod.redirectsWithBody(method);
          if (HttpMethod.redirectsToGet(method)) {
            requestBuilder.method("GET", null);
          } else {
            RequestBody requestBody = maintainBody ? userResponse.request().body() : null;
            requestBuilder.method(method, requestBody);
          }
          if (!maintainBody) {
            requestBuilder.removeHeader("Transfer-Encoding");
            requestBuilder.removeHeader("Content-Length");
            requestBuilder.removeHeader("Content-Type");
          }
        }
        // When redirecting across hosts, drop all authentication headers. This
        // is potentially annoying to the application layer since they have no
        // way to retain them.
        if (!sameConnection(userResponse, url)) {
          requestBuilder.removeHeader("Authorization");
        }
        return requestBuilder.url(url).build();
      case HTTP_CLIENT_TIMEOUT: // 408 Request Timeout
        // 408's are rare in practice, but some servers like HAProxy use this response code. The
        // spec says that we may repeat the request without modifications. Modern browsers also
        // repeat the request (even non-idempotent ones.)
        if (!client.retryOnConnectionFailure()) {
          // The application layer has directed us not to retry the request.
          return null;
        }
        if (userResponse.request().body() instanceof UnrepeatableRequestBody) {
          return null;
        }
        if (userResponse.priorResponse() != null
            && userResponse.priorResponse().code() == HTTP_CLIENT_TIMEOUT) {
          // We attempted to retry and got another timeout. Give up.
          return null;
        }
        if (retryAfter(userResponse, 0) > 0) {
          return null;
        }
        return userResponse.request();
        
      case HTTP_UNAVAILABLE: // 503 Service Unavailable
        if (userResponse.priorResponse() != null
            && userResponse.priorResponse().code() == HTTP_UNAVAILABLE) {
          // We attempted to retry and got another timeout. Give up.
          return null;
        }
        if (retryAfter(userResponse, Integer.MAX_VALUE) == 0) {
          // specifically received an instruction to retry without delay
          return userResponse.request();
        }
        return null;
      default:
        return null;
    }
  }

followUpRequest acts on the status code as follows:

1. For 407 (proxy authentication required) or 401 (unauthorized), it delegates to the proxyAuthenticator or authenticator configured on the OkHttpClient and returns whatever request they produce.
2. For 307 or 308 redirects, if the request method is neither GET nor HEAD, it returns null (no automatic redirect).
3. For 300, 301, 302, or 303, if the client allows redirects, it reads the Location header from the response, resolves it, and builds a new Request to follow the redirect.
4. For 408 (request timeout), it checks that retry-on-connection-failure is enabled, that the request body is repeatable, that the previous attempt was not already a timeout, and that no Retry-After delay was requested; if all of that holds, the original request is retried.
5. For 503 (service unavailable), the request is retried only if the previous response was not also a 503 and the server explicitly asked for a retry with no delay (Retry-After: 0); otherwise null is returned.
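Whether step 3 happens at all is under the application's control: automatic redirects can be switched off when building the client. A quick illustration using the standard builder options:

OkHttpClient client = new OkHttpClient.Builder()
        .followRedirects(false)     // followUpRequest() then returns null for 300/301/302/303
        .followSslRedirects(false)  // also refuse redirects that switch between HTTP and HTTPS
        .build();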

That wraps up RetryAndFollowUpInterceptor. For a discussion of StreamAllocation, see OKHttp源码解析(九):OKHTTP连接中三个"核心"RealConnection、ConnectionPool、StreamAllocation.

BridgeInterceptor

/**
 * Bridges from application code to network code. First it builds a network request from a user
 * request. Then it proceeds to call the network. Finally it builds a user response from the network
 * response.
 */
public final class BridgeInterceptor implements Interceptor {
  private final CookieJar cookieJar;

  public BridgeInterceptor(CookieJar cookieJar) {
    this.cookieJar = cookieJar;
  }
  
  @Override public Response intercept(Chain chain) throws IOException {
    Request userRequest = chain.request();
    Request.Builder requestBuilder = userRequest.newBuilder();

    RequestBody body = userRequest.body();
    if (body != null) { // 1. Does the request have a body?
      MediaType contentType = body.contentType();
      if (contentType != null) {
        requestBuilder.header("Content-Type", contentType.toString());
      }

      long contentLength = body.contentLength();
      if (contentLength != -1) {
        requestBuilder.header("Content-Length", Long.toString(contentLength));
        requestBuilder.removeHeader("Transfer-Encoding");
      } else {
        requestBuilder.header("Transfer-Encoding", "chunked");
        requestBuilder.removeHeader("Content-Length");
      }
    }
    
    if (userRequest.header("Host") == null) { // Add the Host header
      requestBuilder.header("Host", hostHeader(userRequest.url(), false));
    }

    if (userRequest.header("Connection") == null) { // Add the Connection header
      requestBuilder.header("Connection", "Keep-Alive");
    }
    
    // If we add an "Accept-Encoding: gzip" header field we're responsible for also decompressing
    // the transfer stream.
    boolean transparentGzip = false;
    if (userRequest.header("Accept-Encoding") == null && userRequest.header("Range") == null) {
      transparentGzip = true;
      requestBuilder.header("Accept-Encoding", "gzip");
    }
    
    List<Cookie> cookies = cookieJar.loadForRequest(userRequest.url());
    if (!cookies.isEmpty()) { // Add cookies
      requestBuilder.header("Cookie", cookieHeader(cookies));
    }

    if (userRequest.header("User-Agent") == null) { // Add the User-Agent header
      requestBuilder.header("User-Agent", Version.userAgent());
    }

    // Obtain the raw network response
    Response networkResponse = chain.proceed(requestBuilder.build());

    HttpHeaders.receiveHeaders(cookieJar, userRequest.url(), networkResponse.headers());
    
    // Create a Response.Builder for the user-facing response
    Response.Builder responseBuilder = networkResponse.newBuilder()
        .request(userRequest);
        
    // transparentGzip == true means we added "Accept-Encoding: gzip" ourselves.
    // Also check that the response's Content-Encoding is gzip (the server really compressed the body)
    // and that the response actually has a body.
    if (transparentGzip
        && "gzip".equalsIgnoreCase(networkResponse.header("Content-Encoding"))
        && HttpHeaders.hasBody(networkResponse)) {
      // Wrap the response body in a GzipSource so it is decompressed transparently
      GzipSource responseBody = new GzipSource(networkResponse.body().source());
      Headers strippedHeaders = networkResponse.headers().newBuilder()
          .removeAll("Content-Encoding")
          .removeAll("Content-Length")
          .build();
      responseBuilder.headers(strippedHeaders);
      String contentType = networkResponse.header("Content-Type");
      responseBuilder.body(new RealResponseBody(contentType, -1L, Okio.buffer(responseBody)));
    }

    return responseBuilder.build();
  }
  ...
}

BridgeInterceptor is the bridge between the application and HTTP: it turns the request the user built into a request that can actually be sent to the server, and turns the server's response into one the caller can consume.
The important steps are annotated in the source above; in short, it does three things:

1. Adds the necessary headers so the user request becomes a proper network request.
2. Calls the next interceptor in the chain to perform the network request.
3. Converts the returned network response into a usable Response, including transparently un-gzipping the body (a small stand-alone sketch of that follows).
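To make the transparent-gzip step concrete, here is a small stand-alone sketch (not OkHttp code) that decompresses a gzip stream with Okio's GzipSource, the same class BridgeInterceptor wraps around the network body. The class name GzipDemo and the sample payload are made up.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

import okio.Buffer;
import okio.BufferedSource;
import okio.GzipSource;
import okio.Okio;

public class GzipDemo {
  public static void main(String[] args) throws IOException {
    // Compress a sample payload the way a gzip-capable server would.
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    try (GZIPOutputStream gzipOut = new GZIPOutputStream(bytes)) {
      gzipOut.write("hello gzip".getBytes("UTF-8"));
    }

    // Wrap the compressed bytes in a GzipSource, just as BridgeInterceptor does with
    // networkResponse.body().source(), then buffer it and read the plain text back.
    Buffer compressed = new Buffer().write(bytes.toByteArray());
    BufferedSource plain = Okio.buffer(new GzipSource(compressed));
    System.out.println(plain.readUtf8()); // prints "hello gzip"
  }
}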

CacheInterceptor

HTTP caching

To understand okhttp's cache we first need some familiarity with HTTP's own caching mechanism. I won't cover HTTP caching here; interested readers can refer to 前端也要懂Http缓存机制 and http协议缓存机制.

okhttp caching

As usual, let's start by looking at what this interceptor's intercept method does.

/** Serves requests from the cache and writes responses to the cache. */
public final class CacheInterceptor implements Interceptor {
  final InternalCache cache;

  public CacheInterceptor(InternalCache cache) {
    this.cache = cache;
  }

  @Override public Response intercept(Chain chain) throws IOException {
    // If a cache is configured, look up a cached response for this request
    Response cacheCandidate = cache != null
        ? cache.get(chain.request())
        : null;
        
    long now = System.currentTimeMillis();
    // 1. Build a CacheStrategy (the cache-policy object)
    CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
    Request networkRequest = strategy.networkRequest;
    Response cacheResponse = strategy.cacheResponse;
    ...
  }
  ...
}

CacheInterceptor中我们先看这么多代码,其余的等下分析。在注释1处,我们看到会获取一个CacheStrategy对象,这个对象是okhttp的缓存策略对象,在这一步中会调用CacheStrategy.Factory.get()方法,我们看一下这里面究竟做了哪些

/**
 * Given a request and cached response, this figures out whether to use the network, the cache, or
 * both.
 *
 * <p>Selecting a cache strategy may add conditions to the request (like the "If-Modified-Since"
 * header for conditional GETs) or warnings to the cached response (if the cached data is
 * potentially stale).
 */
public final class CacheStrategy {
  // The request to send over the network; null means the network will not be used.
  public final @Nullable Request networkRequest;

  // The cached response to return or validate; null means the cache will not be used.
  public final @Nullable Response cacheResponse;
  
  // The constructor simply stores the two fields above
  CacheStrategy(Request networkRequest, Response cacheResponse) {
    this.networkRequest = networkRequest;
    this.cacheResponse = cacheResponse;
  }
  ...
  public static class Factory {
  
    ...
    public Factory(long nowMillis, Request request, Response cacheResponse){
      this.nowMillis = nowMillis;
      this.request = request;
      this.cacheResponse = cacheResponse;
      
      if (cacheResponse != null) {
        this.sentRequestMillis = cacheResponse.sentRequestAtMillis();
        this.receivedResponseMillis = cacheResponse.receivedResponseAtMillis();
        Headers headers = cacheResponse.headers();
        // Parse the relevant headers of the cached response
        for (int i = 0, size = headers.size(); i < size; i++) {
          String fieldName = headers.name(i);
           String value = headers.value(i);
          if ("Date".equalsIgnoreCase(fieldName)) {
            servedDate = HttpDate.parse(value);
            servedDateString = value;
          } else if ("Expires".equalsIgnoreCase(fieldName)) {
            expires = HttpDate.parse(value);
          } else if ("Last-Modified".equalsIgnoreCase(fieldName)) {
            lastModified = HttpDate.parse(value);
            lastModifiedString = value;
          } else if ("ETag".equalsIgnoreCase(fieldName)) {
            etag = value;
          } else if ("Age".equalsIgnoreCase(fieldName)) {
            ageSeconds = HttpHeaders.parseSeconds(value, -1);
          }
        }
      }
    }         
    ...
    // Returns a strategy that satisfies the request using the cached response.
    public CacheStrategy get() {
      // Compute the candidate strategy
      CacheStrategy candidate = getCandidate();
      // The candidate wants to use the network, but the request is marked only-if-cached
      if (candidate.networkRequest != null && request.cacheControl().onlyIfCached()) {
        // We are forbidden from using the network and the cache alone is not sufficient,
        // so return a strategy where both networkRequest and cacheResponse are null
        return new CacheStrategy(null, null);
      }

      return candidate;
    }
    ...
    
    // Returns the strategy to use assuming the request is allowed to hit the network.
    private CacheStrategy getCandidate() {
    
      // No cached response: go to the network
      if (cacheResponse == null) {
        return new CacheStrategy(request, null);
      }
      
      // HTTPS request whose cached response is missing the TLS handshake
      if (request.isHttps() && cacheResponse.handshake() == null) {
        // Go to the network
        return new CacheStrategy(request, null);
      }
      
      // Check whether the cached response is allowed to be stored/served at all;
      // if it is not cacheable it cannot be returned, so go to the network
      if (!isCacheable(cacheResponse, request)) {
        return new CacheStrategy(request, null);
      }
      
      // Cache-Control from the request headers
      CacheControl requestCaching = request.cacheControl();
      // Condition 1: the request carries Cache-Control: no-cache
      // Condition 2: the request already has If-Modified-Since or If-None-Match conditions
      // Either way, skip the cache and go to the network
      if (requestCaching.noCache() || hasConditions(request)) {
        return new CacheStrategy(request, null);
      }
      
      // Cache-Control from the cached response
      CacheControl responseCaching = cacheResponse.cacheControl();
      
      // Current age of the cached response
      long ageMillis = cacheResponseAge();
      // Freshness lifetime of the cached response
      long freshMillis = computeFreshnessLifetime();
      
      // The request's Cache-Control specifies max-age (beyond which the cache counts as stale)
      if (requestCaching.maxAgeSeconds() != -1) {
        // Use the smaller of the freshness lifetime and the requested max-age
        freshMillis = Math.min(freshMillis, SECONDS.toMillis(requestCaching.maxAgeSeconds()));
      }

      long minFreshMillis = 0;
      // The request specifies min-fresh:
      // the client wants the response to remain fresh for at least this long
      if (requestCaching.minFreshSeconds() != -1) {
        // Record the min-fresh requirement
        minFreshMillis = SECONDS.toMillis(requestCaching.minFreshSeconds());
      }
      
      // Maximum staleness the client will accept
      long maxStaleMillis = 0;
      // The response does not require revalidation and the request specifies max-stale:
      // the client will accept a response that is stale by at most this amount
      if (!responseCaching.mustRevalidate() && requestCaching.maxStaleSeconds() != -1) {
        // Record the max-stale allowance
        maxStaleMillis = SECONDS.toMillis(requestCaching.maxStaleSeconds());
      }

      // The cached response is not marked no-cache and is still fresh enough
      // (age + min-fresh < freshness lifetime + max-stale), so it can be served from the cache;
      // Warning headers are added if it is technically stale
      if (!responseCaching.noCache() && ageMillis + minFreshMillis < freshMillis + maxStaleMillis) {
        Response.Builder builder = cacheResponse.newBuilder();
        if (ageMillis + minFreshMillis >= freshMillis) {
          builder.addHeader("Warning", "110 HttpURLConnection \"Response is stale\"");
        }
        long oneDayMillis = 24 * 60 * 60 * 1000L;
        if (ageMillis > oneDayMillis && isFreshnessLifetimeHeuristic()) {
          builder.addHeader("Warning", "113 HttpURLConnection \"Heuristic expiration\"");
        }
        // Cache-only strategy: serve the cached response
        return new CacheStrategy(null, builder.build());
      }
    
      // The cached response is stale: build a conditional request by adding
      // If-None-Match or If-Modified-Since to the request headers
      String conditionName;
      String conditionValue;
      if (etag != null) {
        conditionName = "If-None-Match";
        conditionValue = etag;
      } else if (lastModified != null) {
        conditionName = "If-Modified-Since";
        conditionValue = lastModifiedString;
      } else if (servedDate != null) {
        conditionName = "If-Modified-Since";
        conditionValue = servedDateString;
      } else {
        // No validator available: make a plain network request
        return new CacheStrategy(request, null); // No condition! Make a regular request.
      }

      Headers.Builder conditionalRequestHeaders = request.headers().newBuilder();
      // Add the conditional header
      Internal.instance.addLenient(conditionalRequestHeaders, conditionName, conditionValue);

      // Use both the conditional network request and the cached response
      Request conditionalRequest = request.newBuilder()
          .headers(conditionalRequestHeaders.build())
          .build();
      return new CacheStrategy(conditionalRequest, cacheResponse);
    }
  }
  ...
}

That is the whole of CacheStrategy: the strategy is expressed by which of networkRequest and cacheResponse are null when the CacheStrategy is constructed. From the code above there are four possible combinations.

networkRequest | cacheResponse | Strategy
null | null | Use neither the network nor the cache
non-null | non-null | Use both the network and the cache (conditional request)
non-null | null | Use the network only
null | non-null | Use the cache only
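The first row (both null) is exactly what happens when a request is marked only-if-cached but nothing usable is in the cache. A hedged sketch, assuming client is an existing OkHttpClient with no cached copy of the URL:

Request onlyCached = new Request.Builder()
        .url("https://example.com/")                 // placeholder URL
        .cacheControl(CacheControl.FORCE_CACHE)      // sends "Cache-Control: only-if-cached" (plus max-stale)
        .build();

Response response = client.newCall(onlyCached).execute();
System.out.println(response.code()); // 504 "Unsatisfiable Request (only-if-cached)" when there is no cache entry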

Now that we understand the strategy, let's turn back to CacheInterceptor.

public final class CacheInterceptor implements Interceptor {
  @Override public Response intercept(Chain chain) throws IOException {
    // Look up a cached response
    Response cacheCandidate = cache != null
        ? cache.get(chain.request())
        : null;
    // Build the cache strategy
    CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
    // The network request chosen by the strategy
    Request networkRequest = strategy.networkRequest;
    // The cached response chosen by the strategy
    Response cacheResponse = strategy.cacheResponse;   
    
    // If a cache is configured, record statistics about this strategy
    if (cache != null) {
      cache.trackResponse(strategy);
    }
    ...
    // Neither the network nor the cache may be used: return a synthetic 504 response
    // directly, without passing the request to the next interceptor
    if (networkRequest == null && cacheResponse == null) {
      return new Response.Builder()
          .request(chain.request())
          .protocol(Protocol.HTTP_1_1)
          .code(504)
          .message("Unsatisfiable Request (only-if-cached)")
          .body(Util.EMPTY_RESPONSE)
          .sentRequestAtMillis(-1L)
          .receivedResponseAtMillis(System.currentTimeMillis())
          .build();
    }
    
    // No network request: serve the response straight from the cache,
    // again without passing the request down the chain
    if (networkRequest == null) {
      return cacheResponse.newBuilder()
          .cacheResponse(stripBody(cacheResponse))
          .build();
    }
    
    // A network request is needed
    Response networkResponse = null;
    try {
      // Hand the request to the next interceptor and get the network response
      networkResponse = chain.proceed(networkRequest);
    } finally {
      // If we're crashing on I/O or otherwise, don't leak the cache body.
      if (networkResponse == null && cacheCandidate != null) {
        closeQuietly(cacheCandidate.body());
      }
    }
    
    // We also have a cached response
    if (cacheResponse != null) {
      // 304 Not Modified: the cache is still valid, so merge the cached and network responses
      if (networkResponse.code() == HTTP_NOT_MODIFIED) {
        Response response = cacheResponse.newBuilder()
            .headers(combine(cacheResponse.headers(), networkResponse.headers()))
            .sentRequestAtMillis(networkResponse.sentRequestAtMillis())
            .receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
            .cacheResponse(stripBody(cacheResponse))
            .networkResponse(stripBody(networkResponse))
            .build();
        networkResponse.body().close();
        
        // Update the cache entry
        cache.trackConditionalCacheHit();
        cache.update(cacheResponse, response);
        return response;
      } else {
        closeQuietly(cacheResponse.body());
      }
    }
    
    // Reaching here means the cached response was absent or could not be used
    Response response = networkResponse.newBuilder()
        .cacheResponse(stripBody(cacheResponse))
        .networkResponse(stripBody(networkResponse))
        .build();
    // If a cache was configured on the OkHttpClient
    if (cache != null) {
      if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
        // Write the response to the local cache
        CacheRequest cacheRequest = cache.put(response);
        return cacheWritingResponse(cacheRequest, response);
      }

      if (HttpMethod.invalidatesCache(networkRequest.method())) {
        try {
          cache.remove(networkRequest);
        } catch (IOException ignored) {
          // The cache cannot be written.
        }
      }
    }
    // Return the response
    return response;
}

With the CacheInterceptor code in front of us, the earlier table can be extended:

networkRequest | cacheResponse | Strategy | Outcome
null | null | Use neither the network nor the cache | Return a synthetic 504 error response
non-null | non-null | Use both the network and the cache | Send a conditional request to validate the cache
non-null | null | Use the network only | Perform a normal network request
null | non-null | Use the cache only | Serve the cached response without touching the network

Next let's look at how okhttp actually stores the cache. Throughout the analysis we have been working with CacheInterceptor's cache field, which controls reading and writing the cache; its type is InternalCache, so let's see what that interface looks like.

/**
 * OkHttp's internal cache interface. Applications shouldn't implement this: instead use {@link
 * okhttp3.Cache}.
 */
public interface InternalCache {
  Response get(Request request) throws IOException;
  CacheRequest put(Response response) throws IOException;
  void remove(Request request) throws IOException;
  void update(Response cached, Response network);
  void trackConditionalCacheHit();
  void trackResponse(CacheStrategy cacheStrategy);
}

As the javadoc says, this is okhttp's internal cache interface; applications should not implement it but use okhttp3.Cache instead. The interface declares the read, write, update, and remove operations, so let's look at okhttp3.Cache.

public final class Cache implements Closeable, Flushable {
  final InternalCache internalCache = new InternalCache() {
    @Override public Response get(Request request) throws IOException {
      return Cache.this.get(request);
    }

    @Override public CacheRequest put(Response response) throws IOException {
      return Cache.this.put(response);
    }

    @Override public void remove(Request request) throws IOException {
      Cache.this.remove(request);
    }

    @Override public void update(Response cached, Response network) {
      Cache.this.update(cached, network);
    }

    @Override public void trackConditionalCacheHit() {
      Cache.this.trackConditionalCacheHit();
    }

    @Override public void trackResponse(CacheStrategy cacheStrategy) {
      Cache.this.trackResponse(cacheStrategy);
    }
  };
  ...
  Cache(File directory, long maxSize, FileSystem fileSystem) {
    this.cache = DiskLruCache.create(fileSystem, directory, VERSION, ENTRY_COUNT, maxSize);
  }
  ...
    @Nullable Response get(Request request) {
    String key = key(request.url());
    DiskLruCache.Snapshot snapshot;
    Entry entry;
    try {
      snapshot = cache.get(key);
      if (snapshot == null) {
        return null;
      }
    } catch (IOException e) {
      // Give up because the cache cannot be read.
      return null;
    }

    try {
      entry = new Entry(snapshot.getSource(ENTRY_METADATA));
    } catch (IOException e) {
      Util.closeQuietly(snapshot);
      return null;
    }

    Response response = entry.response(snapshot);

    if (!entry.matches(request, response)) {
      Util.closeQuietly(response.body());
      return null;
    }

    return response;
  }
  @Nullable CacheRequest put(Response response) {
    String requestMethod = response.request().method();

    if (HttpMethod.invalidatesCache(response.request().method())) {
      try {
        remove(response.request());
      } catch (IOException ignored) {
        // The cache cannot be written.
      }
      return null;
    }
    if (!requestMethod.equals("GET")) {
      // Don't cache non-GET responses. We're technically allowed to cache
      // HEAD requests and some POST requests, but the complexity of doing
      // so is high and the benefit is low.
      return null;
    }

    if (HttpHeaders.hasVaryAll(response)) {
      return null;
    }
    Entry entry = new Entry(response);
    DiskLruCache.Editor editor = null;
    try {
      editor = cache.edit(key(response.request().url()));
      if (editor == null) {
        return null;
      }
      entry.writeTo(editor);
      return new CacheRequestImpl(editor);
    } catch (IOException e) {
      abortQuietly(editor);
      return null;
    }
  }
  ...
}

We can see that Cache creates an InternalCache whose get simply delegates to Cache's own get method. The Cache constructor creates a DiskLruCache, and every cache operation in Cache ultimately goes through that DiskLruCache object, so okhttp's on-disk cache is implemented with DiskLruCache.
For reasons of space I won't dig further into DiskLruCache here; interested readers can refer to Android开源框架源码鉴赏:LruCache与DiskLruCache.

Summary

To summarize what the cache interceptor does:

1. Try to load a cached response from local disk.
2. Obtain a CacheStrategy via CacheStrategy.Factory.get().
3. If neither the network nor the cache may be used, return a synthetic error response with code 504.
4. If only the network may be used, run the next interceptor in the chain and return its response.
5. If only the cache may be used, return the cached response directly without touching the network.
6. If both the network and the cache are used, decide based on the server's status code: 304 means the cache is still valid, so the cached data is returned and the cache entry refreshed; 200 means the cache can no longer be used, so the network response is returned and written to the cache.
7. Responses are only written to local disk if a cache was configured when the OkhttpClient was built (a configuration sketch follows).
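Point 7 only applies if a cache was actually configured. A minimal sketch; the directory (an Android context's cache dir) and the 10 MiB size are placeholders:

// Backed by DiskLruCache on disk; pick your own directory and size.
Cache cache = new Cache(new File(context.getCacheDir(), "http_cache"), 10L * 1024 * 1024);

OkHttpClient okHttpClient = new OkHttpClient.Builder()
        .cache(cache)
        .build();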

ConnectInterceptor

ConnectInterceptor is the connection interceptor; let's see what it does.

/** Opens a connection to the target server and proceeds to the next interceptor. */
public final class ConnectInterceptor implements Interceptor {
  @Override public Response intercept(Chain chain) throws IOException {
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    Request request = realChain.request();
    // 1. Get the StreamAllocation carried by the RealInterceptorChain
    StreamAllocation streamAllocation = realChain.streamAllocation();

    // We need the network to satisfy this request. Possibly for validating a conditional GET.
    boolean doExtensiveHealthChecks = !request.method().equals("GET");
    // 2. Obtain an HttpCodec
    HttpCodec httpCodec = streamAllocation.newStream(client, chain, doExtensiveHealthChecks);
    // 3. Obtain the RealConnection from streamAllocation
    RealConnection connection = streamAllocation.connection();
    // 4. Proceed to the next interceptor
    return realChain.proceed(request, streamAllocation, httpCodec, connection);
  }
}

This interceptor opens a connection to the target server and then proceeds to the next interceptor. Its work breaks down into four steps:

1. Get the StreamAllocation passed along by the RealInterceptorChain.
2. Call streamAllocation.newStream to obtain an HttpCodec, the object that encodes requests onto the wire and decodes responses.
3. Call streamAllocation.connection to obtain the RealConnection that performs the actual I/O.
4. Proceed to the next interceptor.
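One detail worth noting before we dig into steps 2 and 3: the connections StreamAllocation hands out come from the client's ConnectionPool (by default OkHttp 3.x keeps up to 5 idle connections alive for 5 minutes), and the pool can be tuned when the client is built. The numbers below are only an illustration, and the snippet assumes the usual java.util.concurrent.TimeUnit import:

OkHttpClient client = new OkHttpClient.Builder()
        .connectionPool(new ConnectionPool(10, 5, TimeUnit.MINUTES)) // maxIdleConnections, keep-alive duration
        .build();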

Now let's take a closer look at steps 2 and 3, starting with step 2.

// StreamAllocation -> newStream
  public HttpCodec newStream(
    OkHttpClient client, Interceptor.Chain chain, boolean doExtensiveHealthChecks) {
    int connectTimeout = chain.connectTimeoutMillis();
    int readTimeout = chain.readTimeoutMillis();
    int writeTimeout = chain.writeTimeoutMillis();
    int pingIntervalMillis = client.pingIntervalMillis();
    boolean connectionRetryEnabled = client.retryOnConnectionFailure();
    try {
      RealConnection resultConnection = findHealthyConnection(connectTimeout, readTimeout,
          writeTimeout, pingIntervalMillis, connectionRetryEnabled, doExtensiveHealthChecks); // mark 1
      HttpCodec resultCodec = resultConnection.newCodec(client, chain, this); // mark 2
      ...
    } 
    ...
  }

StreamAllocation.newStream reads the connect, read, and write timeouts from the chain, plus a couple of settings from the client. Most importantly it calls findHealthyConnection which, as the name suggests, tries to obtain a healthy connection.

  // StreamAllocation -> findHealthyConnection
  private RealConnection findHealthyConnection(int connectTimeout, int readTimeout,
      int writeTimeout, int pingIntervalMillis, boolean connectionRetryEnabled,
      boolean doExtensiveHealthChecks) throws IOException {
    while (true) {
      RealConnection candidate = findConnection(connectTimeout, readTimeout, writeTimeout,
          pingIntervalMillis, connectionRetryEnabled);
      // A brand-new connection can skip the health checks
      synchronized (connectionPool) {
        if (candidate.successCount == 0) {
          return candidate;
        }
      }
      
      // If the connection is no longer healthy, forbid new streams on it and look for another one
      if (!candidate.isHealthy(doExtensiveHealthChecks)) {
        noNewStreams();
        continue;
      }
      return candidate;
    }
  }

That method loops, calling findConnection on each iteration; let's look at findConnection.

  /**
   * Returns a connection to host a new stream. This prefers the existing connection if it exists,
   * then the pool, finally building a new connection.
   */
  // Returns a RealConnection: prefer the existing connection, then the pool, otherwise build a new one.
  private RealConnection findConnection(int connectTimeout, int readTimeout, int writeTimeout,
      int pingIntervalMillis, boolean connectionRetryEnabled) throws IOException {
    boolean foundPooledConnection = false;
    RealConnection result = null;
    Route selectedRoute = null;
    Connection releasedConnection;
    Socket toClose;
    synchronized (connectionPool) {
      if (released) throw new IllegalStateException("released");
      if (codec != null) throw new IllegalStateException("codec != null");
      if (canceled) throw new IOException("Canceled");
      // Try to reuse the existing connection; note it may already be restricted from creating new streams.
      releasedConnection = this.connection;
      toClose = releaseIfNoNewStreams();
      // There is an existing, usable connection
      if (this.connection != null) {
        result = this.connection;
        releasedConnection = null;
      }
      ...
      // No existing connection
      if (result == null) {
        // Try to acquire one from the connection pool
        Internal.instance.get(connectionPool, address, this, null);
        // The pool had a suitable connection
        if (connection != null) {
          foundPooledConnection = true;
          result = connection;
        } else {
          selectedRoute = route;
        }
      }
    }
    closeQuietly(toClose);
    if (releasedConnection != null) {
      eventListener.connectionReleased(call, releasedConnection);
    }
    if (foundPooledConnection) {
      eventListener.connectionAcquired(call, result);
    }
    // If we reused an existing connection or found one in the pool, we are done
    if (result != null) {
      return result;
    }
    ...
    // Nothing available, so we will create a new connection
    synchronized (connectionPool) {
      if (canceled) throw new IOException("Canceled");

      if (newRouteSelection) {
        // Get the full list of routes and retry the pool lookup with each specific route
        List<Route> routes = routeSelection.getAll();
        for (int i = 0, size = routes.size(); i < size; i++) {
          Route route = routes.get(i);
          Internal.instance.get(connectionPool, address, this, route);
          if (connection != null) {
            foundPooledConnection = true;
            result = connection;
            this.route = route;
            break;
          }
        }
      }
      if (!foundPooledConnection) {
        if (selectedRoute == null) {
          selectedRoute = routeSelection.next();
        }
        // Still nothing in the pool: create a brand-new connection
        route = selectedRoute;
        refusedStreamCount = 0;
        result = new RealConnection(connectionPool, selectedRoute);
        acquire(result, false);
      }
    }
    // If we found a pooled connection on the 2nd time around, we're done.
    if (foundPooledConnection) {
      eventListener.connectionAcquired(call, result);
      return result;
    }

    // Do TCP + TLS handshakes. This is a blocking operation.
    result.connect(connectTimeout, readTimeout, writeTimeout, pingIntervalMillis,
        connectionRetryEnabled, call, eventListener);
    routeDatabase().connected(result.route());
    Socket socket = null;
    synchronized (connectionPool) {
      reportedAcquired = true;
      // Put the new connection into the pool
      Internal.instance.put(connectionPool, result);
      ...
    }
    ...
    return result;
  }

All of the above is just to obtain a connection; once we have one, newCodec is called (mark 2 above).

  // RealConnection -> newCodec
  public HttpCodec newCodec(OkHttpClient client, Interceptor.Chain chain,
      StreamAllocation streamAllocation) throws SocketException {
    if (http2Connection != null) {
      return new Http2Codec(client, chain, streamAllocation, http2Connection);
    } else {
      socket.setSoTimeout(chain.readTimeoutMillis());
      source.timeout().timeout(chain.readTimeoutMillis(), MILLISECONDS);
      sink.timeout().timeout(chain.writeTimeoutMillis(), MILLISECONDS);
      return new Http1Codec(client, streamAllocation, source, sink);
    }
  }

newCodec checks the protocol version, because HTTP/1.1 and HTTP/2 need different codecs (Http1Codec vs Http2Codec).
Finally, let's see how the connection actually connects to the network once it has been created.

  // RealConnection -> connect
  public void connect(int connectTimeout, int readTimeout, int writeTimeout,
      int pingIntervalMillis, boolean connectionRetryEnabled, Call call,
      EventListener eventListener) {
    ...
    while (true) {
      try {
        ...
        establishProtocol(connectionSpecSelector, pingIntervalMillis, call, eventListener);
        ...
      }
    }
    ...
  }
  
  // RealConnection -> establishProtocol
  private void establishProtocol(ConnectionSpecSelector connectionSpecSelector,
      int pingIntervalMillis, Call call, EventListener eventListener) throws IOException {
    ...
    eventListener.secureConnectStart(call);
    connectTls(connectionSpecSelector);
    eventListener.secureConnectEnd(call, handshake);
    ...
  }
  
  // RealConnection -> connectTls
  private void connectTls(ConnectionSpecSelector connectionSpecSelector) throws IOException {
    Address address = route.address();
    SSLSocketFactory sslSocketFactory = address.sslSocketFactory();
    boolean success = false;
    SSLSocket sslSocket = null;
    try {
      // Create the wrapper over the connected socket.
      sslSocket = (SSLSocket) sslSocketFactory.createSocket(
          rawSocket, address.url().host(), address.url().port(), true /* autoClose */);
      // Configure the socket's ciphers, TLS versions, and extensions.
      ConnectionSpec connectionSpec = connectionSpecSelector.configureSecureSocket(sslSocket);
      if (connectionSpec.supportsTlsExtensions()) {
        Platform.get().configureTlsExtensions(
            sslSocket, address.url().host(), address.protocols());
      }

      // Force handshake. This can throw!
      sslSocket.startHandshake();
      // block for session establishment
      SSLSession sslSocketSession = sslSocket.getSession();
      Handshake unverifiedHandshake = Handshake.get(sslSocketSession);
      // Verify that the socket's certificates are acceptable for the target host.
      if (!address.hostnameVerifier().verify(address.url().host(), sslSocketSession)) {
        X509Certificate cert = (X509Certificate) unverifiedHandshake.peerCertificates().get(0);
        throw new SSLPeerUnverifiedException("Hostname " + address.url().host() + " not verified:"
            + "\n    certificate: " + CertificatePinner.pin(cert)
            + "\n    DN: " + cert.getSubjectDN().getName()
            + "\n    subjectAltNames: " + OkHostnameVerifier.allSubjectAltNames(cert));
      }
      // Success! Save the handshake and the ALPN protocol.
      String maybeProtocol = connectionSpec.supportsTlsExtensions()
          ? Platform.get().getSelectedProtocol(sslSocket)
          : null;
      socket = sslSocket;
      source = Okio.buffer(Okio.source(socket));
      sink = Okio.buffer(Okio.sink(socket));
      handshake = unverifiedHandshake;
      protocol = maybeProtocol != null
          ? Protocol.get(maybeProtocol)
          : Protocol.HTTP_1_1;
      success = true;
    } catch (AssertionError e) {
      if (Util.isAndroidGetsocknameError(e)) throw new IOException(e);
      throw e;
    } finally {
      if (sslSocket != null) {
        Platform.get().afterHandshake(sslSocket);
      }
      if (!success) {
        closeQuietly(sslSocket);
      }
    }
  }

After the connection object is created it is used to make the actual network connection, eventually reaching RealConnection.connectTls. From this method we can draw a conclusion: okhttp is built directly on raw Socket plus Okio.
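As a small aside on that conclusion, the sketch below uses nothing but a plain JDK Socket and Okio (no OkHttp) to show the same source/sink wrapping that connectTls performs on its SSL socket; the class name and target host are just examples:

import java.io.IOException;
import java.net.Socket;

import okio.BufferedSink;
import okio.BufferedSource;
import okio.Okio;

public class RawSocketDemo {
  public static void main(String[] args) throws IOException {
    try (Socket socket = new Socket("example.com", 80)) {
      // Same wrapping as RealConnection: source = Okio.buffer(Okio.source(socket)), sink likewise.
      BufferedSink sink = Okio.buffer(Okio.sink(socket));
      BufferedSource source = Okio.buffer(Okio.source(socket));

      // A hand-written HTTP/1.1 request, just to show bytes flowing through the Okio pair.
      sink.writeUtf8("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n");
      sink.flush();

      System.out.println(source.readUtf8LineStrict()); // e.g. "HTTP/1.1 200 OK"
    }
  }
}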

CallServerInterceptor

/** This is the last interceptor in the chain. It makes a network call to the server. */
public final class CallServerInterceptor implements Interceptor {
  @Override public Response intercept(Chain chain) throws IOException {
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    HttpCodec httpCodec = realChain.httpStream();
    StreamAllocation streamAllocation = realChain.streamAllocation();
    RealConnection connection = (RealConnection) realChain.connection();
    Request request = realChain.request();
    long sentRequestMillis = System.currentTimeMillis();
    realChain.eventListener().requestHeadersStart(realChain.call());
    // Write the request headers
    httpCodec.writeRequestHeaders(request);
    realChain.eventListener().requestHeadersEnd(realChain.call(), request);

    Response.Builder responseBuilder = null;
    // Check the request method and body
    if (HttpMethod.permitsRequestBody(request.method()) && request.body() != null) {
      if ("100-continue".equalsIgnoreCase(request.header("Expect"))) {
      // If there's a "Expect: 100-continue" header on the request, wait for a "HTTP/1.1 100
      // Continue" response before transmitting the request body. If we don't get that, return
      // what we did get (such as a 4xx response) without ever transmitting the request body.
        httpCodec.flushRequest();
        realChain.eventListener().responseHeadersStart(realChain.call());
        responseBuilder = httpCodec.readResponseHeaders(true);
      }
      
      if (responseBuilder == null) {
        // Write the request body if the "Expect: 100-continue" expectation was met.
        realChain.eventListener().requestBodyStart(realChain.call());
        long contentLength = request.body().contentLength();
        // Write the request body
        CountingSink requestBodyOut =
            new CountingSink(httpCodec.createRequestBody(request, contentLength));
        BufferedSink bufferedRequestBody = Okio.buffer(requestBodyOut);

        request.body().writeTo(bufferedRequestBody);
        bufferedRequestBody.close();
        realChain.eventListener()
            .requestBodyEnd(realChain.call(), requestBodyOut.successfulCount);
      } else if (!connection.isMultiplexed()) {
        // If the "Expect: 100-continue" expectation wasn't met, prevent the HTTP/1 connection
        // from being reused. Otherwise we're still obligated to transmit the request body to
        // leave the connection in a consistent state.
        streamAllocation.noNewStreams();
      }
    }
    
    httpCodec.finishRequest();

    if (responseBuilder == null) {
      realChain.eventListener().responseHeadersStart(realChain.call());
      // Read the response headers
      responseBuilder = httpCodec.readResponseHeaders(false);
    }

    Response response = responseBuilder
        .request(request)
        .handshake(streamAllocation.connection().handshake())
        .sentRequestAtMillis(sentRequestMillis)
        .receivedResponseAtMillis(System.currentTimeMillis())
        .build();
        
    int code = response.code();
    if (code == 100) {
      // server sent a 100-continue even though we did not request one.
      // try again to read the actual response
      responseBuilder = httpCodec.readResponseHeaders(false);

      response = responseBuilder
              .request(request)
              .handshake(streamAllocation.connection().handshake())
              .sentRequestAtMillis(sentRequestMillis)
              .receivedResponseAtMillis(System.currentTimeMillis())
              .build();

      code = response.code();
    }
    
    realChain.eventListener()
            .responseHeadersEnd(realChain.call(), response);
    // Attach the response body
    if (forWebSocket && code == 101) {
      // Connection is upgrading, but we need to ensure interceptors see a non-null response body.
      response = response.newBuilder()
          .body(Util.EMPTY_RESPONSE)
          .build();
    } else {
      response = response.newBuilder()
          .body(httpCodec.openResponseBody(response))
          .build();
    }
    
    if ("close".equalsIgnoreCase(response.request().header("Connection"))
        || "close".equalsIgnoreCase(response.header("Connection"))) {
      streamAllocation.noNewStreams();
    }

    if ((code == 204 || code == 205) && response.body().contentLength() > 0) {
      throw new ProtocolException(
          "HTTP " + code + " had non-zero Content-Length: " + response.body().contentLength());
    }
    // Return the response
    return response;
}

This is the last interceptor in the chain: it performs the actual exchange with the server, reads the response, and hands it back up to the interceptors above it.

Custom interceptors

As we saw at the beginning, there are two kinds of interceptors: application interceptors and network interceptors. So how do we write our own interceptor in day-to-day development, and once it is written, how do we plug it into the interceptor chain?

Creating an interceptor

For the interceptor itself we can reuse the example from the official documentation.

class LoggingInterceptor implements Interceptor {
    @Override public Response intercept(Interceptor.Chain chain) throws IOException {
        Request request = chain.request();
        long t1 = System.nanoTime();
        Log.i("LoggingInterceptor", String.format("Sending request %s on %s%n%s",
                request.url(), chain.connection(), request.headers()));

        Response response = chain.proceed(request);

        long t2 = System.nanoTime();
        Log.i("LoggingInterceptor",String.format("Received response for %s in %.1fms%n%s",
                response.request().url(), (t2 - t1) / 1e6d, response.headers()));
        return response;
    }
}

Adding an interceptor

Adding the interceptor needs a little care, because application and network interceptors are inserted at different points in the chain and via different methods: application interceptors are added with addInterceptor, network interceptors with addNetworkInterceptor.

Adding an application interceptor

OkHttpClient okHttpClient = new OkHttpClient.Builder()
                .addInterceptor(new LoggingInterceptor())
                .build();

Adding a network interceptor

OkHttpClient okHttpClient = new OkHttpClient.Builder()
                .addNetworkInterceptor(new LoggingInterceptor())
                .build();

Summary

At this point we have covered all of okhttp's interceptors; the whole chain can be summed up in a single diagram (the request travels down through the interceptors and the response travels back up).

Design patterns in okhttp

When looking at the design patterns in okhttp I will point out the code where each pattern appears; I won't analyze the patterns themselves in depth, but I will link a related article for anyone who wants to dig deeper.

Builder

The Builder pattern is the first one we meet in okhttp: it is used to create OkHttpClient and Request, as well as many other objects. Below is the code involved in creating an OkHttpClient.

public class OkHttpClient implements Cloneable, Call.Factory, WebSocket.Factory {
  OkHttpClient(Builder builder) {
    this.dispatcher = builder.dispatcher;
    ...
  }
  public static final class Builder {
    ...
  }
}
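For completeness, the same pattern as seen from the caller's side when building a Request (the URL and header are placeholders):

Request request = new Request.Builder()
        .url("https://example.com/api")
        .header("Accept", "application/json")
        .get()
        .build();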

Related article

卖热干面的启发 ---Builder 模式

Factory Method

The factory-method pattern does not show up in many places in okhttp: one is the WebSocket.Factory interface that OkHttpClient implements, the other is the Factory interface nested in Call.

public interface WebSocket {
  ...
  interface Factory {
    WebSocket newWebSocket(Request request, WebSocketListener listener);
  }
}
public interface Call extends Cloneable {
  ...
  interface Factory {
    Call newCall(Request request);
  }
}
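In practice this factory is what you call every time you start a request: OkHttpClient is the concrete Call.Factory, and newCall returns a RealCall.

OkHttpClient client = new OkHttpClient();
Call call = client.newCall(new Request.Builder().url("https://example.com/").build());
Response response = call.execute(); // or call.enqueue(callback) for an async call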

Related article

设计模式系列之「工厂方法模式」

Observer

The observer pattern is used in two main places in okhttp, EventListener and WebSocketListener, both of which observe lifecycle events.

public abstract class EventListener {
    ...
    public void callStart(Call call)
    public void connectStart(Call call, InetSocketAddress inetSocketAddress, Proxy proxy)
    ...
}
public abstract class WebSocketListener {
  ...
  public void onOpen(WebSocket webSocket, Response response)
  public void onMessage(WebSocket webSocket, String text)
  ...
}
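A minimal sketch of hooking such an observer in. callStart and callEnd are real EventListener callbacks; the println logging is just an example:

OkHttpClient client = new OkHttpClient.Builder()
        .eventListener(new EventListener() {
          @Override public void callStart(Call call) {
            System.out.println("callStart: " + call.request().url());
          }

          @Override public void callEnd(Call call) {
            System.out.println("callEnd: " + call.request().url());
          }
        })
        .build();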

Related article

设计模式系列之「观察者模式」

Singleton

The singleton is probably the pattern we know best; it appears in the Platform class. There are many ways to write a singleton, but notice why Platform uses this eager, static-final form: findPlatform() probes the runtime environment once, and that single result is shared for the lifetime of the process.

public class Platform {
  private static final Platform PLATFORM = findPlatform();
  ...
  public static Platform get() {
    return PLATFORM;
  }
  private static Platform findPlatform() {
    Platform android = AndroidPlatform.buildIfSupported();

    if (android != null) {
      return android;
    }

    if (isConscryptPreferred()) {
      Platform conscrypt = ConscryptPlatform.buildIfSupported();

      if (conscrypt != null) {
        return conscrypt;
      }
    }

    Platform jdk9 = Jdk9Platform.buildIfSupported();

    if (jdk9 != null) {
      return jdk9;
    }

    Platform jdkWithJettyBoot = JdkWithJettyBootPlatform.buildIfSupported();

    if (jdkWithJettyBoot != null) {
      return jdkWithJettyBoot;
    }

    // Probably an Oracle JDK like OpenJDK.
    return new Platform();
  }
  ...
}

Related article

Java设计模式—单例设计模式(Singleton Pattern)完全解析

Strategy

The strategy pattern can be found in okhttp's CookieJar: the client delegates cookie handling to whichever CookieJar implementation it is configured with (CookieJar.NO_COOKIES by default).

public interface CookieJar {
  CookieJar NO_COOKIES = new CookieJar() {
    @Override public void saveFromResponse(HttpUrl url, List<Cookie> cookies) {
    }

    @Override public List<Cookie> loadForRequest(HttpUrl url) {
      return Collections.emptyList();
    }
  };
  ...
}
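Swapping strategies is just a matter of passing a different implementation to the builder. A naive in-memory example (not production-grade cookie handling; java.util imports omitted):

OkHttpClient client = new OkHttpClient.Builder()
        .cookieJar(new CookieJar() {
          private final Map<String, List<Cookie>> store = new HashMap<>();

          @Override public void saveFromResponse(HttpUrl url, List<Cookie> cookies) {
            store.put(url.host(), cookies);
          }

          @Override public List<Cookie> loadForRequest(HttpUrl url) {
            List<Cookie> cookies = store.get(url.host());
            return cookies != null ? cookies : Collections.<Cookie>emptyList();
          }
        })
        .build();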

Related article

LOL设计模式之「策略模式」

Chain of Responsibility

The chain-of-responsibility pattern is the core pattern of okhttp (the interceptor chain itself), so I won't repeat the details here. It is worth thinking about where else this pattern appears in Android's own mechanisms. A stripped-down sketch of the idea follows.
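To recap what RealInterceptorChain does under the hood, here is a toy, generic version of the same index-based chain. All of the names (MiniInterceptor, MiniChain, MiniRealChain) are invented; this is not the OkHttp class, just the shape of the pattern.

import java.io.IOException;
import java.util.List;

// Each proceed() builds the "next" chain and hands it to the next handler,
// which may short-circuit or call proceed() again, exactly like OkHttp's interceptors.
interface MiniInterceptor {
  String intercept(MiniChain chain) throws IOException;
}

interface MiniChain {
  String request();
  String proceed(String request) throws IOException;
}

class MiniRealChain implements MiniChain {
  private final List<MiniInterceptor> interceptors;
  private final int index;
  private final String request;

  MiniRealChain(List<MiniInterceptor> interceptors, int index, String request) {
    this.interceptors = interceptors;
    this.index = index;
    this.request = request;
  }

  @Override public String request() {
    return request;
  }

  @Override public String proceed(String request) throws IOException {
    if (index >= interceptors.size()) throw new IOException("no interceptor produced a response");
    MiniChain next = new MiniRealChain(interceptors, index + 1, request);
    return interceptors.get(index).intercept(next);
  }
}

Calling new MiniRealChain(interceptors, 0, request).proceed(request) walks the whole list, much as RealCall's getResponseWithInterceptorChain() does starting from index 0.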

Related article

我的Java设计模式-责任链模式

Conclusion

That concludes our exploration of okhttp's interceptors and design patterns. Working through this I realized my own grasp of design patterns is still shaky, which is why the design-pattern half is not very detailed; please bear with me, as it is one of the areas I plan to study further. As always, if anything in this article is wrong or unclear, please point it out and I will be very grateful.

References

OkHttp official website
OKHttp源码解析(九):OKHTTP连接中三个"核心"RealConnection、ConnectionPool、StreamAllocation
前端也要懂Http缓存机制
Android开源框架源码鉴赏:LruCache与DiskLruCache
卖热干面的启发 ---Builder 模式
设计模式系列之「工厂方法模式」
Java设计模式—单例设计模式(Singleton Pattern)完全解析
LOL设计模式之「策略模式」
我的Java设计模式-责任链模式