• Introduction
  • Why Apache Software Foundation
• HTTP Proxy and Caching
• Traffic Server Under The Hood
  • Asynchronous Event Processing
  • Multi-Threading
  • Why make it twice as complicated?





    Apache Traffic Server

    HTTP Proxy Server on the Edge



    Leif Hedstrom

    Apache Traffic Server Development Team

    Yahoo! Cloud Computing
    leif@yahoo-inc.com

    zwoop@apache.org





    Abstract — Apache Traffic Server[1] is a fast, scalable and feature-rich HTTP proxy and caching server. Traffic Server was originally a commercial product from Inktomi Corporation, and has been actively used inside Yahoo! for many years, as well as by many other large web sites. As of 2009, Traffic Server is an Open Source project under the Apache umbrella, and is rapidly being developed and improved upon by an active community.

    This talk will explain the details behind the Traffic Server technology: What is it? What makes it fast? Why is it scalable? And how does it differ from other HTTP proxy servers? We will also delve into the details of how a large web site can utilize this power to create services with an exceptional end-user experience.
    1. Introduction


    Apache Traffic Server is an Open Source project, originally developed as a commercial product by Inktomi, and later donated to the Apache Software Foundation (ASF) by Yahoo! Inc. Apache Traffic Server was accepted as a Top-Level Project in April 2010, after six months of incubation. Graduating as a TLP is not only a milestone for the community; it also demonstrates the commitment to Traffic Server from the ASF and from all of the contributors.

    Yahoo! has actively used the original Traffic Server software for many years, serving HTTP requests for many types of applications:



    • As a Content Delivery Network (CDN), serving static content for all of Yahoo!'s web sites

    • For connection management across long distances, providing low-latency connectivity to the users

    • As an alternative to Hardware Server Load Balancers (SLBs)

    As such, TS already is (and has been for several years) a critical component of Yahoo!'s network. By releasing Traffic Server to the Open Source community, a new tool is now readily available for anyone to use.
      1.1 Why Apache Software Foundation


    This presentation does not focus on Yahoo!'s decision to open-source Traffic Server, nor on the choices that were made during that process. However, it is useful to understand why Yahoo! chose the ASF, and what benefits we derive from being an ASF Top-Level Project.

    Being part of an already established and well-functioning Open Source community brings immediate benefits to the project:



    • We benefit from the many years of experience of ASF leadership in Open Source technology.

    • We immediately gained new contributors to the project.

    • There is a wealth of existing source code, skills and experience in the ASF community into which we can tap.

    • We are part of a reputable and well-maintained Open Source community.
    2. HTTP Proxy and Caching


    HTTP proxy servers, with or without caching, are implementations of an HTTP server with support for acting as an intermediary between a client (User-Agent) and another HTTP server (typically referred to as an Origin Server). It is quite possible, and in many cases desirable, to have multiple intermediaries in a hierarchy, and many ISPs will proxy all HTTP requests through a mandatory intermediary.

    There are three primary configurations for a proxy server:



    • Forward Proxy – This is the traditional proxy setup, typically used in corporate firewalls or by ISPs. It requires the User-Agents (e.g. browsers) to be configured and aware of the proxy server.

    • Reverse Proxy – In a reverse proxy setup, the intermediary acts as any normal HTTP server would, but proxies requests based (typically) on a specific set of mapping rules; a minimal sketch of such a rule follows this list.

    • Intercepting Proxy – This is similar to a Forward Proxy, except that the intermediary intercepts the HTTP requests from the User-Agent. This is also typically done by ISPs or corporate firewalls, but has the advantage of being transparent to the user. It is also commonly referred to as a Transparent Proxy.
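
    As a concrete illustration of the reverse proxy case, a minimal mapping rule sketch for Traffic Server's remap.config is shown below. The hostnames are hypothetical, and this is only a sketch of the idea, not a complete configuration:

        # remap.config: serve www.example.com by proxying to the origin
        map http://www.example.com/ http://origin.example.com/

        # Rewrite redirects from the origin back to the public name
        reverse_map http://origin.example.com/ http://www.example.com/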

    Any HTTP intermediary must of course function as a basic HTTP web server, so there is definite overlap in functionality between a proxy server and a regular HTTP server. Both typically provide support for access control (ACLs), SSL termination and IPv6. In addition, many HTTP intermediaries also provide features such as:

    • Finding, based on the incoming request, the most appropriate Origin Server (or another intermediary) from which to fetch the document;

    • Providing infrastructure to build redundant and resilient HTTP services;

    • Caching documents locally, for faster access and less load on Origin Servers;

    • Server Load Balancing (SLB), providing features such as sticky sessions, URL-based routing, etc.;

    • Implementing various Edge services, such as Edge Side Includes (ESI);

    • Acting as a firewall for access to HTTP content: providing content filtering, anti-spam filtering, audit logs, etc.

    Traffic Server can perform many of these tasks, but obviously not all of them. Some tasks would require changes to the internals of the code, and some would require the development of plugins. Fortunately, Traffic Server, much like Apache HTTPD, has a feature-rich plugin API for developing extensions. Efforts are being made not only to release a number of useful plugins to the Open Source community, but also to improve and extend the plugin APIs to allow for even more complex development. We are also starting to see the community contribute new Traffic Server plugins.
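
    To give a flavor of the plugin API, the following is a minimal sketch of a C plugin that hooks into the request path and logs a debug message. The tag "hello-plugin" is hypothetical, and registration boilerplate and error handling are omitted for brevity:

        #include <ts/ts.h>

        /* Called by the event system whenever our hook fires. */
        static int
        handle_request(TSCont contp, TSEvent event, void *edata)
        {
          TSHttpTxn txnp = (TSHttpTxn)edata;

          TSDebug("hello-plugin", "saw a client request");

          /* Hand the transaction back to the HTTP state machine. */
          TSHttpTxnReenable(txnp, TS_EVENT_HTTP_CONTINUE);
          return 0;
        }

        /* Entry point, invoked once when the plugin is loaded. */
        void
        TSPluginInit(int argc, const char *argv[])
        {
          /* Continuations are the event handlers of the TS API. */
          TSCont contp = TSContCreate(handle_request, NULL);

          /* Run our handler whenever a client request header is read. */
          TSHttpHookAdd(TS_HTTP_READ_REQUEST_HDR_HOOK, contp);
        }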
    3. Traffic Server Under The Hood


    Apache Traffic Server differs from most existing Open Source proxy servers. It combines two technologies commonly used for writing applications that deal with high concurrency:

    1. Asynchronous event processing

    2. Multi-threading

    By combining these two technologies, TS can draw on the benefits of each. However, this also makes the technology and the code complex, and sometimes difficult to understand. That is a serious drawback, but we feel the positives outweigh the negatives. Before we discuss the pros and cons of this decision, we will give a brief introduction to these two concepts.
      3.1 Asynchronous Event Processing


    This is actually a combination of two concepts:

    1. An event loop

    2. Asynchronous I/O

    Together, these give us what we call Asynchronous Event Processing. The event loop schedules event handlers to be executed as events trigger. The asynchronous requirement means that such handlers are not allowed to block execution waiting for I/O (or block for any other reason). Instead of blocking, an event handler must yield execution and inform the event loop that it should resume once the task can proceed without blocking. Events are also generated automatically, and dispatched appropriately, as sockets and other file descriptors change state and become ready for reading or writing (or possibly both).

    It is important to understand that an event loop model does not necessarily require all I/O to be asynchronous. In the Traffic Server case, however, this is a fundamental design requirement, and it impacts not only how the core code is written, but also how you implement plugins. A plugin cannot block on any I/O calls, as doing so would prevent the asynchronous event processor (the scheduler) from functioning properly.
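
    To make the concept concrete, here is a minimal sketch of an asynchronous event loop, written against the Linux epoll interface. It is illustrative only, not Traffic Server's actual event processor, and all error handling is omitted:

        #include <stdint.h>
        #include <fcntl.h>
        #include <sys/epoll.h>

        /* One registered event: an fd plus the handler to dispatch to. */
        typedef struct event_entry {
          int fd;
          void (*handler)(struct event_entry *self, uint32_t ready);
        } event_entry;

        /* Make the fd non-blocking and register it for read readiness. */
        void
        register_event(int epfd, event_entry *e)
        {
          fcntl(e->fd, F_SETFL, fcntl(e->fd, F_GETFL, 0) | O_NONBLOCK);

          struct epoll_event ev = {.events = EPOLLIN, .data.ptr = e};
          epoll_ctl(epfd, EPOLL_CTL_ADD, e->fd, &ev);
        }

        /* The loop proper: wait for readiness, dispatch the handlers. */
        void
        event_loop(int epfd)
        {
          struct epoll_event ready[64];

          for (;;) {
            int n = epoll_wait(epfd, ready, 64, -1); /* the only blocking call */

            for (int i = 0; i < n; i++) {
              event_entry *e = ready[i].data.ptr;
              e->handler(e, ready[i].events); /* handlers must not block */
            }
          }
        }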


      3.2 Multi-Threading


    Different Operating Systems implement multi-threading in different ways, but in general it is a mechanism that allows a process to split itself into two or more concurrently running tasks. These tasks (threads) all exist within the context of a single process. A fundamental difference between creating a thread and creating a new process is that threads are allowed to share resources that are not (commonly) shared between separate processes. As a side note, it is typically much less expensive for an OS to switch execution between threads than between processes.

    Threading is a simpler abstraction of concurrency than asynchronous event processing, but every OS has limits on how many threads it can handle. Even though switching between threads is lightweight, it still has overhead and consumes CPU. Threads also consume some additional memory, of course, although typically not as much as individual processes do.
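
    For completeness, here is a minimal sketch of the threading side in POSIX threads; a real server's workers would of course loop over shared work rather than print and exit:

        #include <pthread.h>
        #include <stdio.h>

        #define NUM_THREADS 4

        /* Each thread runs this function concurrently, sharing the
         * process address space with its siblings. */
        static void *
        worker(void *arg)
        {
          printf("worker %ld running\n", (long)arg);
          return NULL;
        }

        int
        main(void)
        {
          pthread_t threads[NUM_THREADS];

          for (long i = 0; i < NUM_THREADS; i++)
            pthread_create(&threads[i], NULL, worker, (void *)i);

          for (int i = 0; i < NUM_THREADS; i++)
            pthread_join(threads[i], NULL); /* wait for all workers */

          return 0;
        }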


      3.3 Why make it twice as complicated?


    Now that we have a basic understanding of what these concurrency mechanisms provide, let's discuss why Traffic Server decided to use both. This is an important discussion, because it will help you decide which HTTP intermediary solution you should choose.

    Multi-threading is a popular paradigm for solving concurrency issues because it is a well-understood and proven technology. It is also well-supported on most modern Operating Systems. It solves the concurrency problem well, but it does have a few problems and concerns, such as:



    • Writing multi-threaded applications is difficult, particularly if the application is to take advantage of shared memory. Lock contention, deadlocks, priority inversion and race conditions are some of the difficulties that developers will have to confront.

    • Even though threads are lightweight, they still incur context switches in the Operating System. Each thread also requires its own “private” data, particularly on the stack. As such, the more threads you have, the more context switches you will see, and memory consumption grows linearly with the number of threads.

    It is generally easier to program for asynchronous event loops, and there are many abstractions and libraries available that provide good APIs. Some examples include libevent[2] and libev[3] for C and C++ developers. (There are also bindings to both of these libraries, and others, for many higher-level languages.) Of course, there are a few limitations with event loops:

    • The event loop (and handlers) typically only supports running on a single CPU.

    • If the event loop needs to deal with a large number of events, increased latency can occur before an event is processed (by the nature of the events being queued).

    • To avoid blocking the event loop, all I/O needs to be asynchronous. This makes it slightly more difficult for programmers, particularly when integrating existing libraries (which may be synchronous by nature).

    Traffic Server decided to combine both of these techniques, thus eliminating many of the issues and limitations associated with each of them. In Traffic Server there is a small number of “worker threads”, and each such worker thread runs its own asynchronous event processor. In a typical setup, this means Traffic Server will run with only around 20-40 threads. This is configurable, but increasing the number of threads above the default (which is 3 threads per CPU core) will yield worse performance, due to the overhead incurred by the additional threads.
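
    The sketch below shows the shape of this hybrid model, reusing the event_loop() from the earlier epoll sketch: a small, fixed pool of threads, each driving its own private event loop. The per-core multiplier and the elided connection registration are illustrative assumptions, not Traffic Server's actual implementation:

        #include <pthread.h>
        #include <unistd.h>
        #include <sys/epoll.h>

        void event_loop(int epfd); /* from the earlier epoll sketch */

        #define THREADS_PER_CORE 3 /* mirrors the default described above */

        /* Each worker owns a private epoll instance and event loop, so
         * workers never contend over a single shared loop. */
        static void *
        worker(void *arg)
        {
          (void)arg;
          int epfd = epoll_create1(0);
          /* ... accept connections and register_event() them here ... */
          event_loop(epfd); /* never returns */
          return NULL;
        }

        int
        main(void)
        {
          long n = sysconf(_SC_NPROCESSORS_ONLN) * THREADS_PER_CORE;

          for (long i = 0; i < n; i++) {
            pthread_t tid;
            pthread_create(&tid, NULL, worker, NULL);
          }

          pause(); /* workers run forever; a real server would monitor them */
          return 0;
        }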



    Figure 1. Traffic Server Thread Model

    Our solution does not solve all of the problems related to concurrent processing, but it improves matters considerably, and it certainly scales very well. Care has been taken to provide flexible APIs so that plugin developers can write thread-safe and non-blocking code.

