varnishd(1)

NAME

   varnishd - HTTP accelerator daemon

SYNOPSIS

   varnishd  [-a  address[:port][,PROTO]]  [-b host[:port]] [-C] [-d] [-F]
   [-f config] [-h type[,options]] [-i identity]  [-j  jail[,jailoptions]]
   [-l  vsl[,vsm]]  [-M address:port] [-n name] [-P file] [-p param=value]
   [-r param[,param...]] [-S secret-file] [-s  [name=]kind[,options]]  [-T
   address[:port]] [-t TTL] [-V] [-W waiter]

DESCRIPTION

   The  varnishd daemon accepts HTTP requests from clients, passes them on
   to a backend server and caches the returned documents to better satisfy
   future requests for the same document.

OPTIONS

   -a <address[:port][,PROTO]>
          Listen  for  client  requests on the specified address and port.
          The  address  can  be  a  host  name  ("localhost"),   an   IPv4
          dotted-quad ("127.0.0.1"), or an IPv6 address enclosed in square
          brackets ("[::1]"). If address is not specified,  varnishd  will
          listen on all available IPv4 and IPv6 interfaces. If port is not
          specified, port 80 (http) is used.  An additional protocol  type
          can  be set for the listening socket with PROTO.  Valid protocol
          types are: HTTP/1  (default),  and  PROXY.   Multiple  listening
          addresses can be specified by using multiple -a arguments.
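
           For example (addresses and ports are illustrative), an instance
           can accept plain HTTP on all interfaces and PROXY traffic  from
           a local TLS terminator at the same time:

              varnishd -a :80 -a 127.0.0.1:8443,PROXY \
                  -f /etc/varnish/default.vcl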

   -b <host[:port]>
          Use  the  specified  host  as  backend  server.  If  port is not
          specified, the default is 8080.

   -C     Print VCL code compiled to C language and exit. Specify the  VCL
          file to compile with the -f option.

   -d     Enables   debugging   mode:  The  parent  process  runs  in  the
          foreground with a CLI connection on stdin/stdout, and the  child
          process   must   be  started  explicitly  with  a  CLI  command.
          Terminating the parent process will also terminate the child.

   -F     Do not fork, run in the foreground.

   -f config
          Use the specified VCL configuration file instead of the  builtin
          default.  See vcl(7) for details on VCL syntax.

           When neither a -f nor a -b argument is given, varnishd will not
           start the worker process but will only process CLI commands.
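
           For example (the VCL path and backend address are  illustrative),
           varnishd  can  be started either with a VCL file or with a single
           statically configured backend:

              varnishd -f /etc/varnish/default.vcl
              varnishd -b 127.0.0.1:8080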

   -h <type[,options]>
          Specifies the hash algorithm. See Hash Algorithm Options  for  a
          list of supported algorithms.

   -i identity
          Specify the identity of the Varnish server. This can be accessed
          using server.identity from VCL.

   -j <jail[,jailoptions]>
          Specify the jailing technology to use.

   -l <vsl[,vsm]>
          Specifies size of shmlog file. vsl is  the  space  for  the  VSL
          records  [80M]  and  vsm  is  the space for stats counters [1M].
          Scaling suffixes like 'K' and 'M' can be used up to (G)igabytes.
          Default is 81 Megabytes.
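
           For example (sizes are illustrative), to reserve 200  megabytes
           for VSL records and 2 megabytes for counters:

              varnishd -l 200M,2M ...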

   -M <address:port>
          Connect  to  this  port  and  offer  the command line interface.
           Think of it as a reverse shell.  When running with -M and there
           is no backend defined, the child process (the cache)  will  not
           start initially.

   -n name
          Specify the name for this instance.  Amongst other things,  this
          name  is  used  to  construct the name of the directory in which
          varnishd keeps temporary files  and  persistent  state.  If  the
          specified name begins with a forward slash, it is interpreted as
          the absolute path to the directory which should be used for this
          purpose.
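
           For example (the instance name is illustrative), the same  name
           is then passed to the utilities that attach to this instance:

              varnishd -n frontend1 ...
              varnishlog -n frontend1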

   -P file
          Write the PID of the process to the specified file.

   -p <param=value>
          Set the parameter specified by param to the specified value, see
          List of Parameters for details. This option can be used multiple
          times to specify multiple parameters.
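
           For example (values are illustrative):

              varnishd -p default_ttl=300 -p thread_pools=4 ...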

   -r <param[,param...]>
          Make  the  listed  parameters  read  only. This gives the system
          administrator a way to  limit  what  the  Varnish  CLI  can  do.
          Consider     making     parameters     such    as    cc_command,
          vcc_allow_inline_c  and  vmod_path  read  only  as   these   can
          potentially be used to escalate privileges from the CLI.
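
           For example, to lock down the parameters mentioned above:

              varnishd -r cc_command,vcc_allow_inline_c,vmod_path ...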

   -S secret-file
          Path  to  a file containing a secret used for authorizing access
          to the management port. If not provided a  new  secret  will  be
          drawn from the system PRNG.  To disable authentication use none.

   -s <[name=]type[,options]>
          Use the specified storage backend, see Storage Backend Options.

          This  option  can  be  used  multiple  times to specify multiple
          storage files. Names are referenced in  logs,  VCL,  statistics,
          etc.
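
           For example (names, size and path  are  illustrative),  with  one
           memory backed and one file backed store:

              varnishd -s mem=malloc,256m \
                  -s disk=file,/var/lib/varnish/cache.bin,10G ...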

   -T <address[:port]>
          Offer  a management interface on the specified address and port.
          See Management Interface for a list of management commands.   To
          disable the management interface use none.
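
           For example (address, port and secret file  are  illustrative),
           offer the CLI on localhost and connect to it with varnishadm:

              varnishd -T 127.0.0.1:6082 -S /etc/varnish/secret ...
              varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret ping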

   -t TTL Specifies  the  default  time  to live (TTL) for cached objects.
          This is a  shortcut  for  specifying  the  default_ttl  run-time
          parameter.

   -V     Display the version number and exit.

   -W waiter
          Specifies the waiter type to use.

   Hash Algorithm Options
   The following hash algorithms are available:

   -h critbit
           A self-scaling tree structure.  The  default  hash  algorithm  in
          Varnish  Cache  2.1  and  onwards.  In  comparison  to  a   more
          traditional  B  tree  the  critbit  tree  is  almost  completely
          lockless. Do not change this unless you are certain what  you're
          doing.

   -h simple_list
          A  simple  doubly-linked  list.   Not recommended for production
          use.

   -h <classic[,buckets]>
          A standard hash table. The hash key is the CRC32 of the object's
          URL  modulo the size of the hash table.  Each table entry points
          to a list of elements which share the same hash key. The buckets
          parameter  specifies  the  number  of entries in the hash table.
          The default is 16383.

   Storage Backend Options
   The following storage types are available:

   -s <malloc[,size]>
          malloc is a memory based backend.

   -s <file,path[,size[,granularity]]>
          The file backend stores data in a file on disk. The file will be
          accessed using mmap.

          The  path  is  mandatory.  If  path  points  to  a  directory, a
          temporary file will be created in that directory and immediately
          unlinked.  If  path points to a non-existing file, the file will
          be created.

          If size is omitted, and path points to an existing file  with  a
          size  greater  than zero, the size of that file will be used. If
          not, an error is reported.

          Granularity sets the allocation  block  size.  Defaults  to  the
          system  page  size  or  the  filesystem block size, whichever is
          larger.

   -s <persistent,path,size>
          Persistent storage. Varnish will store objects in a  file  in  a
          manner  that  will secure the survival of most of the objects in
          the event of a planned or unplanned  shutdown  of  Varnish.  The
          persistent  storage backend has multiple issues with it and will
          likely be removed from a future version of Varnish.

   Jail Options
   Varnish jails are  a  generalization  over  various  platform  specific
   methods  to  reduce  the privileges of varnish processes. They may have
   specific options. Available jails are:

   -j solaris
           Reduce  privileges(5)  for  varnishd  and  sub-processes  to  the
          minimally  required  set. Only available on platforms which have
          the setppriv(2) call.

   -j <unix[,user=`user`][,ccgroup=`group`]>
          Default on all other platforms if  varnishd  is  either  started
          with an effective uid of 0 ("as root") or as user varnish.

          With  the unix jail technology activated, varnish will switch to
          an alternative user for subprocesses and  change  the  effective
          uid of the master process whenever possible.

           The optional user argument specifies which alternative  user  to
           use.  It defaults to varnish.

          The optional ccgroup  argument  specifies  a  group  to  add  to
          varnish  subprocesses requiring access to a c-compiler. There is
          no default.
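
           For example (user and group names  are  illustrative),  starting
           as  root  with an alternative subprocess user and a group giving
           access to the C compiler:

              varnishd -j unix,user=vcache,ccgroup=varnish ...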

   -j none
           Last resort jail choice: With jail technology none, varnish will
          run all processes with the privileges it was started with.

   Management Interface
   If  the  -T  option  was  specified, varnishd will offer a command-line
   management  interface  on  the  specified  address   and   port.    The
   recommended  way of connecting to the command-line management interface
   is through varnishadm(1).

   The commands available are documented in varnish(7).

RUN TIME PARAMETERS

   Run Time Parameter Flags
   Runtime parameters are marked with shorthand flags to  avoid  repeating
    the same text over and over in the table below.  The  meaning  of  the
    flags is:

   * experimental

      We have no solid information about good/bad/optimal values  for  this
      parameter.  Feedback with experience and observations is most welcome.

   * delayed

     This parameter can be changed on the fly, but will  not  take  effect
     immediately.

   * restart

     The  worker  process  must  be  stopped  and  restarted,  before this
     parameter takes effect.

   * reload

     The VCL programs must be reloaded for this parameter to take effect.

   * wizard

     Do not touch unless you really know what you're doing.

   * only_root

     Only works if varnishd is running as root.

   Default Value Exceptions on 32 bit Systems
   Be aware that on 32 bit systems, certain  default  values  are  reduced
   relative to the values listed below, in order to conserve VM space:

   * workspace_client: 16k

   * http_resp_size: 8k

   * http_req_size: 12k

   * gzip_stack_buffer: 4k

   * thread_pool_stack: 64k

   List of Parameters
   This  text  is  produced from the same text you will find in the CLI if
   you use the param.show command:
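
    For example, parameters can be inspected and changed at runtime through
    varnishadm (the parameter and value are illustrative):

           varnishadm param.show default_ttl
           varnishadm param.set default_ttl 300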

   accept_filter
      * Units: bool

      * Default: off

      * Flags: must_restart

   Enable kernel accept-filters (if available in the kernel).

   acceptor_sleep_decay
      * Default: 0.9

      * Minimum: 0

      * Maximum: 1

      * Flags: experimental

    If we run out of resources, such as file descriptors or worker threads,
    the acceptor will sleep between accepts.  This parameter
    (multiplicatively) reduces the sleep duration for each successful
    accept (i.e. 0.9 = reduce by 10%).

   acceptor_sleep_incr
      * Units: seconds

      * Default: 0.000

      * Minimum: 0.000

      * Maximum: 1.000

      * Flags: experimental

    If we run out of resources, such as file descriptors or worker threads,
    the acceptor will sleep between accepts.  This parameter controls how
    much longer we sleep each time we fail to accept a new connection.

   acceptor_sleep_max
      * Units: seconds

      * Default: 0.050

      * Minimum: 0.000

      * Maximum: 10.000

      * Flags: experimental

   If we run out of resources, such as file descriptors or worker threads,
   the acceptor will sleep between accepts.   This  parameter  limits  how
   long it can sleep between attempts to accept new connections.

   auto_restart
      * Units: bool

      * Default: on

   Automatically restart the child/worker process if it dies.

   backend_idle_timeout
      * Units: seconds

      * Default: 60.000

      * Minimum: 1.000

   Timeout before we close unused backend connections.

   ban_dups
      * Units: bool

      * Default: on

   Eliminate older identical bans when a new ban is added.  This saves CPU
   cycles by not comparing objects to identical bans.  This is a waste  of
   time if you have many bans which are never identical.

   ban_lurker_age
      * Units: seconds

      * Default: 60.000

      * Minimum: 0.000

   The ban lurker will ignore bans until they are this old.  When a ban is
   added, the active traffic will be tested against it as part  of  object
   lookup.  Because many applications issue bans in bursts, this parameter
   holds the ban-lurker off until the rush is over.  This should be set to
   the approximate time which a ban-burst takes.

   ban_lurker_batch
      * Default: 1000

      * Minimum: 1

   The  ban  lurker  sleeps  ${ban_lurker_sleep} after examining this many
   objects.  Use  this  to  pace  the  ban-lurker  if  it  eats  too  many
   resources.

   ban_lurker_holdoff
      * Units: seconds

      * Default: 0.010

      * Minimum: 0.000

      * Flags: experimental

   How  long  the  ban lurker sleeps when giving way to lookup due to lock
   contention.

   ban_lurker_sleep
      * Units: seconds

      * Default: 0.010

      * Minimum: 0.000

   How long the ban  lurker  sleeps  after  examining  ${ban_lurker_batch}
   objects.   Use  this  to  pace  the  ban-lurker  if  it  eats  too many
   resources.  A value of zero will disable the ban lurker entirely.

   between_bytes_timeout
      * Units: seconds

      * Default: 60.000

      * Minimum: 0.000

    We only wait for this many seconds between bytes received from the
    backend before giving up the fetch.  A value of zero means never give
    up.  VCL values, per backend or per backend request, take precedence.
    This parameter does not apply to piped requests.

   cc_command
      * Default:            "exec           gcc           -g           -O2
        -fdebug-prefix-map=/build/varnish-scDbzE/varnish-5.0.0=.     -fPIE
        -fstack-protector-strong      -Wformat     -Werror=format-security
        -fexcess-precision=standard -Wall -Werror -Wno-error=unused-result
        -pthread -fpic -shared -Wl,-x -o %o %s"

      * Flags: must_reload

   Command  used  for  compiling the C source code to a dlopen(3) loadable
   object.  Any occurrence of %s in the string will be replaced  with  the
   source file name, and %o will be replaced with the output file name.

   cli_buffer
      * Units: bytes

      * Default: 8k

      * Minimum: 4k

   Size of buffer for CLI command input.  You may need to increase this if
   you have big VCL files and use the vcl.inline CLI command.  NB: Must be
   specified with -p to have effect.

   cli_limit
      * Units: bytes

      * Default: 48k

      * Minimum: 128b

      * Maximum: 99999999b

   Maximum  size of CLI response.  If the response exceeds this limit, the
   response code will be 201  instead  of  200  and  the  last  line  will
   indicate the truncation.

   cli_timeout
      * Units: seconds

      * Default: 60.000

      * Minimum: 0.000

    Timeout for the child's replies to CLI requests from the management
    process.

   clock_skew
      * Units: seconds

      * Default: 10

      * Minimum: 0

    How much clock skew we are willing to accept between the backend and
    our own clock.

   connect_timeout
      * Units: seconds

      * Default: 3.500

      * Minimum: 0.000

   Default connection timeout for backend  connections.  We  only  try  to
   connect  to the backend for this many seconds before giving up. VCL can
   override this default value for each backend and backend request.

   critbit_cooloff
      * Units: seconds

      * Default: 180.000

      * Minimum: 60.000

      * Maximum: 254.000

      * Flags: wizard

   How long the critbit hasher keeps deleted objheads on the cooloff list.

   debug
      * Default: none

   Enable/Disable various kinds of debugging.

      none   Disable all debugging

   Use +/- prefix to set/reset individual bits:

      req_state
             VSL Request state engine

      workspace
             VSL Workspace operations

      waiter VSL Waiter internals

      waitinglist
             VSL Waitinglist events

      syncvsl
             Make VSL synchronous

      hashedge
             Edge cases in Hash

      vclrel Rapid VCL release

      lurker VSL Ban lurker

      esi_chop
             Chop ESI fetch to bits

      flush_head
             Flush after http1 head

      vtc_mode
             Varnishtest Mode

      witness
             Emit WITNESS lock records

      vsm_keep
             Keep the VSM file on restart

   default_grace
      * Units: seconds

      * Default: 10.000

      * Minimum: 0.000

      * Flags: obj_sticky

   Default grace period.  We will deliver an object this long after it has
   expired, provided another thread is attempting to get a new copy.

   default_keep
      * Units: seconds

      * Default: 0.000

      * Minimum: 0.000

      * Flags: obj_sticky

   Default  keep  period.  We will keep a useless object around this long,
   making it available for conditional backend fetches.  That  means  that
   the object will be removed from the cache at the end of ttl+grace+keep.

   default_ttl
      * Units: seconds

      * Default: 120.000

      * Minimum: 0.000

      * Flags: obj_sticky

   The  TTL  assigned  to  objects if neither the backend nor the VCL code
   assigns one.

   feature
      * Default: none

   Enable/Disable various minor features.

      none   Disable all features.

    Use +/- prefix to enable/disable individual features:

      short_panic
             Short panic message.

      wait_silo
             Wait for persistent silo.

      no_coredump
             No coredumps.

      esi_ignore_https
             Treat HTTPS as HTTP in ESI:includes

      esi_disable_xml_check
              Don't check if body looks like XML

      esi_ignore_other_elements
             Ignore non-esi XML-elements

      esi_remove_bom
             Remove UTF-8 BOM

      https_scheme
             Also split https URIs

      http2  Support HTTP/2 protocol

   fetch_chunksize
      * Units: bytes

      * Default: 16k

      * Minimum: 4k

      * Flags: experimental

    The default chunksize used by fetcher.  This should be bigger than the
    majority of objects with short TTLs.  Internal limits in the
    storage_file module make increases above 128kb a dubious idea.

   fetch_maxchunksize
      * Units: bytes

      * Default: 0.25G

      * Minimum: 64k

      * Flags: experimental

   The maximum chunksize we attempt to allocate from storage. Making  this
   too large may cause delays and storage fragmentation.

   first_byte_timeout
      * Units: seconds

      * Default: 60.000

      * Minimum: 0.000

   Default timeout for receiving first byte from backend. We only wait for
   this many seconds for the first byte before giving up.  A  value  of  0
   means  it  will never time out. VCL can override this default value for
   each backend and backend request. This  parameter  does  not  apply  to
   pipe.

   gzip_buffer
      * Units: bytes

      * Default: 32k

      * Minimum: 2k

      * Flags: experimental

    Size of malloc buffer used for gzip processing.  These buffers are used
    for in-transit data, for instance gunzip'ed data being sent to a
    client.  Making this space too small results in more overhead and more
    writes to sockets; making it too big is probably just a waste of memory.

   gzip_level
      * Default: 6

      * Minimum: 0

      * Maximum: 9

   Gzip compression level: 0=debug, 1=fast, 9=best

   gzip_memlevel
      * Default: 8

      * Minimum: 1

      * Maximum: 9

   Gzip memory level 1=slow/least, 9=fast/most compression.  Memory impact
   is 1=1k, 2=2k, ... 9=256k.

   http_gzip_support
      * Units: bool

      * Default: on

    Enable gzip support.  When enabled, Varnish requests compressed objects
    from the backend and stores them compressed.  If a client does not
    support gzip encoding, Varnish will uncompress compressed objects on
    demand.  Varnish will also rewrite the Accept-Encoding header of clients
    indicating support for gzip to:
          Accept-Encoding: gzip

   Clients that do not support gzip will have their Accept-Encoding header
   removed. For more information on how gzip is implemented please see the
   chapter on gzip in the Varnish reference.

   http_max_hdr
      * Units: header lines

      * Default: 64

      * Minimum: 32

      * Maximum: 65535

   Maximum    number    of    HTTP    header    lines    we    allow    in
   {req|resp|bereq|beresp}.http (obj.http is autosized to the exact number
   of  headers).   Cheap,  ~20  bytes, in terms of workspace memory.  Note
   that the first line occupies five header lines.

   http_range_support
      * Units: bool

      * Default: on

   Enable support for HTTP Range headers.

   http_req_hdr_len
      * Units: bytes

      * Default: 8k

      * Minimum: 40b

    Maximum length of any HTTP client request header we will allow.  The
    limit is inclusive of its continuation lines.

   http_req_size
      * Units: bytes

      * Default: 32k

      * Minimum: 0.25k

   Maximum number of bytes of HTTP client request we will deal with.  This
   is a limit on all bytes up to the double blank line which ends the HTTP
   request.   The  memory  for  the  request  is allocated from the client
   workspace (param: workspace_client) and this parameter limits how  much
   of that the request is allowed to take up.

   http_resp_hdr_len
      * Units: bytes

      * Default: 8k

      * Minimum: 40b

    Maximum length of any HTTP backend response header we will allow.  The
    limit is inclusive of its continuation lines.

   http_resp_size
      * Units: bytes

      * Default: 32k

      * Minimum: 0.25k

    Maximum number of bytes of HTTP backend response we will deal with.
    This is a limit on all bytes up to the double blank line which ends the
    HTTP response.  The memory for the response is allocated from the
    backend workspace (param: workspace_backend) and this parameter limits
    how much of that the response is allowed to take up.

   idle_send_timeout
      * Units: seconds

      * Default: 60.000

      * Minimum: 0.000

      * Flags: delayed

   Time to wait with no data sent. If no data has been transmitted in this
   many   seconds   the   session  is  closed.   See  setsockopt(2)  under
   SO_SNDTIMEO for more information.

   listen_depth
      * Units: connections

      * Default: 1024

      * Minimum: 0

      * Flags: must_restart

   Listen queue depth.

   lru_interval
      * Units: seconds

      * Default: 2.000

      * Minimum: 0.000

      * Flags: experimental

   Grace period before object moves on LRU list.  Objects are  only  moved
   to  the front of the LRU list if they have not been moved there already
   inside this timeout period.  This reduces the amount of lock operations
   necessary for LRU list access.

   max_esi_depth
      * Units: levels

      * Default: 5

      * Minimum: 0

   Maximum depth of esi:include processing.

   max_restarts
      * Units: restarts

      * Default: 4

      * Minimum: 0

   Upper  limit  on  how  many times a request can restart.  Be aware that
   restarts are likely to cause  a  hit  against  the  backend,  so  don't
   increase thoughtlessly.

   max_retries
      * Units: retries

      * Default: 4

      * Minimum: 0

   Upper limit on how many times a backend fetch can retry.

   nuke_limit
      * Units: allocations

      * Default: 50

      * Minimum: 0

      * Flags: experimental

    Maximum number of objects we attempt to nuke in order to make space for
    an object body.

   pcre_match_limit
      * Default: 10000

      * Minimum: 1

   The limit for the number of calls to the internal match()  function  in
   pcre_exec().

   (See: PCRE_EXTRA_MATCH_LIMIT in pcre docs.)

   This parameter limits how much CPU time regular expression matching can
   soak up.

   pcre_match_limit_recursion
      * Default: 20

      * Minimum: 1

   The recursion depth-limit  for  the  internal  match()  function  in  a
   pcre_exec().

   (See: PCRE_EXTRA_MATCH_LIMIT_RECURSION in pcre docs.)

   This  puts  an  upper  limit  on  the  amount of stack used by PCRE for
   certain classes of regular expressions.

   We have set the default value low in order to prevent crashes,  at  the
   cost of possible regexp matching failures.

   Matching  failures  will  show up in the log as VCL_Error messages with
   regexp errors -27 or -21.

   Testcase r01576 can be useful when tuning this parameter.

   ping_interval
      * Units: seconds

      * Default: 3

      * Minimum: 0

      * Flags: must_restart

   Interval between pings from parent to child.  Zero will disable pinging
   entirely, which makes it possible to attach a debugger to the child.

   pipe_timeout
      * Units: seconds

      * Default: 60.000

      * Minimum: 0.000

    Idle timeout for PIPE sessions.  If nothing has been received in either
    direction for this many seconds, the session is closed.

   pool_req
      * Default: 10,100,10

   Parameters for per worker pool request memory pool.  The three  numbers
   are:

      min_pool
             minimum size of free pool.

      max_pool
             maximum size of free pool.

      max_age
             max age of free element.

   pool_sess
      * Default: 10,100,10

   Parameters  for per worker pool session memory pool.  The three numbers
   are:

      min_pool
             minimum size of free pool.

      max_pool
             maximum size of free pool.

      max_age
             max age of free element.

   pool_vbo
      * Default: 10,100,10

   Parameters for backend object fetch memory  pool.   The  three  numbers
   are:

      min_pool
             minimum size of free pool.

      max_pool
             maximum size of free pool.

      max_age
             max age of free element.

   prefer_ipv6
      * Units: bool

      * Default: off

   Prefer  IPv6  address  when connecting to backends which have both IPv4
   and IPv6 addresses.

   rush_exponent
      * Units: requests per request

      * Default: 3

      * Minimum: 2

      * Flags: experimental

    How many parked requests we start for each completed request on the
    object.  NB: Even with the implicit delay of delivery, this parameter
    controls an exponential increase in the number of worker threads.

   send_timeout
      * Units: seconds

      * Default: 600.000

      * Minimum: 0.000

      * Flags: delayed

   Send timeout for client connections. If the HTTP response  hasn't  been
   transmitted   in   this  many  seconds  the  session  is  closed.   See
   setsockopt(2) under SO_SNDTIMEO for more information.

   shm_reclen
      * Units: bytes

      * Default: 255b

      * Minimum: 16b

      * Maximum: 4084

   Old name for vsl_reclen, use that instead.

   shortlived
      * Units: seconds

      * Default: 10.000

      * Minimum: 0.000

   Objects created with (ttl+grace+keep) shorter than this are always  put
   in transient storage.

   sigsegv_handler
      * Units: bool

      * Default: on

      * Flags: must_restart

   Install  a  signal  handler  which  tries  to dump debug information on
   segmentation faults, bus errors and abort signals.

   syslog_cli_traffic
      * Units: bool

      * Default: on

   Log all CLI traffic to syslog(LOG_INFO).

   tcp_fastopen
      * Units: bool

      * Default: off

      * Flags: must_restart

   Enable TCP Fast Open extension (if available in the kernel).

   tcp_keepalive_intvl
      * Units: seconds

      * Default: 75.000

      * Minimum: 1.000

      * Maximum: 100.000

      * Flags: experimental

   The number of seconds between TCP keep-alive probes.

   tcp_keepalive_probes
      * Units: probes

      * Default: 9

      * Minimum: 1

      * Maximum: 100

      * Flags: experimental

   The maximum number of TCP keep-alive probes to send  before  giving  up
   and  killing  the  connection if no response is obtained from the other
   end.

   tcp_keepalive_time
      * Units: seconds

      * Default: 7200.000

      * Minimum: 1.000

      * Maximum: 7200.000

      * Flags: experimental

   The number of seconds a connection needs to be idle before  TCP  begins
   sending out keep-alive probes.

   thread_pool_add_delay
      * Units: seconds

      * Default: 0.000

      * Minimum: 0.000

      * Flags: experimental

   Wait at least this long after creating a thread.

   Some  (buggy)  systems  may  need  a  short  (sub-second) delay between
   creating threads.  Set this to  a  few  milliseconds  if  you  see  the
   'threads_failed' counter grow too much.

   Setting this too high results in insufficient worker threads.

   thread_pool_destroy_delay
      * Units: seconds

      * Default: 1.000

      * Minimum: 0.010

      * Flags: delayed, experimental

   Wait this long after destroying a thread.

   This controls the decay of thread pools when idle(-ish).

   thread_pool_fail_delay
      * Units: seconds

      * Default: 0.200

      * Minimum: 0.010

      * Flags: experimental

   Wait at least this long after a failed thread creation before trying to
   create another thread.

    Failure to create a worker thread is often a sign that the end is
    near, because the process is running out of some resource.  This delay
    tries not to rush the end needlessly.

   If thread creation failures are a problem, check  that  thread_pool_max
   is not too high.

    It may also help to increase thread_pool_timeout and thread_pool_min,
    to reduce the rate at which threads are destroyed and later recreated.

   thread_pool_max
      * Units: threads

      * Default: 5000

      * Minimum: 100

      * Flags: delayed

   The maximum number of worker threads in each pool.

   Do not set this higher than you have to, since  excess  worker  threads
   soak  up  RAM and CPU and generally just get in the way of getting work
   done.

   thread_pool_min
      * Units: threads

      * Default: 100

      * Maximum: 5000

      * Flags: delayed

   The minimum number of worker threads in each pool.

   Increasing this may help ramp up faster from  low  load  situations  or
   when threads have expired.

   Minimum is 10 threads.

   thread_pool_stack
      * Units: bytes

      * Default: 48k

      * Minimum: 16k

      * Flags: experimental

   Worker thread stack size.  This will likely be rounded up to a multiple
   of 4k (or whatever the page_size might be) by the kernel.

   thread_pool_timeout
      * Units: seconds

      * Default: 300.000

      * Minimum: 10.000

      * Flags: delayed, experimental

   Thread idle threshold.

   Threads in excess of thread_pool_min, which have been idle for at least
   this long, will be destroyed.

   thread_pools
      * Units: pools

      * Default: 2

      * Minimum: 1

      * Maximum: 32

      * Flags: delayed, experimental

   Number of worker thread pools.

   Increasing  the  number of worker pools decreases lock contention. Each
   worker pool also has a thread accepting new connections,  so  for  very
   high  rates  of  incoming  new  connections on systems with many cores,
   increasing the worker pools may be required.

   Too many pools waste CPU and RAM resources, and more than one pool  for
   each CPU is most likely detrimental to performance.

   Can  be  increased  on the fly, but decreases require a restart to take
   effect.

   thread_queue_limit
      * Default: 20

      * Minimum: 0

      * Flags: experimental

   Permitted queue length per thread-pool.

   This sets the  number  of  requests  we  will  queue,  waiting  for  an
   available thread.  Above this limit sessions will be dropped instead of
   queued.

   thread_stats_rate
      * Units: requests

      * Default: 10

      * Minimum: 0

      * Flags: experimental

    Worker threads accumulate statistics, and dump these into the global
    stats counters if the lock is free when they finish a job
    (request/fetch etc.)  This parameter defines the maximum number of
    jobs a worker thread may handle before it is forced to dump its
    accumulated stats into the global counters.

   timeout_idle
      * Units: seconds

      * Default: 5.000

      * Minimum: 0.000

    Idle timeout for client connections.  A connection is considered idle
    until we have received the full request headers.

   timeout_linger
      * Units: seconds

      * Default: 0.050

      * Minimum: 0.000

      * Flags: experimental

   How long the worker thread lingers on an idle session before handing it
   over to the waiter.  When sessions are reused, as much as half  of  all
   reuses  happen  within  the  first  100  msec  of  the previous request
    completing.  Setting this too high results in worker threads not doing
    anything for their keep; setting it too low just means that more
    sessions take a detour around the waiter.

   vcc_allow_inline_c
      * Units: bool

      * Default: off

   Allow inline C code in VCL.

   vcc_err_unref
      * Units: bool

      * Default: on

   Unreferenced VCL objects result in error.

   vcc_unsafe_path
      * Units: bool

      * Default: on

   Allow '/' in vmod & include paths.  Allow 'import ... from ...'.

   vcl_cooldown
      * Units: seconds

      * Default: 600.000

      * Minimum: 0.000

   How long a VCL is kept warm after being  replaced  as  the  active  VCL
   (granularity approximately 30 seconds).

   vcl_dir
      * Default: /etc/varnish:/usr/share/varnish/vcl

   Old name for vcl_path, use that instead.

   vcl_path
      * Default: /etc/varnish:/usr/share/varnish/vcl

   Directory  (or colon separated list of directories) from which relative
   VCL filenames (vcl.load and include)  are  to  be  found.   By  default
   Varnish  searches VCL files in both the system configuration and shared
   data directories to allow  packages  to  drop  their  VCL  files  in  a
   standard location where relative includes would work.

   vmod_dir
      * Default: /usr/lib/x86_64-linux-gnu/varnish/vmods

   Old name for vmod_path, use that instead.

   vmod_path
      * Default: /usr/lib/x86_64-linux-gnu/varnish/vmods

   Directory  (or  colon separated list of directories) where VMODs are to
   be found.

   vsl_buffer
      * Units: bytes

      * Default: 4k

      * Minimum: 267

   Bytes of (req-/backend-)workspace dedicated to buffering  VSL  records.
   Setting  this too high costs memory, setting it too low will cause more
   VSL flushes and likely increase lock-contention on the VSL mutex.

   The minimum tracks the vsl_reclen parameter + 12 bytes.

   vsl_mask
      * Default: -VCL_trace,-WorkThread,-Hash,-VfpAcct

   Mask individual VSL messages from being logged.

      default
             Set default value

   Use +/- prefix in front of VSL tag name, to mask/unmask individual  VSL
   messages.
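
    For example (a sketch; the tag choice is illustrative), to unmask Hash
    records at startup:

           varnishd -p vsl_mask=+Hash ...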

   vsl_reclen
      * Units: bytes

      * Default: 255b

      * Minimum: 16b

      * Maximum: 4084b

   Maximum number of bytes in SHM log record.

   The maximum tracks the vsl_buffer parameter - 12 bytes.

   vsl_space
      * Units: bytes

      * Default: 80M

      * Minimum: 1M

      * Flags: must_restart

   The  amount  of  space  to  allocate for the VSL fifo buffer in the VSM
   memory segment.  If you make this too small, varnish{ncsa|log} etc will
   not  be  able  to  keep  up.   Making  it  too  large just costs memory
   resources.

   vsm_free_cooldown
      * Units: seconds

      * Default: 60.000

      * Minimum: 10.000

      * Maximum: 600.000

   How long VSM memory is kept  warm  after  a  deallocation  (granularity
   approximately 2 seconds).

   vsm_space
      * Units: bytes

      * Default: 1M

      * Minimum: 1M

      * Flags: must_restart

   The  amount  of  space to allocate for stats counters in the VSM memory
   segment.  If you make this too small, some counters will be  invisible.
   Making it too large just costs memory resources.

   workspace_backend
      * Units: bytes

      * Default: 64k

      * Minimum: 1k

      * Flags: delayed

   Bytes  of HTTP protocol workspace for backend HTTP req/resp.  If larger
   than 4k, use a multiple of 4k for VM efficiency.

   workspace_client
      * Units: bytes

      * Default: 64k

      * Minimum: 9k

      * Flags: delayed

    Bytes of HTTP protocol workspace for client HTTP req/resp.  If larger
    than 4k, use a multiple of 4k for VM efficiency.

   workspace_session
      * Units: bytes

      * Default: 0.50k

      * Minimum: 0.25k

      * Flags: delayed

   Allocation  size  for session structure and workspace.    The workspace
   is primarily used for TCP connection addresses.  If larger than 4k, use
   a multiple of 4k for VM efficiency.

   workspace_thread
      * Units: bytes

      * Default: 2k

      * Minimum: 0.25k

      * Maximum: 8k

      * Flags: delayed

    Bytes of auxiliary workspace per thread.  This workspace is used for
    certain temporary data structures during the operation of a worker
    thread.  One use is for the io-vectors for writing requests and
    responses to sockets; having too little space will result in more
    writev(2) system calls, and having too much just wastes the space.

EXIT CODES

    Varnish and bundled tools will, in most cases, exit with  one  of  the
    following codes:

   * 0 OK

   * 1 Some error which could be system-dependent and/or transient

   * 2 Serious configuration / parameter error - retrying  with  the  same
     configuration / parameters is most likely useless

    The varnishd master process may also OR its exit code:

   * with 0x20 when the varnishd child process died,

   * with  0x40 when the varnishd child process was terminated by a signal
     and

   * with 0x80 when a core was dumped.
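
    For example, when varnishd is run in the foreground with -F, the shell
    sees the master's exit status directly (the value shown is
    illustrative):

           varnishd -F -f /etc/varnish/default.vcl
           echo $?   # e.g. 192 = 0x40 | 0x80: the child was terminated
                     # by a signal and a core was dumped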

SEE ALSO

   * varnishlog(1)

   * varnishhist(1)

   * varnishncsa(1)

   * varnishstat(1)

   * varnishtop(1)

   * varnish-cli(7)

   * vcl(7)

HISTORY

   The varnishd daemon was developed by Poul-Henning Kamp  in  cooperation
   with Verdens Gang AS and Varnish Software.

    This manual page was written by Dag-Erling Smørgrav with  updates  by
    Stig Sandbeck Mathisen <[email protected]>, Nils Goroll and others.

COPYRIGHT

   This document is licensed under the same licence as Varnish itself. See
   LICENCE for details.

   * Copyright (c) 2007-2015 Varnish Software AS

                                                               VARNISHD(1)


