Varnish 4: high cache hit rate but high backend traffic

Running Varnish 4.0.3 and watching varnishstat, it reports a cache hit rate of about 0.9–0.95, which I take to mean a 90–95% cache hit rate (is that wrong?). My pass numbers are low, so I don't see much uncacheable traffic. What puzzles me is that despite this, the backend servers are still doing a lot of work. Watching iftop, I see the server (which runs no other applications) transmitting at about 19 Mbit/s (responses to clients plus request headers to the backends) and receiving about 14 Mbit/s from the backend servers.

I should add that I also use a short grace period in the cache, and I do see around 1–2 requests per second that are counted as hits but still trigger a request to the backend; that is an insignificant number of requests, though.

I must be missing something that should be obvious. But what?

Edit:


    MAIN.uptime    70613   1.00 Child process uptime 
    MAIN.sess_conn   43313   0.61 Sessions accepted 
    MAIN.sess_drop    0   0.00 Sessions dropped 
    MAIN.sess_fail    0   0.00 Session accept failures 
    MAIN.sess_pipe_overflow   0   0.00 Session pipe overflow 
    MAIN.client_req_400    0   0.00 Client requests received, subject to 400 errors 
    MAIN.client_req_411    0   0.00 Client requests received, subject to 411 errors 
    MAIN.client_req_413    0   0.00 Client requests received, subject to 413 errors 
    MAIN.client_req_417    0   0.00 Client requests received, subject to 417 errors 
    MAIN.client_req    6438096  91.17 Good client requests received 
    MAIN.cache_hit    5231719  74.09 Cache hits 
    MAIN.cache_hitpass   149819   2.12 Cache hits for pass 
    MAIN.cache_miss    619678   8.78 Cache misses 
    MAIN.backend_conn    19491   0.28 Backend conn. success 
    MAIN.backend_unhealthy    0   0.00 Backend conn. not attempted 
    MAIN.backend_busy     0   0.00 Backend conn. too many 
    MAIN.backend_fail     0   0.00 Backend conn. failures 
    MAIN.backend_reuse   1279156  18.12 Backend conn. reuses 
    MAIN.backend_toolate   5046   0.07 Backend conn. was closed 
    MAIN.backend_recycle   1284221  18.19 Backend conn. recycles 
    MAIN.backend_retry     5   0.00 Backend conn. retry 
    MAIN.fetch_head     7   0.00 Fetch no body (HEAD) 
    MAIN.fetch_length    166348   2.36 Fetch with Length 
    MAIN.fetch_chunked   861097  12.19 Fetch chunked 
    MAIN.fetch_eof      0   0.00 Fetch EOF 
    MAIN.fetch_bad      0   0.00 Fetch bad T-E 
    MAIN.fetch_close     862   0.01 Fetch wanted close 
    MAIN.fetch_oldhttp     0   0.00 Fetch pre HTTP/1.1 closed 
    MAIN.fetch_zero    192834   2.73 Fetch zero len body 
    MAIN.fetch_1xx      0   0.00 Fetch no body (1xx) 
    MAIN.fetch_204      0   0.00 Fetch no body (204) 
    MAIN.fetch_304     77295   1.09 Fetch no body (304) 
    MAIN.fetch_failed     0   0.00 Fetch failed (all causes) 
    MAIN.fetch_no_thread    0   0.00 Fetch failed (no thread) 
    MAIN.pools       2   . Number of thread pools 
    MAIN.threads      400   . Total number of threads 
    MAIN.threads_limited    0   0.00 Threads hit max 
    MAIN.threads_created    400   0.01 Threads created 
    MAIN.threads_destroyed    0   0.00 Threads destroyed 
    MAIN.threads_failed    0   0.00 Thread creation failed 
    MAIN.thread_queue_len    0   . Length of session queue 
    MAIN.busy_sleep     22   0.00 Number of requests sent to sleep on busy objhdr 
    MAIN.busy_wakeup     22   0.00 Number of requests woken after sleep on busy objhdr 
    MAIN.sess_queued     0   0.00 Sessions queued for thread 
    MAIN.sess_dropped     0   0.00 Sessions dropped for thread 
    MAIN.n_object     61732   . object structs made 
    MAIN.n_vampireobject    0   . unresurrected objects 
    MAIN.n_objectcore    61791   . objectcore structs made 
    MAIN.n_objecthead    59996   . objecthead structs made 
    MAIN.n_waitinglist    100   . waitinglist structs made 
    MAIN.n_backend      8   . Number of backends 
    MAIN.n_expired    412530   . Number of expired objects 
    MAIN.n_lru_nuked     0   . Number of LRU nuked objects 
    MAIN.n_lru_moved    1938371   . Number of LRU moved objects 
    MAIN.losthdr      0   0.00 HTTP header overflows 
    MAIN.s_sess     43313   0.61 Total sessions seen 
    MAIN.s_req     6438096  91.17 Total requests seen 
    MAIN.s_pipe      192   0.00 Total pipe sessions seen 
    MAIN.s_pass     586507   8.31 Total pass-ed requests seen 
    MAIN.s_fetch     1206185  17.08 Total backend fetches initiated 
    MAIN.s_synth      0   0.00 Total synthethic responses made 
    MAIN.s_req_hdrbytes  4609529442  65278.77 Request header bytes 
    MAIN.s_req_bodybytes   615275   8.71 Request body bytes 
    MAIN.s_resp_hdrbytes  2888897421  40911.69 Response header bytes 
    MAIN.s_resp_bodybytes 95537815896 1352977.72 Response body bytes 
    MAIN.s_pipe_hdrbytes   57835   0.82 Pipe request header bytes 
    MAIN.s_pipe_in     45698   0.65 Piped bytes from client 
    MAIN.s_pipe_out    1305816  18.49 Piped bytes to client 
    MAIN.sess_closed    24747   0.35 Session Closed 
    MAIN.sess_pipeline     0   0.00 Session Pipeline 
    MAIN.sess_readahead    0   0.00 Session Read Ahead 
    MAIN.sess_herd    813362  11.52 Session herd 
    MAIN.shm_records   512140676  7252.78 SHM records 
    MAIN.shm_writes    18119050  256.60 SHM writes 
    MAIN.shm_flushes    788498  11.17 SHM flushes due to overflow 
    MAIN.shm_cont     89983   1.27 SHM MTX contention 
    MAIN.shm_cycles     277   0.00 SHM cycles through buffer 
    MAIN.sms_nreq      0   0.00 SMS allocator requests 
    MAIN.sms_nobj      0   . SMS outstanding allocations 
    MAIN.sms_nbytes     0   . SMS outstanding bytes 
    MAIN.sms_balloc     0   . SMS bytes allocated 
    MAIN.sms_bfree      0   . SMS bytes freed 
    MAIN.backend_req    1298448  18.39 Backend requests made 
    MAIN.n_vcl       1   0.00 Number of loaded VCLs in total 
    MAIN.n_vcl_avail     1   0.00 Number of VCLs available 
    MAIN.n_vcl_discard     0   0.00 Number of discarded VCLs 
    MAIN.bans       1   . Count of bans 
    MAIN.bans_completed    1   . Number of bans marked 'completed' 
    MAIN.bans_obj      0   . Number of bans using obj.* 
    MAIN.bans_req      0   . Number of bans using req.* 
    MAIN.bans_added     1   0.00 Bans added 
    MAIN.bans_deleted     0   0.00 Bans deleted 
    MAIN.bans_tested     0   0.00 Bans tested against objects (lookup) 
    MAIN.bans_obj_killed    0   0.00 Objects killed by bans (lookup) 
    MAIN.bans_lurker_tested   0   0.00 Bans tested against objects (lurker) 
    MAIN.bans_tests_tested    0   0.00 Ban tests tested against objects (lookup) 
    MAIN.bans_lurker_tests_tested   0   0.00 Ban tests tested against objects (lurker) 
    MAIN.bans_lurker_obj_killed    0   0.00 Objects killed by bans (lurker) 
    MAIN.bans_dups       0   0.00 Bans superseded by other bans 
    MAIN.bans_lurker_contention    0   0.00 Lurker gave way for lookup 
    MAIN.bans_persisted_bytes    13   . Bytes used by the persisted ban lists 
    MAIN.bans_persisted_fragmentation   0   . Extra bytes in persisted ban lists due to fragmentation 
    MAIN.n_purges        0   . Number of purge operations executed 
    MAIN.n_obj_purged       0   . Number of purged objects 
    MAIN.exp_mailed       949607  13.45 Number of objects mailed to expiry thread 
    MAIN.exp_received      949607  13.45 Number of objects received by expiry thread 
    MAIN.hcb_nolock      6001341  84.99 HCB Lookups without lock 
    MAIN.hcb_lock       447556   6.34 HCB Lookups with lock 
    MAIN.hcb_insert       447556   6.34 HCB Inserts 
    MAIN.esi_errors        0   0.00 ESI parse errors (unlock) 
    MAIN.esi_warnings       0   0.00 ESI parse warnings (unlock) 
    MAIN.vmods         2   . Loaded VMODs 
    MAIN.n_gzip        558747   7.91 Gzip operations 
    MAIN.n_gunzip       22866   0.32 Gunzip operations 
    MAIN.vsm_free       970832   . Free VSM space 
    MAIN.vsm_used       83963776   . Used VSM space 
    MAIN.vsm_cooling        0   . Cooling VSM space 
    MAIN.vsm_overflow       0   . Overflow VSM space 
    MAIN.vsm_overflowed       0   0.00 Overflowed VSM space 
    MGT.uptime        70613   1.00 Management process uptime 
    MGT.child_start        1   0.00 Child process started 
    MGT.child_exit        0   0.00 Child process normal exit 
    MGT.child_stop        0   0.00 Child process unexpected exit 
    MGT.child_died        0   0.00 Child process died (signal) 
    MGT.child_dump        0   0.00 Child process core dumped 
    MGT.child_panic        0   0.00 Child process panic 
    MEMPOOL.vbc.live       24   . In use 
    MEMPOOL.vbc.pool       10   . In Pool 
    MEMPOOL.vbc.sz_wanted      88   . Size requested 
    MEMPOOL.vbc.sz_needed      120   . Size allocated 
    MEMPOOL.vbc.allocs      19491   0.28 Allocations 
    MEMPOOL.vbc.frees      19467   0.28 Frees 
    MEMPOOL.vbc.recycle      19436   0.28 Recycled from pool 
    MEMPOOL.vbc.timeout      11998   0.17 Timed out from pool 
    MEMPOOL.vbc.toosmall       0   0.00 Too small to recycle 
    MEMPOOL.vbc.surplus       0   0.00 Too many for pool 
    MEMPOOL.vbc.randry       55   0.00 Pool ran dry 
    MEMPOOL.busyobj.live       6   . In use 
    MEMPOOL.busyobj.pool       9   . In Pool 
    MEMPOOL.busyobj.sz_wanted    65536   . Size requested 
    MEMPOOL.busyobj.sz_needed    65568   . Size allocated 
    MEMPOOL.busyobj.allocs     1298643  18.39 Allocations 
    MEMPOOL.busyobj.frees     1298637  18.39 Frees 
    MEMPOOL.busyobj.recycle    1298436  18.39 Recycled from pool 
    MEMPOOL.busyobj.timeout     41750   0.59 Timed out from pool 
    MEMPOOL.busyobj.toosmall      0   0.00 Too small to recycle 
    MEMPOOL.busyobj.surplus      0   0.00 Too many for pool 
    MEMPOOL.busyobj.randry      207   0.00 Pool ran dry 
    MEMPOOL.req0.live       4   . In use 
    MEMPOOL.req0.pool       9   . In Pool 
    MEMPOOL.req0.sz_wanted     65536   . Size requested 
    MEMPOOL.req0.sz_needed     65568   . Size allocated 
    MEMPOOL.req0.allocs      423162   5.99 Allocations 
    MEMPOOL.req0.frees      423158   5.99 Frees 
    MEMPOOL.req0.recycle     423143   5.99 Recycled from pool 
    MEMPOOL.req0.timeout      36313   0.51 Timed out from pool 
    MEMPOOL.req0.toosmall      0   0.00 Too small to recycle 
    MEMPOOL.req0.surplus       0   0.00 Too many for pool 
    MEMPOOL.req0.randry       19   0.00 Pool ran dry 
    MEMPOOL.sess0.live       4   . In use 
    MEMPOOL.sess0.pool       9   . In Pool 
    MEMPOOL.sess0.sz_wanted     384   . Size requested 
    MEMPOOL.sess0.sz_needed     416   . Size allocated 
    MEMPOOL.sess0.allocs      21655   0.31 Allocations 
    MEMPOOL.sess0.frees      21651   0.31 Frees 
    MEMPOOL.sess0.recycle     21642   0.31 Recycled from pool 
    MEMPOOL.sess0.timeout     10076   0.14 Timed out from pool 
    MEMPOOL.sess0.toosmall      0   0.00 Too small to recycle 
    MEMPOOL.sess0.surplus      0   0.00 Too many for pool 
    MEMPOOL.sess0.randry      13   0.00 Pool ran dry 
    MEMPOOL.req1.live       6   . In use 
    MEMPOOL.req1.pool       9   . In Pool 
    MEMPOOL.req1.sz_wanted     65536   . Size requested 
    MEMPOOL.req1.sz_needed     65568   . Size allocated 
    MEMPOOL.req1.allocs      418423   5.93 Allocations 
    MEMPOOL.req1.frees      418417   5.93 Frees 
    MEMPOOL.req1.recycle     418406   5.93 Recycled from pool 
    MEMPOOL.req1.timeout      35924   0.51 Timed out from pool 
    MEMPOOL.req1.toosmall      0   0.00 Too small to recycle 
    MEMPOOL.req1.surplus       0   0.00 Too many for pool 
    MEMPOOL.req1.randry       17   0.00 Pool ran dry 
    MEMPOOL.sess1.live       6   . In use 
    MEMPOOL.sess1.pool       10   . In Pool 
    MEMPOOL.sess1.sz_wanted     384   . Size requested 
    MEMPOOL.sess1.sz_needed     416   . Size allocated 
    MEMPOOL.sess1.allocs      21659   0.31 Allocations 
    MEMPOOL.sess1.frees      21653   0.31 Frees 
    MEMPOOL.sess1.recycle     21645   0.31 Recycled from pool 
    MEMPOOL.sess1.timeout     10040   0.14 Timed out from pool 
    MEMPOOL.sess1.toosmall      0   0.00 Too small to recycle 
    MEMPOOL.sess1.surplus      0   0.00 Too many for pool 
    MEMPOOL.sess1.randry      14   0.00 Pool ran dry 
    SMA.s0.c_req       1349644  19.11 Allocator requests 
    SMA.s0.c_fail        0   0.00 Allocator failures 
    SMA.s0.c_bytes      22210720975 314541.53 Bytes allocated 
    SMA.s0.c_freed      19148751042 271178.83 Bytes freed 
    SMA.s0.g_alloc       105992   . Allocations outstanding 
    SMA.s0.g_bytes      3061969933   . Bytes outstanding 
    SMA.s0.g_space      2306739187   . Bytes available 
    SMA.Transient.c_req     1343403  19.02 Allocator requests 
    SMA.Transient.c_fail       0   0.00 Allocator failures 
    SMA.Transient.c_bytes    24207322266 342816.79 Bytes allocated 
    SMA.Transient.c_freed    24201619786 342736.04 Bytes freed 
    SMA.Transient.g_alloc      7499   . Allocations outstanding 
    SMA.Transient.g_bytes     5702480   . Bytes outstanding 
    SMA.Transient.g_space      0   . Bytes available 
    VBE.backend1([IP]).vcls   1   . VCL references 
    VBE.backend1([IP]).happy 18446744073709551615   . Happy health probes 
    VBE.backend1([IP]).bereq_hdrbytes 173320400  2454.51 Request header bytes 
    VBE.backend1([IP]).bereq_bodybytes  238971   3.38 Request body bytes 
    VBE.backend1([IP]).beresp_hdrbytes 124704577  1766.03 Response header bytes 
    VBE.backend1([IP]).beresp_bodybytes 19209840560 272043.97 Response body bytes 
    VBE.backend1([IP]).pipe_hdrbytes   14288   0.20 Pipe request header bytes 
    VBE.backend1([IP]).pipe_out     8069   0.11 Piped bytes to backend 
    VBE.backend1([IP]).pipe_in    335893   4.76 Piped bytes from backend 
    VBE.backend2([IP]).vcls      1   . VCL references 
    VBE.backend2([IP]).happy   18446744073709551615   . Happy health probes 
    VBE.backend2([IP]).bereq_hdrbytes 173379423  2455.35 Request header bytes 
    VBE.backend2([IP]).bereq_bodybytes  76962   1.09 Request body bytes 
    VBE.backend2([IP]).beresp_hdrbytes 124652416  1765.29 Response header bytes 
    VBE.backend2([IP]).beresp_bodybytes 18690565655 264690.15 Response body bytes 
    VBE.backend2([IP]).pipe_hdrbytes   14629   0.21 Pipe request header bytes 
    VBE.backend2([IP]).pipe_out     8558   0.12 Piped bytes to backend 
    VBE.backend2([IP]).pipe_in    284168   4.02 Piped bytes from backend 
    VBE.backend3([IP]).vcls      1   . VCL references 
    VBE.backend3([IP]).happy   18446744073709551615   . Happy health probes 
    VBE.backend3([IP]).bereq_hdrbytes  173448768  2456.33 Request header bytes 
    VBE.backend3([IP]).bereq_bodybytes   75294   1.07 Request body bytes 
    VBE.backend3([IP]).beresp_hdrbytes  124641060  1765.13 Response header bytes 
    VBE.backend3([IP]).beresp_bodybytes 18738758550 265372.64 Response body bytes 
    VBE.backend3([IP]).pipe_hdrbytes   17940   0.25 Pipe request header bytes 
    VBE.backend3([IP]).pipe_out    16704   0.24 Piped bytes to backend 
    VBE.backend3([IP]).pipe_in    301836   4.27 Piped bytes from backend 
    VBE.backend4([IP]).vcls      1   . VCL references 
    VBE.backend4([IP]).happy   18446744073709551615   . Happy health probes 
    VBE.backend4([IP]).bereq_hdrbytes  173265357  2453.73 Request header bytes 
    VBE.backend4([IP]).bereq_bodybytes  227023   3.22 Request body bytes 
    VBE.backend4([IP]).beresp_hdrbytes  124724341  1766.31 Response header bytes 
    VBE.backend4([IP]).beresp_bodybytes 18642562615 264010.35 Response body bytes 
    VBE.backend4([IP]).pipe_hdrbytes   22537   0.32 Pipe request header bytes 
    VBE.backend4([IP]).pipe_out    12367   0.18 Piped bytes to backend 
    VBE.backend4([IP]).pipe_in    383919   5.44 Piped bytes from backend 
    LCK.sma.creat                    2   0.00 Created locks 
    LCK.sma.destroy                   0   0.00 Destroyed locks 
    LCK.sma.locks                  5272603  74.67 Lock Operations 
    LCK.hcb.locks                   837080  11.85 Lock Operations 
    LCK.hcl.creat                    0   0.00 Created locks 
    LCK.hcl.destroy                   0   0.00 Destroyed locks 
    LCK.hcl.locks                    0   0.00 Lock Operations 
    LCK.vcl.creat                    1   0.00 Created locks 
    LCK.vcl.destroy                   0   0.00 Destroyed locks 
    LCK.vcl.locks                  2604799  36.89 Lock Operations 
    LCK.sess.creat                   43306   0.61 Created locks 
    LCK.sess.destroy                  43304   0.61 Destroyed locks 
    LCK.sess.locks                   4765   0.07 Lock Operations 
    LCK.wstat.creat                   1   0.00 Created locks 
    LCK.wstat.destroy                   0   0.00 Destroyed locks 
    LCK.wstat.locks                  3421304  48.45 Lock Operations 
    LCK.wq.locks                   6449131  91.33 Lock Operations 
    LCK.objhdr.creat                  447638   6.34 Created locks 
    LCK.objhdr.destroy                 387644   5.49 Destroyed locks 
    LCK.objhdr.locks                 42615452  603.51 Lock Operations 
    LCK.exp.creat                    1   0.00 Created locks 
    LCK.exp.destroy                   0   0.00 Destroyed locks 
    LCK.exp.locks                  4462106  63.19 Lock Operations 
    LCK.lru.creat                    2   0.00 Created locks 
    LCK.lru.destroy                   0   0.00 Destroyed locks 
    LCK.lru.locks                  4259219  60.32 Lock Operations 
    LCK.cli.creat                    1   0.00 Created locks 
    LCK.cli.destroy                   0   0.00 Destroyed locks 
    LCK.cli.locks                   23549   0.33 Lock Operations 
    LCK.ban.creat                    1   0.00 Created locks 
    LCK.ban.destroy                   0   0.00 Destroyed locks 
    LCK.ban.locks                  1594008  22.57 Lock Operations 
    LCK.vbp.creat                    1   0.00 Created locks 
    LCK.vbp.destroy                   0   0.00 Destroyed locks 
    LCK.vbp.locks                   112922   1.60 Lock Operations 
    LCK.backend.locks                 2626872  37.20 Lock Operations 
    LCK.vcapace.creat                   1   0.00 Created locks 

*I had to remove some (quiet) parts of the output because of the message size limit.

Here I calculated the "other non-cached requests" as client_req - (cache_hit + cache_hitpass + cache_miss) = 436880. Together with cache_hitpass and cache_miss (or simply client_req - cache_hit), that comes to 1206377, about 18.7%. If those were heavy requests, I suppose that could explain the traffic, but I'm not quite convinced.
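For reference, here is a minimal sketch of that arithmetic, assuming the counters above were saved with varnishstat -1 to a file (varnishstat.txt is just a placeholder name):

    # Minimal sketch: recompute the request breakdown from a varnishstat -1 dump.
    # Assumes the dump was saved to "varnishstat.txt" (placeholder filename); each
    # line looks like: MAIN.client_req  6438096  91.17 Good client requests received
    counters = {}
    with open("varnishstat.txt") as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0].startswith("MAIN."):
                counters[parts[0]] = int(parts[1])

    client_req = counters["MAIN.client_req"]    # 6438096
    hits = counters["MAIN.cache_hit"]           # 5231719
    hitpass = counters["MAIN.cache_hitpass"]    # 149819
    misses = counters["MAIN.cache_miss"]        # 619678

    other = client_req - (hits + hitpass + misses)
    non_hit = client_req - hits
    print(f"other non-cached requests: {other}")                      # 436880
    print(f"non-hit total: {non_hit} ({non_hit / client_req:.1%})")   # 1206377 (18.7%)
    print(f"hit rate over hit+miss: {hits / (hits + misses):.1%}")    # ~89.4%

As far as I know, the hit-rate panel in varnishstat is computed as hits/(hits+misses), so the 0.9–0.95 shown there ignores passes and pipes; meanwhile MAIN.backend_req runs at roughly 20% of MAIN.client_req (18.39 vs. 91.17 per second), which is consistent with the breakdown above.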

Answer


Have you tried looking at the access logs on one of the backend servers? Maybe you have requests that don't come in through Varnish?

Also, as far as I know, at least in Varnish 3 the hit rate is calculated only over hits that could have come from the cache. With the standard configuration, POST requests, for example, are not counted there.

The access logs could give you some insight...
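To act on that suggestion, a minimal sketch that tallies request methods from a combined-format access log (access.log is a placeholder path) could look like this:

    # Minimal sketch: count HTTP methods in a combined-format access log,
    # e.g. to spot POSTs or other traffic that bypasses the cache.
    # "access.log" is a placeholder path.
    import re
    from collections import Counter

    # The request line is the first quoted field: "METHOD /path HTTP/1.x"
    request_re = re.compile(r'"([A-Z]+) [^"]*"')

    methods = Counter()
    with open("access.log") as f:
        for line in f:
            m = request_re.search(line)
            if m:
                methods[m.group(1)] += 1

    for method, count in methods.most_common():
        print(f"{method:8} {count}")

Comparing those totals against what Varnish reports would also show whether any traffic reaches the backends without passing through Varnish at all.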


Since I'm running iftop on the Varnish server, I know the traffic from the backend servers is headed to the Varnish server and not to some other server. I will have to check the logs for POST requests, though. Is there a way to spot those requests in varnishstat? – sonicseaweed


According to the logs, I get about 0.2 POST requests per second. (I get about 250 requests per second in total.) – sonicseaweed


I don't know of a way to spot that kind of request in varnishstat... Have you tried looking at the access log of one of the backend servers? – obe