author    Eric Dumazet <edumazet@google.com>  2016-11-23 08:44:56 -0800
committer David S. Miller <davem@davemloft.net>  2016-11-25 19:37:49 -0500
commit    f52dffe049ee11ecc02588a118fbe4092672fbaa (patch)
tree      a5532c4e1afd9e1a48a3c4a6f17f69af91344fe4 /crypto
parent    ca89fa77b4488ecf2e3f72096386e8f3a58fe2fc (diff)
download  linux-f52dffe049ee11ecc02588a118fbe4092672fbaa.tar.gz
          linux-f52dffe049ee11ecc02588a118fbe4092672fbaa.tar.xz
net: properly flush delay-freed skbs
Typical NAPI drivers use napi_consume_skb(skb) at TX completion time.
This puts the skb in a special per-cpu queue, napi_alloc_cache, to get
bulk frees.

It turns out the queue is not flushed and hits the NAPI_SKB_CACHE_SIZE
limit quite often, with skbs that were queued hundreds of usec earlier.

I measured this can take ~6000 nsec to perform one flush.

__kfree_skb_flush() can be called from two points right now :

1) From net_tx_action(), but only for skbs that were queued to
sd->completion_queue.

 -> Irrelevant for NAPI drivers in normal operation.

2) From net_rx_action(), but only under high stress or if RPS/RFS has a
pending action.

This patch changes net_rx_action() to perform the flush in all cases and
after more urgent operations have happened (like kicking remote CPUs for
RPS/RFS).

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'crypto')
0 files changed, 0 insertions, 0 deletions