block: use current active bfqq to update statistics

Use the currently active bfq-queue to update the bfq-group statistics.

The currently in-service bfq-queue can expire once the time slice or
budget allocated to it is exhausted. When that happens, BFQ selects a
new queue to serve from the queue's service tree.

If there are no more queues to serve there, it picks the next group of
queues to serve from the group service tree.

The new request is then selected from this new group and queue by
__bfq_dispatch_request(), as the sketch below illustrates.
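
For context, a condensed sketch of this dispatch path (simplified from
the actual bfq-iosched.c logic, with locking and several details
elided, so treat the exact flow as illustrative):

  static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
  {
          struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
          struct bfq_queue *bfqq;

          /*
           * May expire the old in-service queue and select a new queue
           * to serve, possibly from a different group.
           */
          bfqq = bfq_select_queue(bfqd);
          if (!bfqq)
                  return NULL;

          /* The dispatched request now belongs to the new queue/group. */
          return bfq_dispatch_rq_from_bfqq(bfqd, bfqq);
  }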

As "in_serv_queue" variable is not updated again the group
associated with "in_serv_queue" queue can be freed, if there were
no more active queues.
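
In code terms, the buggy pattern is roughly the following (a minimal
sketch of the pre-patch bfq_dispatch_request() flow; the real hunks
are below):

          struct bfq_queue *in_serv_queue = bfqd->in_service_queue;

          /* May expire in_serv_queue and move on to another queue/group. */
          rq = __bfq_dispatch_request(hctx);

          /*
           * Stale snapshot: if in_serv_queue expired above and its group
           * lost its last active queue, bfqq_group(in_serv_queue) can now
           * point at freed memory.
           */
          bfqg_stats_update_idle_time(bfqq_group(in_serv_queue));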

Consequently, picking in_serv_queue as the active queue and updating
its group statistics dereferences freed memory and triggers the kernel
panic below.

[  120.572960] Hardware name: Qualcomm Technologies, Inc. Lito MTP (DT)
[  120.572973] Workqueue: kblockd blk_mq_run_work_fn
[  120.572979] pstate: a0c00085 (NzCv daIf +PAN +UAO)
[  120.572987] pc : bfqg_stats_update_idle_time+0x14/0x50
[  120.572992] lr : bfq_dispatch_request+0x398/0x948

[  121.185249] Call trace:
[  121.187772]  bfqg_stats_update_idle_time+0x14/0x50
[  121.192700]  bfq_dispatch_request+0x398/0x948
[  121.197187]  blk_mq_do_dispatch_sched+0x84/0x118
[  121.198270] CPU7: update max cpu_capacity 1024
[  121.206504]  blk_mq_sched_dispatch_requests+0x130/0x190
[  121.211873]  __blk_mq_run_hw_queue+0xcc/0x148
[  121.216359]  blk_mq_run_work_fn+0x24/0x30
[  121.220489]  process_one_work+0x328/0x6b0
[  121.224619]  worker_thread+0x330/0x4d0
[  121.228475]  kthread+0x128/0x138
[  121.231806]  ret_from_fork+0x10/0x1c

To avoid this, always use the currently active bfq-queue, derived from
the request actually being dispatched.
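
In code terms (matching the hunks below), the queue and group to update
are now derived from the dispatched request itself:

          struct bfq_queue *bfqq = rq ? RQ_BFQQ(rq) : NULL;

          /* Skip the stats update entirely when nothing was dispatched. */
          if (bfqq && idle_timer_disabled)
                  bfqg_stats_update_idle_time(bfqq_group(bfqq));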

Change-Id: I51d5b9d2020da9f3a3a31378b06257463afd08eb
Signed-off-by: Pradeep P V K <ppvk@codeaurora.org>

@@ -3938,7 +3938,6 @@ exit:
 #if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
 static void bfq_update_dispatch_stats(struct request_queue *q,
 				      struct request *rq,
-				      struct bfq_queue *in_serv_queue,
 				      bool idle_timer_disabled)
 {
 	struct bfq_queue *bfqq = rq ? RQ_BFQQ(rq) : NULL;
@@ -3960,17 +3959,15 @@ static void bfq_update_dispatch_stats(struct request_queue *q,
 	 * bfqq_group(bfqq) exists as well.
 	 */
 	spin_lock_irq(q->queue_lock);
-	if (idle_timer_disabled)
+	if (bfqq && idle_timer_disabled)
 		/*
-		 * Since the idle timer has been disabled,
-		 * in_serv_queue contained some request when
-		 * __bfq_dispatch_request was invoked above, which
-		 * implies that rq was picked exactly from
-		 * in_serv_queue. Thus in_serv_queue == bfqq, and is
-		 * therefore guaranteed to exist because of the above
-		 * arguments.
+		 * The in-service queue and its group may have been
+		 * updated along with the request inside
+		 * __bfq_dispatch_request(), so always use the
+		 * dispatched request to derive the bfq queue and
+		 * group to update.
 		 */
-		bfqg_stats_update_idle_time(bfqq_group(in_serv_queue));
+		bfqg_stats_update_idle_time(bfqq_group(bfqq));
 	if (bfqq) {
 		struct bfq_group *bfqg = bfqq_group(bfqq);
 
@@ -3983,7 +3980,6 @@ static void bfq_update_dispatch_stats(struct request_queue *q,
 #else
 static inline void bfq_update_dispatch_stats(struct request_queue *q,
 					     struct request *rq,
-					     struct bfq_queue *in_serv_queue,
 					     bool idle_timer_disabled) {}
 #endif
 
@@ -4006,7 +4002,7 @@ static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
 	spin_unlock_irq(&bfqd->lock);
 
-	bfq_update_dispatch_stats(hctx->queue, rq, in_serv_queue,
+	bfq_update_dispatch_stats(hctx->queue, rq,
 				  idle_timer_disabled);
 
 	return rq;
 }