Re: [RFC PATCH 07/11] sched: Add proxy execution
From: Joel Fernandes
Date: Sun Nov 20 2022 - 23:00:05 EST
On Sun, Nov 20, 2022 at 08:49:22PM -0500, Joel Fernandes wrote:
> On Sun, Nov 20, 2022 at 7:22 PM Joel Fernandes <joel@xxxxxxxxxxxxxxxxx> wrote:
> >
> > Hello Dietmar,
> >
> > On Fri, Nov 04, 2022 at 06:09:26PM +0100, Dietmar Eggemann wrote:
> > > On 31/10/2022 19:00, Joel Fernandes wrote:
> > > > On Mon, Oct 31, 2022 at 05:39:45PM +0100, Dietmar Eggemann wrote:
> > > >> On 29/10/2022 05:31, Joel Fernandes wrote:
> > > >>> Hello Dietmar,
> > > >>>
> > > >>>> On Oct 24, 2022, at 6:13 AM, Dietmar Eggemann <dietmar.eggemann@xxxxxxx> wrote:
> > > >>>>
> > > >>>> On 03/10/2022 23:44, Connor O'Brien wrote:
> > > >>>>> From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> > >
> > > [...]
> > >
> > > >>>>> + rq_unpin_lock(rq, rf);
> > > >>>>> + raw_spin_rq_unlock(rq);
> > > >>>>
> > > >>>> Don't we run into rq_pin_lock()'s:
> > > >>>>
> > > >>>> SCHED_WARN_ON(rq->balance_callback && rq->balance_callback !=
> > > >>>> &balance_push_callback)
> > > >>>>
> > > >>>> by releasing rq lock between queue_balance_callback(, push_rt/dl_tasks)
> > > >>>> and __balance_callbacks()?
> > > >>>
> > > >>> Apologies, I’m a bit lost here. The code you are responding to inline does not call rq_pin_lock, it calls rq_unpin_lock. So what scenario does the warning trigger according to you?
> > > >>
> > > >> True, but the code which sneaks in between proxy()'s
> > > >> raw_spin_rq_unlock(rq) and raw_spin_rq_lock(rq) does.
> > > >>
> > > >
> > > > Got it now, thanks a lot for clarifying. Can this be fixed by do a
> > > > __balance_callbacks() at:
> > >
> > > I tried the:
> > >
> > > head = splice_balance_callbacks(rq)
> > > task_rq_unlock(rq, p, &rf);
> > > ...
> > > balance_callbacks(rq, head);
> > >
> > > separation known from __sched_setscheduler() in __schedule() (right
> > > after pick_next_task()) but it doesn't work. Lot of `BUG: scheduling
> > > while atomic:`
> >
> > How about something like the following? This should exclude concurrent
> > balance callback queues from other CPUs and let us release the rq lock early
> > in proxy(). I ran locktorture with your diff to make writer threads RT, and I
> > cannot reproduce any crash with it:
> >
> > ---8<-----------------------
> >
> > From: "Joel Fernandes (Google)" <joel@xxxxxxxxxxxxxxxxx>
> > Subject: [PATCH] Exclude balance callback queuing during proxy's migrate
> >
> > In commit 565790d28b1e ("sched: Fix balance_callback()"), it is clear that rq
> > lock needs to be held when __balance_callbacks() in schedule() is called.
> > However, it is possible that because rq lock is dropped in proxy(), another
> > CPU, say in __sched_setscheduler() can queue balancing callbacks and cause
> > issues.
> >
> > To remedy this, exclude balance callback queuing on other CPUs, during the
> > proxy().
> >
> > Reported-by: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
> > Signed-off-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
> > ---
> > kernel/sched/core.c | 15 +++++++++++++++
> > kernel/sched/sched.h | 3 +++
> > 2 files changed, 18 insertions(+)
> >
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 88a5fa34dc06..f1dac21fcd90 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -6739,6 +6739,10 @@ proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
> > p->wake_cpu = wake_cpu;
> > }
> >
> > + // Prevent other CPUs from queuing balance callbacks while we migrate
> > + // tasks in the migrate_list with the rq lock released.
> > + raw_spin_lock(&rq->balance_lock);
> > +
> > rq_unpin_lock(rq, rf);
> > raw_spin_rq_unlock(rq);
> > raw_spin_rq_lock(that_rq);
> > @@ -6758,7 +6762,18 @@ proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
> > }
> >
> > raw_spin_rq_unlock(that_rq);
> > +
> > + // This may make lockdep unhappy as we acquire rq->lock with balance_lock
> > + // held. But that should be a false positive, as the following pattern
> > + // happens only on the current CPU with interrupts disabled:
> > + // rq_lock()
> > + // balance_lock();
> > + // rq_unlock();
> > + // rq_lock();
> > raw_spin_rq_lock(rq);
>
> Hmm, I think there's still a chance of deadlock here. I need to
> rethink it a bit, but that's the idea I was going for.
Took care of that, and came up with the below. Tested with locktorture and it
survives. Thoughts?
---8<-----------------------
From: "Joel Fernandes (Google)" <joel@xxxxxxxxxxxxxxxxx>
Subject: [PATCH v2] Exclude balance callback queuing during proxy's migrate
In commit 565790d28b1e ("sched: Fix balance_callback()"), it is clear that rq
lock needs to be held when __balance_callbacks() in schedule() is called.
However, it is possible that because rq lock is dropped in proxy(), another
CPU, say in __sched_setscheduler() can queue balancing callbacks and cause
issues.
To remedy this, exclude balance callback queuing on other CPUs, during the
proxy().
Reported-by: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
Signed-off-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
---
kernel/sched/core.c | 72 ++++++++++++++++++++++++++++++++++++++++++--
kernel/sched/sched.h | 3 ++
2 files changed, 73 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 88a5fa34dc06..aba90b3dc3ef 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -633,6 +633,29 @@ struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
}
}
+/*
+ * Helper to call __task_rq_lock safely, in scenarios where we might be about to
+ * queue a balance callback on a remote CPU. That CPU might be in proxy(), and
+ * could have released its rq lock while holding balance_lock. So release rq
+ * lock in such a situation to avoid deadlock, and retry.
+ */
+struct rq *__task_rq_lock_balance(struct task_struct *p, struct rq_flags *rf)
+{
+ struct rq *rq;
+ bool locked = false;
+
+ do {
+ if (locked) {
+ __task_rq_unlock(rq, rf);
+ cpu_relax();
+ }
+ rq = __task_rq_lock(p, rf);
+ locked = true;
+ } while (raw_spin_is_locked(&rq->balance_lock));
+
+ return rq;
+}
+
/*
* task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
*/
@@ -675,6 +698,29 @@ struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
}
}
+/*
+ * Helper to call task_rq_lock safely, in scenarios where we might be about to
+ * queue a balance callback on a remote CPU. That CPU might be in proxy(), and
+ * could have released its rq lock while holding balance_lock. So release rq
+ * lock in such a situation to avoid deadlock, and retry.
+ */
+struct rq *task_rq_lock_balance(struct task_struct *p, struct rq_flags *rf)
+{
+ struct rq *rq;
+ bool locked = false;
+
+ do {
+ if (locked) {
+ task_rq_unlock(rq, p, rf);
+ cpu_relax();
+ }
+ rq = task_rq_lock(p, rf);
+ locked = true;
+ } while (raw_spin_is_locked(&rq->balance_lock));
+
+ return rq;
+}
+
/*
* RQ-clock updating methods:
*/
@@ -6739,6 +6785,12 @@ proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
p->wake_cpu = wake_cpu;
}
+ /*
+ * Prevent other CPUs from queuing balance callbacks while we migrate
+ * tasks in the migrate_list with the rq lock released.
+ */
+ raw_spin_lock(&rq->balance_lock);
+
rq_unpin_lock(rq, rf);
raw_spin_rq_unlock(rq);
raw_spin_rq_lock(that_rq);
@@ -6758,7 +6810,21 @@ proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
}
raw_spin_rq_unlock(that_rq);
+
+ /*
+ * This may make lockdep unhappy as we acquire rq->lock with
+ * balance_lock held. But that should be a false positive, as the
+ * following pattern happens only on the current CPU with interrupts
+ * disabled:
+ * rq_lock()
+ * balance_lock();
+ * rq_unlock();
+ * rq_lock();
+ */
raw_spin_rq_lock(rq);
+
+ raw_spin_unlock(&rq->balance_lock);
+
rq_repin_lock(rq, rf);
return NULL; /* Retry task selection on _this_ CPU. */
@@ -7489,7 +7555,7 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
if (p->pi_top_task == pi_task && prio == p->prio && !dl_prio(prio))
return;
- rq = __task_rq_lock(p, &rf);
+ rq = __task_rq_lock_balance(p, &rf);
update_rq_clock(rq);
/*
* Set under pi_lock && rq->lock, such that the value can be used under
@@ -8093,7 +8159,8 @@ static int __sched_setscheduler(struct task_struct *p,
* To be able to change p->policy safely, the appropriate
* runqueue lock must be held.
*/
- rq = task_rq_lock(p, &rf);
+ rq = task_rq_lock_balance(p, &rf);
+
update_rq_clock(rq);
/*
@@ -10312,6 +10379,7 @@ void __init sched_init(void)
rq = cpu_rq(i);
raw_spin_lock_init(&rq->__lock);
+ raw_spin_lock_init(&rq->balance_lock);
rq->nr_running = 0;
rq->calc_load_active = 0;
rq->calc_load_update = jiffies + LOAD_FREQ;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 354e75587fed..932d32bf9571 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1057,6 +1057,7 @@ struct rq {
unsigned long cpu_capacity_orig;
struct callback_head *balance_callback;
+ raw_spinlock_t balance_lock;
unsigned char nohz_idle_balance;
unsigned char idle_balance;
@@ -1748,6 +1749,7 @@ queue_balance_callback(struct rq *rq,
void (*func)(struct rq *rq))
{
lockdep_assert_rq_held(rq);
+ raw_spin_lock(&rq->balance_lock);
/*
* Don't (re)queue an already queued item; nor queue anything when
@@ -1760,6 +1762,7 @@ queue_balance_callback(struct rq *rq,
head->func = (void (*)(struct callback_head *))func;
head->next = rq->balance_callback;
rq->balance_callback = head;
+ raw_spin_unlock(&rq->balance_lock);
}
#define rcu_dereference_check_sched_domain(p) \
--
2.38.1.584.g0f3c55d4c2-goog