thread_nomigrate(): Environmentally friendly prevention of kernel thread migration

The launch of OpenSolaris today means that, as a Solaris developer, I can take the voice that blogs.sun.com has already given me and talk not just in general terms about the areas of Solaris in which I work, but in detail and with source freely quoted and referenced as I wish!  We've come a long way - who'd have thought several years ago that employees (techies, even!) would have the freedom to discuss in public what we do for a living in the corporate world (as blogs.sun.com has delivered for some time now), and now, with OpenSolaris, not just talk in general about the subject matter but also discuss the design and implementation.  Fabulous!

I thought I'd start by describing a kernel-private interface I added in Solaris 10 which can be used to prevent, for a short time, a kernel thread from migrating between processors.  Thread migration refers to a thread changing processors - running on one processor until preemption or blocking and then resuming on a different processor.  A description of thread_nomigrate (the new interface) soon turns into a mini tour of some aspects of the dispatcher (I don't work in dispatcher land much - I just have an interest in the area, and I had a project that required this functionality).

A Quick Overview of Processor Selection

I'm not going to attempt a nitty-gritty, detailed story here - just enough for the discussion below.

The kthread_t member t_state tracks the current run state of a kernel thread.  State TS_ONPROC indicates that a thread is currently running on a processor.  This state is always preceded by state TS_RUN - runnable but not yet on a processor.  Threads in state TS_RUN are enqueued on various dispatch queues; each processor has a bunch of dispatch queues (one for every priority level) and there are other global dispatch queues such as the partition-wide preemption queue.  All enqueuing to dispatch queues is performed by the dispatcher workhorses setfrontdq and setbackdq.  It is these functions which honour processor and processor-set binding requests or call cpu_choose to select the best processor to enqueue on.  When a thread is enqueued on a dispatch queue of some processor it is nominally aimed at being run on that processor, and in most cases will be;  however idle processors may choose to run suitable threads initially dispatched to other processors. Eric Saxe has described a lot more of the operation of the dispatcher and scheduler in his opening day blog.
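
To make the role of setfrontdq and setbackdq a little more concrete, here is a highly simplified, hypothetical sketch of the processor choice they make when enqueuing a runnable thread (variable names tp, tpri and cp are illustrative only; the real functions handle priorities, partitions, preemption checks and much more):

        /*
         * Sketch only: the essential processor choice made at enqueue time.
         */
        if (tp->t_bound_cpu != NULL)
                cp = tp->t_bound_cpu;           /* honour an existing binding */
        else
                cp = cpu_choose(tp, tpri);      /* pick the "best" cpu for tp */
        /* ... enqueue tp at priority tpri on one of cp's dispatch queues ... */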

Requirements for "Weak Binding"

There were already a number of ways of avoiding migration (for a thread not already permanently bound, such as an interrupt thread):

  • Raise IPL to above LOCK_LEVEL.

    Not something you want to do for more than a moment, but it is one way to avoid being preempted and hence also to avoid migration (for as long as the state persists).  Not suitable for general use.

  • processor_bind System Call.

    processor_bind implements the corresponding system call, which may be used directly from applications or may result from a use of pbind(1M); a minimal userland sketch appears after this list.  It acquires cpu_lock and uses cpu_bind_{thread,process,task,project,zone,contract} depending on the arguments.  The thread-binding path locks the thread and records the new user-selected binding twice - by processor id in t_bind_cpu of the kthread structure, and by cpu structure address in t_bound_cpu - and then requeues the thread if it was waiting on a dispatch queue somewhere (thread state TS_RUN), or pokes it off cpu if it is currently on cpu (possibly not the one to which we've just bound it) to force it through the dispatcher, at which point the new binding will take effect (it will be noticed in setfrontdq/setbackdq).  The others - cpu_bind_process etc. - are built on top of cpu_bind_thread and on each other.

  • thread_affinity_set(kthread_id_t t, int cpu_id) and thread_affinity_clear(kthread_id_t).

    The artist previously known as affinity_set (and still available under that name for compatibility), this is used to request a system-specified (as opposed to userland-specified) binding; see the kernel-side sketch after this list.  Again this requires that cpu_lock be held (or it acquires it for you if cpu_id is specified as CPU_CURRENT).  It locks the indicated thread (note that it might not be curthread) and sets a hard affinity for the requested (or current) processor by incrementing t_affinitycnt and setting t_bound_cpu in the kthread structure.  The hard affinity count will prevent any processor_bind-initiated requests from succeeding.  Finally it forces the target thread through the dispatcher if necessary (so that the requested binding may be honoured).

  • kpreempt_disable() and kpreempt_enable().

    This actually prevents processor migration as a bonus side-effect of disabling preemption.  It is extremely lightweight and usable from any context (well, any in which you could ever care about migration); in particular it does not require cpu_lock at all and can be called regardless of IPL and from interrupt context.

    To prevent preemption kpreempt_disable simply increments curthread->t_preempt.  To re-enable preemption this count is decremented.  Uses may be nested, so preemption is only possible again when the count returns to zero.  When the count is decremented to zero we must also check for any preemption requests we ignored while preemption was disabled - i.e., whether cpu_kprunrun is set for the current processor - and call kpreempt synchronously now if so.  To understand how that prevents preemption you need to understand a little more of how preemption works in Solaris.  To preempt a thread running on cpu we set cpu_kprunrun for the processor it is on and "poke" that processor with a null interrupt, whereafter return-from-interrupt processing will notice the flag set and call kpreempt.  It is in kpreempt that we consult t_preempt to see if preemption has been temporarily disabled; if it has then the request is ignored for now and actioned only when preemption is re-enabled.

    Since a thread already running on one processor can only migrate to a new processor if we can get it off the current processor, disabling preemption has a bonus side-effect of preventing migration.  If, however, a thread with preemption disabled performs some operation that causes it to sleep (which would be legal but silly - why accept sleeping if you're asking not to be bumped from processor?) then it may be migrated on resume, since no part of the set{front,back}dq or cpu_choose code consults t_preempt.

    There is one big disadvantage to using kpreempt_disable.  It, errr, disables preemption, which may interfere with the dispatch latency for other threads - preemption should only ever be disabled for a tiny window so that the thread can be pushed out of the way of higher-priority threads (especially realtime threads, for which dispatch latency must be bounded).
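
The processor_bind and thread_affinity_set items above promised sketches.  First the userland side, using the documented processor_bind(2) interface; bind_me_to_cpu2() is a made-up wrapper and error handling is trimmed:

        #include <sys/types.h>
        #include <sys/processor.h>
        #include <sys/procset.h>
        #include <stdio.h>

        /*
         * Sketch: bind the calling LWP to processor 2, do some work that
         * must stay there, then clear the binding.  obind receives the
         * previous binding.
         */
        int
        bind_me_to_cpu2(void)
        {
                processorid_t obind;

                if (processor_bind(P_LWPID, P_MYID, 2, &obind) != 0) {
                        perror("processor_bind");
                        return (-1);
                }
                /* ... work that must not migrate off processor 2 ... */
                return (processor_bind(P_LWPID, P_MYID, PBIND_NONE, &obind));
        }

And the kernel-side thread_affinity_set pattern - again only a sketch of the bracketing, not any particular consumer:

        /*
         * Hard-bind the current thread to whichever cpu it is running on,
         * do the work, then release the affinity.  CPU_CURRENT asks the
         * interface to take cpu_lock on our behalf.
         */
        thread_affinity_set(curthread, CPU_CURRENT);
        /* ... work that must stay on this cpu (blocking is allowed) ... */
        thread_affinity_clear(curthread);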

Thus we already had userland-requested long-term binding to a specific processor (or processor set) via processor_bind, system-requested long-term binding to a specific processor via thread_affinity_set, and system-requested short-term "binding" (as in "don't kick me off processor") via kpreempt_disable.

I was modifying kernel bcopy, copyin, copyout and hwblkpagecopy code (see cheetah_copy.s) to add a new hardware test feature which would require that hardware-accelerated copies (bigger copies use the floating point unit and the prefetch cache to speed the copy) run on the same processor throughout the copy (even if preempted mid-copy for a while by a higher priority thread).  I could not use processor_bind (a non-starter - it's for user-specified binding), nor thread_affinity_set, which requires cpu_lock (bcopy(9F) can be called from interrupt context, including high-level interrupt).  That left kpreempt_disable which, although beautifully light-weight, could not be used for more than a moment without introducing realtime dispatch glitches - and copies (although accelerated) can be very large.  I needed a thread_nomigrate which would stop a kernel thread from migrating from the current processor (whichever you happened to be on when called) but would still allow the thread to be preempted, which was reasonably light-weight (copy code is performance critical), and which had few restrictions on caller context (no more than copy code itself).  Sounded simple enough!
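
To show the intended usage pattern, here is a hand-waved sketch of how copy code might bracket an accelerated copy with the new pair.  The real consumer is assembler in cheetah_copy.s; use_hw_copy(), hw_copy_loop() and small_copy() are made-up names for illustration only:

        if (use_hw_copy(len)) {
                thread_nomigrate();             /* stay on this cpu ... */
                hw_copy_loop(from, to, len);    /* ... but remain preemptable */
                thread_allowmigrate();          /* drop the weak binding */
        } else {
                small_copy(from, to, len);
        }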

Some Terminology

I'll refer to threads that are bound to a processor with t_bound_cpu set as being strongly bound.  The processor_bind and thread_affinity_set interfaces produce strong bindings in this sense.  This isn't traditional terminology - none was necessary - but we'll see that the new interface introduces weak binding so I had to call the existing mechanism something.

Processor Offlining

Another requirement of the proposed interface was that it must not interfere with processor offlining.  A quick look at the cpu_offline source shows that it fails if there are threads strongly bound to the target processor - it waits a short interval to allow any such bindings to drop, but if any remain thereafter (no new binding can occur while it waits, as cpu_lock is held) the offline attempt fails.  The new interface was required to work more like kpreempt_disable does - not interfere with offlining at all.  kpreempt_disable achieves this because it only briefly resists the attempt to preempt the thread with the high-priority per-cpu pause thread - cpu_offline waits until all cpus are running their pause thread, so a kpreempt_disable just makes it wait a tiny bit longer.  For the new mechanism, however, we could not use cpu_lock as a barrier to new weak bindings (as cpu_offline does for strong bindings), and since the whole point of the new mechanism is not to interfere with preemption I could not use that method either.

No Blocking Allowed

As mentioned above, kpreempt_disable does not assure no-migrate semantics if the thread voluntarily gives up cpu.  Since a sleep may take a while we don't want weak-bound threads sleeping as that would interfere with processor offlining.  So we'll outlaw sleeping.  This is no loss - if you can sleep then you can afford to use the full thread_affinity_set route.

Weak-binding Must Be Short-Term

Again to avoid interfering with processor offlining.  A weakbound thread which is preempted will necessarily be queued on the dispatch queues of the processor to which it is weakbound.  During attempted offline of a processor we will need to allow threads weakbound to that processor to drain - we must be sure that allowing threads in TS_RUN state to run a short while longer will be enough for them to complete their action and drop their weak binding.

Implementation

This turned out to be trickier than initially hoped, which explains some of the liberal commenting you'll find in the source!

void thread_nomigrate(void);

You can view the source to this new function here.  I'll discuss it in chunks below, leaving out the comments that you'll find in the source as I'll elaborate more here.

void
thread_nomigrate(void)
{
        cpu_t *cp;
        kthread_id_t t = curthread;

again:
        kpreempt_disable();
        cp = CPU;

It is the "current" cpu to which we will bind.  To nail down exactly which that is (since we may migrate at any moment!) we must first disable migration and we do this in the simplest way possible.  We must re-enable preemption before returning (and only keep it disabled for a moment).

Note that since we have preemption disabled, any strong binding requests which happen in parallel on other cpus for this thread will not be able to poke us across to the strongbound cpu (which may be different to the one we're currently on).

        if (CPU_ON_INTR(cp) || t->t_flag & T_INTR_THREAD ||
            getpil() >= DISP_LEVEL) {
                kpreempt_enable();
                return;
        }

In high-level interrupt context the caller does not own the current thread structure and so should not make changes to it.  If we are a low-level interrupt thread then we can't migrate anyway.  If we're at high IPL then we also cannot migrate.  So we need take no action; in thread_allowmigrate we must perform a corresponding test.

        if (t->t_nomigrate && t->t_weakbound_cpu && t->t_weakbound_cpu != cp) {
                if (!panicstr)
                        panic("thread_nomigrate: binding to %p but already "
                            "bound to %p", (void \*)cp,
                            (void \*)t->t_weakbound_cpu);
        }

Some sanity checking that we've not already weakbound to a different cpu.  Weakbinding is recorded by writing the cpu address to the t_weakbound_cpu member and incrementing the t_nomigrate nesting count, as we'll see below.

        thread_lock(curthread);

Prior to this point we might be racing with a competing strong binding request running on another cpu (e.g., a pbind(1M) command-line request aimed at a process that is in copy code and requesting a weak binding).  But strong binding acquires the thread lock for the target thread, so we can synchronize (without blocking) by grabbing our own thread lock.  Note that this restricts the context of callers to those for which grabbing the thread lock is appropriate.

        if (t->t_nomigrate < 0 || weakbindingbarrier && t->t_nomigrate == 0) {
                --t->t_nomigrate;
                thread_unlock(curthread);
                return;         /* with kpreempt_disable still active */
        }

This was the result of an unfortunate interaction between the initial implementation and pool rebinding (see poolbind(1M)).  Pool bindings must succeed or fail atomically - either all threads in the request are rebound or none are (as described in Andrei's blog).  The rebinding code would acquire cpu_lock (preventing further strong bindings) and check that all rebindings could succeed; but since cpu_lock does not affect weak binding it could later find that some thread refused the rebinding.  The fix involved introducing a mechanism by which weakbinding could, fleetingly, be upgraded to preemption disabling.  The weakbindingbarrier is raised and lowered by calls to weakbinding_{stop,start}.  If it is raised, or this is a nested call and we've already gone the no-preempt route for this thread, then we return with preemption disabled and signify/count this through negative counting in t_nomigrate.  The t_weakbound_cpu member will be left NULL.  Note that wimping out and selecting the stronger condition of disabling preemption to achieve no-migration semantics does not significantly undermine the goal of never interfering with dispatch latency: if you are performing pool rebinding operations you expect a glitch as threads are moved.
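
To make the barrier concrete, here is a hedged sketch (not the actual pool rebinding code) of how a caller needing atomic rebinding might bracket its work; it assumes only the weakbinding_{stop,start} calls named above and the cpu_lock usage already described, and the ordering shown is my simplification:

        mutex_enter(&cpu_lock);         /* blocks new strong bindings */
        weakbinding_stop();             /* raise weakbindingbarrier: from now on
                                         * thread_nomigrate() callers take the
                                         * no-preempt path instead of weakbinding */
        /* ... check that every rebinding can succeed, then apply them all ... */
        weakbinding_start();            /* lower the barrier again */
        mutex_exit(&cpu_lock);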

It's possible that we are running on a different cpu to the one to which we are strongbound - a strong binding request was made between the time we disabled preemption and when we acquired the thread lock.  We can still grant the weakbinding in this case, which will result in our weak binding being different to our strong binding!  This is not unhealthy as long as we allow the thread to gravitate towards its strongbound cpu as soon as the weakbinding drops (which will be soon, since it is a short-term condition).  To favour weak binding over strong we will also require some changes in setfrontdq and setbackdq.

Weakbinding requests always succeed - there is no return value to indicate failure.  However we may sometimes want to delay granting a weakbinding request until we are running on a more suitable cpu.  Recall that a weakbinding simply prevents migration during the critical section, but does not nominate a particular cpu.  If our current cpu is the subject of an offline request then we will migrate the thread to another cpu and retry the weakbinding request there.  We do this to avoid the (admittedly unlikely) case in which repeated weakbinding requests by a thread prevent the target cpu from ever offlining (remember that the strategy is that any weakbound threads waiting to run on an offline target will drop their binding if allowed to run for a moment longer - if new bindings are continually being made then that assumption is violated).

        if (cp != cpu_inmotion || t->t_nomigrate > 0 || t->t_preempt > 1 ||
            t->t_bound_cpu == cp) {
                t->t_nomigrate++;
                t->t_weakbound_cpu = cp;
                membar_producer();
                thread_unlock(curthread);
                kpreempt_enable();

We set cpu_inmotion during cpu_offline to record the target cpu.  If we're not currently on an offline target (the common case) or if we've already weakbound to this cpu (this is a nested call) or if we can't migrate away from this cpu because preemption is disabled or we're strongbound to it then go ahead and grant the weakbinding to this cpu by incrementing the nesting count and recording our weakbinding in t_weakbound_cpu (for the dispatcher).  Make these changes visible to the world before dropping the thread lock so that competing strong binding requests see the full view of the world.  Finally re-enable preemption, and we're done.

        } else {
                /*
                 * Move to another cpu before granting the request by
                 * forcing this thread through preemption code.  When we
                 * get to set{front,back}dq called from CL_PREEMPT()
                 * cpu_choose() will be used to select a cpu to queue
                 * us on - that will see cpu_inmotion and take
                 * steps to avoid returning us to this cpu.
                 */
                cp->cpu_kprunrun = 1;
                thread_unlock(curthread);
                kpreempt_enable();      /* will call preempt() */
                goto again;
        }
}

If we are the target of an offline request and are not obliged to grant the weakbinding to this cpu, then force ourselves onto another cpu.  The dispatcher will lean away from cpu_inmotion and we'll resume elsewhere and likely grant the binding there.  Who says goto can never be used?

void thread_allowmigrate(void);

This drops the weakbinding if the nesting count reduces to zero, but must also look out for the special cases made in thread_nomigrate.  Source may be viewed here.

void
thread_allowmigrate(void)
{
        kthread_id_t t = curthread;

        ASSERT(t->t_weakbound_cpu == CPU ||
            (t->t_nomigrate < 0 && t->t_preempt > 0) ||
            CPU_ON_INTR(CPU) || t->t_flag & T_INTR_THREAD ||
            getpil() >= DISP_LEVEL);

On DEBUG kernels check that all is operating as it should be. There's a story to tell here regarding cpr (checkpoint-resume power management) which I'll recount a little later.

        if (CPU_ON_INTR(CPU) || (t->t_flag & T_INTR_THREAD) ||
            getpil() >= DISP_LEVEL)
                return;

This corresponds to the beginning of thread_nomigrate for the case where we did not have to do anything to prevent migration.

        if (t->t_nomigrate < 0) {
                ++t->t_nomigrate;
                kpreempt_enable();

Negative nested counting in t_nomigrate indicates that we're resolving weakbinding requests by upgrading them to no-preemption semantics during pool rebinding.

        } else {
                kpreempt_disable();
                if (t->t_bound_cpu &&
                    t->t_weakbound_cpu != t->t_bound_cpu)
                        CPU->cpu_kprunrun = 1;
                t->t_weakbound_cpu = NULL;
                membar_producer();
                kpreempt_enable();
        }
}

If we decrement the nesting count to 0 then we clear the weak binding recorded in t_weakbound_cpu.  If we are weakbound to a different cpu to the one to which we are strongbound (as explained above) we force a trip through preempt so that we can now drop all resistance and migrate.

Changes to setfrontdq and setbackdq

As outlined above it is these two functions which select dispatch queues on which to place threads that are in a runnable state (including threads preempted from cpu).  These functions already checked for strong binding of the thread being enqueued, so they required an additional check for weak binding.  As explained above it is sometimes possible that a thread be both strong and weak bound, normally to the same cpu but sometimes for a short time to different cpus - the changes should therefore favour weak binding over strong.
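
Conceptually, the processor-choice sketch from the overview section grows one extra clause, with weak binding consulted first (again a simplified, hypothetical rendering rather than the real code):

        if (tp->t_weakbound_cpu != NULL)
                cp = tp->t_weakbound_cpu;       /* weak binding wins ... */
        else if (tp->t_bound_cpu != NULL)
                cp = tp->t_bound_cpu;           /* ... over strong binding */
        else
                cp = cpu_choose(tp, tpri);      /* otherwise pick the best cpu */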

Changes to cpu_offline

The cpu_lock is held on calling cpu_offline, and that stops further strong bindings to the target (or any) cpu while we're in cpu_offline.  Except in special circumstances (of a failing cpu) a cpu with bound threads cannot be offlined;  if there are any strongbound threads then cpu_offline performs a brief delay loop to give them a chance to unbind and then fails if any remain.  The existence of strongbound threads is checked with disp_bound_threads and disp_bound_anythreads.

To meet the requirement that weakbinding not interfere with offlining we needed a similar mechanism to prevent any further weak bindings to the target cpu and a means of allowing existing weak bindings to drain; we must do this, however, without using a mutex or similar.

The solution was to introduce cpu_inmotion, which is normally NULL but is set to the target cpu address when that cpu is being offlined.  Since this variable is not protected by any mutex, some consideration of memory ordering on multiprocessor systems is required.  We force the store to cpu_inmotion to global visibility in cpu_offline so we know that no new loads (on other cpus) will see the old value after that point (we've "raised a barrier" to weak binding); however, loads already performed on other cpus may have the old value (we're not synchronised in any way), so we have to be prepared for a thread running on the target cpu to still manage to weakbind just one last time, in which case we repeat the loop to allow weakbound threads to drain; thereafter we know no further weakbindings could have occurred, since the barrier is long since visible.  The weakbinding barrier cpu_inmotion is checked in thread_nomigrate, and a thread trying to weakbind to the cpu that is the target of an offline request will go through preemption code to first migrate to another cpu.
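
Very roughly, and with the real interleaving of strong- and weak-binding checks elided, the shape of this part of cpu_offline is something like the following sketch; the barrier variable is cpu_inmotion from the source, but the particular memory barrier and drain details shown are my simplification:

        cpu_inmotion = cp;      /* raise the barrier: new weakbind attempts avoid cp */
        membar_enter();         /* hedged: some barrier forces the store to visibility */
        /*
         * A thread that loaded the old NULL value may still weakbind to cp
         * one last time, so the drain must tolerate that: while
         * disp_bound_threads()/disp_bound_anythreads() still report bound
         * threads for cp, pause briefly and re-check before giving up.
         */
        /* ... drain loop; the offline then proceeds or fails as before ... */
        cpu_inmotion = NULL;    /* lower the barrier when done */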

A Twist In The Tail

I integrated thread_nomigrate, along with the project that first required it, into build 63 of Solaris 10.  A number of builds later a bug turned up in the case described above where a thread may be temporarily weakbound to a different cpu to the one to which it is strongbound.  In fixing that I modified the assertion test in thread_allowmigrate.  The test suite we had developed for the project was modified to cover the new case, and I put the changes back after successful testing.

Or so I thought.  ON archives routinely go for pre-integration regression testing (before being rolled up to the whole wad-of-stuff that makes a full Solaris build) and they soon turned up a test failure that had systems running DEBUG archives failing the new assertion check in thread_allowmigrate during cpr (checkpoint-resume - power management) validation tests.

Now cpr testing is on the putback checklist but I'd skipped it in the bug fix on the grounds that I couldn't possibly have affected it.  Well that was true - the newly uncovered bug was actually introduced back in the initial putback of build 63 (about 14 weeks earlier) but was now exposed by my extended assertion.

Remember that the initial consumer of the thread_nomigrate interface was to be some modified kernel hardware-accelerated copy code - bcopy in particular.  Well it turns out that cpr uses bcopy when it writes the pages of the system to disk for later restore, taking special care of some pages which may change during the checkpoint operation itself.  However it did not take any special care with regard to the kthread_t structure of the thread performing the cpr operation, and when bcopy called thread_nomigrate the thread structure for the running thread would record the current cpu address in t_weakbound_cpu and the nesting count in t_nomigrate; if the page being checkpointed/copied happened to be that containing this particular kthread_t then those values were preserved and restored on the resume operation - undoing the stores of thread_allowmigrate for this thread - effectively warping us back in time!

There's certainly a moral there: never assume you understand all the interactions of the various elements of the OS, and do perform all required regression testing no matter how unrelated it seems at the time!  I just required the humiliation of the "fix" being backed out to remind me of this.

Comments:

I have a question.  When a thread migrates to another CPU which is not in its home lgroup, where is the context of the migrating thread saved?

Posted by Zecho on August 26, 2010 at 05:07 AM EST #

Context for a kernel thread is saved in the kthread_t structure in member
t_ctx. If it is running in userland then the userland context is saved
in the klwp_t structure - see lwp_pcb. A look around the _resume_from_idle
code area tells all (after rereading a few times, anyway).

Posted by Gavin Maltby on August 26, 2010 at 08:15 AM EST #
