2007-08-27 22:53:31 +00:00
|
|
|
// The local APIC manages internal (non-I/O) interrupts.
|
|
|
|
// See Chapter 8 & Appendix C of Intel processor manual volume 3.
|
|
|
|
|
2016-09-02 12:31:13 +00:00
|
|
|
#include "param.h"
|
2006-07-12 17:19:24 +00:00
|
|
|
#include "types.h"
|
kernel SMP interruptibility fixes.
Last year, right before I sent xv6 to the printer, I changed the
SETGATE calls so that interrupts would be disabled on entry to
interrupt handlers, and I added the nlock++ / nlock-- in trap()
so that interrupts would stay disabled while the hw handlers
(but not the syscall handler) did their work. I did this because
the kernel was otherwise causing Bochs to triple-fault in SMP
mode, and time was short.
Robert observed yesterday that something was keeping the SMP
preemption user test from working. It turned out that when I
simplified the lapic code I swapped the order of two register
writes that I didn't realize were order dependent. I fixed that
and then since I had everything paged in kept going and tried
to figure out why you can't leave interrupts on during interrupt
handlers. There are a few issues.
First, there must be some way to keep interrupts from "stacking
up" and overflowing the stack. Keeping interrupts off the whole
time solves this problem -- even if the clock tick handler runs
long enough that the next clock tick is waiting when it finishes,
keeping interrupts off means that the handler runs all the way
through the "iret" before the next handler begins. This is not
really a problem unless you are putting too many prints in trap
-- if the OS is doing its job right, the handlers should run
quickly and not stack up.
Second, if xv6 had page faults, then it would be important to
keep interrupts disabled between the start of the interrupt and
the time that cr2 was read, to avoid a scenario like:
  p1 page faults [cr2 set to faulting address]
  p1 starts executing trapasm.S
  clock interrupt, p1 preempted, p2 starts executing
  p2 page faults [cr2 set to another faulting address]
  p2 starts, finishes fault handler
  p1 rescheduled, reads cr2, sees wrong fault address
Alternately p1 could be rescheduled on the other cpu, in which
case it would still see the wrong cr2. That said, I think cr2
is the only interrupt state that isn't pushed onto the interrupt
stack atomically at fault time, and xv6 doesn't care. (This isn't
entirely hypothetical -- I debugged this problem on Plan 9.)
Third, and this is the big one, it is not safe to call cpu()
unless interrupts are disabled. If interrupts are enabled then
there is no guarantee that, between the time cpu() looks up the
cpu id and the time the result gets used, the process
has not been rescheduled to the other cpu. For example, the
very commonly-used expression curproc[cpu()] (aka the macro cp)
can end up referring to the wrong proc: the code stores the
result of cpu() in %eax, gets rescheduled to the other cpu at
just the wrong instant, and then reads curproc[%eax].
We use curproc[cpu()] to get the current process a LOT. In that
particular case, if we arranged for the current curproc entry
to be addressed by %fs:0 and just use a different %fs on each
CPU, then we could safely get at curproc even with interrupts
disabled, since the read of %fs would be atomic with the read
of %fs:0. Alternately, we could have a curproc() function that
disables interrupts while computing curproc[cpu()]. I've done
that last one.
Even in the current kernel, with interrupts off on entry to trap,
interrupts are enabled inside release if there are no locks held.
Also, the scheduler's idle loop must be interruptible at times
so that the clock and disk interrupts (which might make processes
runnable) can be handled.
In addition to the rampant use of curproc[cpu()], this little
snippet from acquire is wrong on smp:
  if(cpus[cpu()].nlock == 0)
    cli();
  cpus[cpu()].nlock++;
because if interrupts are on then we might call cpu(), get
rescheduled to a different cpu, look at cpus[oldcpu].nlock, and
wrongly decide not to disable interrupts on the new cpu. The
fix is to always call cli(). But this is wrong too:
  if(holding(lock))
    panic("acquire");
  cli();
  cpus[cpu()].nlock++;
because holding looks at cpu(). The fix is:
  cli();
  if(holding(lock))
    panic("acquire");
  cpus[cpu()].nlock++;
I've done that, and I changed cpu() to complain the first time
it gets called with interrupts enabled. (It gets called too
much to complain every time.)
I added new functions splhi and spllo that are like acquire and
release but without the locking:
  void
  splhi(void)
  {
    cli();
    cpus[cpu()].nsplhi++;
  }

  void
  spllo(void)
  {
    if(--cpus[cpu()].nsplhi == 0)
      sti();
  }
and I've used those to protect other sections of code that refer
to cpu() when interrupts would otherwise be disabled (basically
just curproc and setupsegs). I also use them in acquire/release
and got rid of nlock.
I'm not thrilled with the names, but I think the concept -- a
counted cli/sti -- is sound. Having them also replaces the
nlock++/nlock-- in trap.c and main.c, which is nice.
Final note: it's still not safe to enable interrupts in
the middle of trap() between lapic_eoi and returning
to user space. I don't understand why, but we get a
fault on pop %es because 0x10 is a bad segment
descriptor (!) and then the fault faults trying to go into
a new interrupt because 0x8 is a bad segment descriptor too!
Triple fault. I haven't debugged this yet.
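The counted cli/sti described above can be modeled in isolation. This is a hedged stand-alone sketch, not the kernel code: cli/sti are stubbed with a flag standing in for the processor's IF bit, and the per-cpu cpus[cpu()].nsplhi counter is collapsed to a single variable so the logic can run portably:

```c
#include <assert.h>

/* Stub for the real cli/sti instructions: a flag standing in for
   the processor's interrupt-enable (IF) bit. */
static int ifenabled = 1;
static void cli(void) { ifenabled = 0; }
static void sti(void) { ifenabled = 1; }

/* Nesting depth of splhi calls (the per-cpu nsplhi in the text,
   collapsed to one cpu for this sketch). */
static int nsplhi;

/* Counted interrupt disable: the outermost splhi turns interrupts
   off, and only the matching outermost spllo turns them back on,
   so nested critical sections compose safely. */
static void splhi(void) { cli(); nsplhi++; }
static void spllo(void) { if(--nsplhi == 0) sti(); }
```

A nested splhi/spllo pair leaves interrupts off until the outermost spllo runs, which is why the counter can replace the old nlock++/nlock-- bookkeeping.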
2007-09-27 12:58:42 +00:00
|
|
|
#include "defs.h"
|
2014-09-12 21:18:57 +00:00
|
|
|
#include "date.h"
|
2011-08-10 03:22:48 +00:00
|
|
|
#include "memlayout.h"
|
2006-07-12 17:19:24 +00:00
|
|
|
#include "traps.h"
|
2007-09-27 12:58:42 +00:00
|
|
|
#include "mmu.h"
|
|
|
|
#include "x86.h"
|
2016-09-02 12:31:13 +00:00
|
|
|
#include "proc.h" // ncpu
|
2007-08-27 16:57:13 +00:00
|
|
|
|
|
|
|
// Local APIC registers, divided by 4 for use as uint[] indices.
|
|
|
|
#define ID (0x0020/4) // ID
|
|
|
|
#define VER (0x0030/4) // Version
|
|
|
|
#define TPR (0x0080/4) // Task Priority
|
|
|
|
#define EOI (0x00B0/4) // EOI
|
|
|
|
#define SVR (0x00F0/4) // Spurious Interrupt Vector
|
2007-08-27 22:53:31 +00:00
|
|
|
#define ENABLE 0x00000100 // Unit Enable
|
2007-08-27 16:57:13 +00:00
|
|
|
#define ESR (0x0280/4) // Error Status
|
|
|
|
#define ICRLO (0x0300/4) // Interrupt Command
|
2007-08-27 22:53:31 +00:00
|
|
|
#define INIT 0x00000500 // INIT/RESET
|
|
|
|
#define STARTUP 0x00000600 // Startup IPI
|
|
|
|
#define DELIVS 0x00001000 // Delivery status
|
|
|
|
#define ASSERT 0x00004000 // Assert interrupt (vs deassert)
|
2010-07-23 11:41:13 +00:00
|
|
|
#define DEASSERT 0x00000000
|
2007-08-27 22:53:31 +00:00
|
|
|
#define LEVEL 0x00008000 // Level triggered
|
|
|
|
#define BCAST 0x00080000 // Send to all APICs, including self.
|
2016-08-19 11:20:08 +00:00
|
|
|
#define BUSY 0x00001000
|
2010-07-23 11:41:13 +00:00
|
|
|
#define FIXED 0x00000000
|
2007-08-27 16:57:13 +00:00
|
|
|
#define ICRHI (0x0310/4) // Interrupt Command [63:32]
|
|
|
|
#define TIMER (0x0320/4) // Local Vector Table 0 (TIMER)
|
2007-08-27 22:53:31 +00:00
|
|
|
#define X1 0x0000000B // divide counts by 1
|
|
|
|
#define PERIODIC 0x00020000 // Periodic
|
2007-08-27 16:57:13 +00:00
|
|
|
#define PCINT (0x0340/4) // Performance Counter LVT
|
|
|
|
#define LINT0 (0x0350/4) // Local Vector Table 1 (LINT0)
|
|
|
|
#define LINT1 (0x0360/4) // Local Vector Table 2 (LINT1)
|
|
|
|
#define ERROR (0x0370/4) // Local Vector Table 3 (ERROR)
|
2007-08-27 22:53:31 +00:00
|
|
|
#define MASKED 0x00010000 // Interrupt masked
|
2007-08-27 16:57:13 +00:00
|
|
|
#define TICR (0x0380/4) // Timer Initial Count
|
|
|
|
#define TCCR (0x0390/4) // Timer Current Count
|
|
|
|
#define TDCR (0x03E0/4) // Timer Divide Configuration
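The /4 in the definitions above exists because lapic (declared below as a volatile uint pointer) indexes in 4-byte words while the manual gives byte offsets. A small stand-alone check of that arithmetic; the base address 0xFEE00000 is the architectural default and is used here only as an example value, never dereferenced:

```c
#include <assert.h>
#include <stdint.h>

/* Byte offset -> uint index, as in the register #defines:
   indexing a uint32_t pointer by off/4 reaches the same byte
   address as adding off to the base. */
#define REG_INDEX(off) ((off) / 4)

static uintptr_t reg_addr(uintptr_t base, uint32_t off)
{
  volatile uint32_t *r = (volatile uint32_t *)base;
  return (uintptr_t)&r[REG_INDEX(off)];
}
```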
|
|
|
|
|
|
|
|
volatile uint *lapic; // Initialized in mp.c
|
2006-07-12 17:19:24 +00:00
|
|
|
|
2007-09-27 19:33:46 +00:00
|
|
|
static void
|
|
|
|
lapicw(int index, int value)
|
|
|
|
{
|
|
|
|
lapic[index] = value;
|
|
|
|
lapic[ID]; // wait for write to finish, by reading
|
|
|
|
}
|
2007-08-27 22:53:31 +00:00
|
|
|
//PAGEBREAK!
|
2011-09-02 19:29:33 +00:00
|
|
|
|
2006-07-12 17:19:24 +00:00
|
|
|
void
|
2012-08-23 00:13:43 +00:00
|
|
|
lapicinit(void)
|
2006-07-12 17:19:24 +00:00
|
|
|
{
|
2016-08-19 11:20:08 +00:00
|
|
|
if(!lapic)
|
2006-09-07 01:37:58 +00:00
|
|
|
return;
|
|
|
|
|
2007-08-27 22:53:31 +00:00
|
|
|
// Enable local APIC; set spurious interrupt vector.
|
2009-07-12 02:24:56 +00:00
|
|
|
lapicw(SVR, ENABLE | (T_IRQ0 + IRQ_SPURIOUS));
|
2006-07-12 17:19:24 +00:00
|
|
|
|
2007-08-27 22:53:31 +00:00
|
|
|
// The timer repeatedly counts down at bus frequency
|
2016-08-19 11:20:08 +00:00
|
|
|
// from lapic[TICR] and then issues an interrupt.
|
2007-09-26 20:34:12 +00:00
|
|
|
// If xv6 cared more about precise timekeeping,
|
|
|
|
// TICR would be calibrated using an external time source.
|
2007-09-27 19:33:46 +00:00
|
|
|
lapicw(TDCR, X1);
|
2009-07-12 02:24:56 +00:00
|
|
|
lapicw(TIMER, PERIODIC | (T_IRQ0 + IRQ_TIMER));
|
2016-08-19 11:20:08 +00:00
|
|
|
lapicw(TICR, 10000000);
|
2006-07-12 17:19:24 +00:00
|
|
|
|
2007-08-27 22:53:31 +00:00
|
|
|
// Disable logical interrupt lines.
|
2007-09-27 19:33:46 +00:00
|
|
|
lapicw(LINT0, MASKED);
|
|
|
|
lapicw(LINT1, MASKED);
|
2006-07-12 17:19:24 +00:00
|
|
|
|
2007-08-27 22:53:31 +00:00
|
|
|
// Disable performance counter overflow interrupts
|
|
|
|
// on machines that provide that interrupt entry.
|
|
|
|
if(((lapic[VER]>>16) & 0xFF) >= 4)
|
2007-09-27 19:33:46 +00:00
|
|
|
lapicw(PCINT, MASKED);
|
2007-08-27 22:53:31 +00:00
|
|
|
|
|
|
|
// Map error interrupt to IRQ_ERROR.
|
2009-07-12 02:24:56 +00:00
|
|
|
lapicw(ERROR, T_IRQ0 + IRQ_ERROR);
|
2007-08-27 22:53:31 +00:00
|
|
|
|
|
|
|
// Clear error status register (requires back-to-back writes).
|
2007-09-27 19:33:46 +00:00
|
|
|
lapicw(ESR, 0);
|
|
|
|
lapicw(ESR, 0);
|
2007-08-27 22:53:31 +00:00
|
|
|
|
|
|
|
// Ack any outstanding interrupts.
|
2007-09-27 19:33:46 +00:00
|
|
|
lapicw(EOI, 0);
|
2006-07-12 17:19:24 +00:00
|
|
|
|
2007-08-27 22:53:31 +00:00
|
|
|
// Send an Init Level De-Assert to synchronise arbitration ID's.
|
2007-09-27 19:33:46 +00:00
|
|
|
lapicw(ICRHI, 0);
|
|
|
|
lapicw(ICRLO, BCAST | INIT | LEVEL);
|
2007-08-27 22:53:31 +00:00
|
|
|
while(lapic[ICRLO] & DELIVS)
|
2006-07-12 17:19:24 +00:00
|
|
|
;
|
|
|
|
|
2007-08-27 22:53:31 +00:00
|
|
|
// Enable interrupts on the APIC (but not on the processor).
|
2007-09-27 19:33:46 +00:00
|
|
|
lapicw(TPR, 0);
|
2006-07-12 17:19:24 +00:00
|
|
|
}
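The PCINT guard in lapicinit reads the version register's "Max LVT Entry" field. A sketch of the decoding, assuming the field layout from the Intel manual (bits 7:0 version, bits 23:16 highest LVT entry number); the sample values in the test are invented:

```c
#include <assert.h>
#include <stdint.h>

/* Local APIC version register (lapic[VER], offset 0x30):
   bits 7:0   = version,
   bits 23:16 = max LVT entry (number of LVT entries minus one). */
static int apic_version(uint32_t ver) { return ver & 0xFF; }
static int apic_maxlvt(uint32_t ver)  { return (ver >> 16) & 0xFF; }
```

lapicinit masks PCINT only when the max LVT entry is at least 4, i.e. when the chip actually provides a performance-counter LVT entry.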
|
|
|
|
|
2007-08-27 16:57:13 +00:00
|
|
|
int
|
2009-08-31 06:02:08 +00:00
|
|
|
cpunum(void)
|
2006-07-12 17:19:24 +00:00
|
|
|
{
|
2016-09-02 12:31:13 +00:00
|
|
|
int apicid, i;
|
|
|
|
|
2007-09-27 12:58:42 +00:00
|
|
|
// Cannot call cpu when interrupts are enabled:
|
|
|
|
// result not guaranteed to last long enough to be used!
|
|
|
|
// Would prefer to panic but even printing is chancy here:
|
2008-10-12 20:19:16 +00:00
|
|
|
// almost everything, including cprintf and panic, calls cpu,
|
|
|
|
// often indirectly through acquire and release.
|
2009-03-08 22:07:13 +00:00
|
|
|
if(readeflags()&FL_IF){
|
2007-09-27 12:58:42 +00:00
|
|
|
static int n;
|
2007-09-27 21:02:03 +00:00
|
|
|
if(n++ == 0)
|
2007-11-28 20:47:10 +00:00
|
|
|
cprintf("cpu called from %x with interrupts enabled\n",
|
2009-03-08 22:07:13 +00:00
|
|
|
__builtin_return_address(0));
|
}

int
cpunum(void)
{
  int apicid, i;

  if (!lapic)
    return 0;

  apicid = lapic[ID] >> 24;
  for (i = 0; i < ncpu; ++i) {
    if (cpus[i].apicid == apicid)
      return i;
  }
  panic("unknown apicid\n");
}

// Acknowledge interrupt.
void
lapiceoi(void)
{
  if(lapic)
    lapicw(EOI, 0);
}

// Spin for a given number of microseconds.
// On real hardware would want to tune this dynamically.
void
microdelay(int us)
{
}

#define CMOS_PORT    0x70
#define CMOS_RETURN  0x71

// Start additional processor running entry code at addr.
// See Appendix B of MultiProcessor Specification.
void
lapicstartap(uchar apicid, uint addr)
{
  int i;
  ushort *wrv;

  // "The BSP must initialize CMOS shutdown code to 0AH
  // and the warm reset vector (DWORD based at 40:67) to point at
  // the AP startup code prior to the [universal startup algorithm]."
  outb(CMOS_PORT, 0xF);  // offset 0xF is shutdown code
  outb(CMOS_PORT+1, 0x0A);
  wrv = (ushort*)P2V((0x40<<4 | 0x67));  // Warm reset vector
  wrv[0] = 0;
  wrv[1] = addr >> 4;

  // "Universal startup algorithm."
  // Send INIT (level-triggered) interrupt to reset other CPU.
  lapicw(ICRHI, apicid<<24);
  lapicw(ICRLO, INIT | LEVEL | ASSERT);
  microdelay(200);
  lapicw(ICRLO, INIT | LEVEL);
  microdelay(100);    // should be 10ms, but too slow in Bochs!

  // Send startup IPI (twice!) to enter code.
  // Regular hardware is supposed to only accept a STARTUP
  // when it is in the halted state due to an INIT. So the second
  // should be ignored, but it is part of the official Intel algorithm.
  // Bochs complains about the second one. Too bad for Bochs.
  for(i = 0; i < 2; i++){
    lapicw(ICRHI, apicid<<24);
    lapicw(ICRLO, STARTUP | (addr>>12));
    microdelay(200);
  }
}

#define CMOS_STATA   0x0a
#define CMOS_STATB   0x0b
#define CMOS_UIP    (1 << 7)        // RTC update in progress

#define SECS    0x00
#define MINS    0x02
#define HOURS   0x04
#define DAY     0x07
#define MONTH   0x08
#define YEAR    0x09

static uint
cmos_read(uint reg)
{
  outb(CMOS_PORT, reg);
  microdelay(200);

  return inb(CMOS_RETURN);
}

static void
fill_rtcdate(struct rtcdate *r)
{
  r->second = cmos_read(SECS);
  r->minute = cmos_read(MINS);
  r->hour   = cmos_read(HOURS);
  r->day    = cmos_read(DAY);
  r->month  = cmos_read(MONTH);
  r->year   = cmos_read(YEAR);
}

// qemu seems to use 24-hour GMT and the values are BCD encoded
void
cmostime(struct rtcdate *r)
{
  struct rtcdate t1, t2;
  int sb, bcd;

  sb = cmos_read(CMOS_STATB);
  bcd = (sb & (1 << 2)) == 0;

  // make sure CMOS doesn't modify time while we read it
  for(;;) {
    fill_rtcdate(&t1);
    if(cmos_read(CMOS_STATA) & CMOS_UIP)
      continue;
    fill_rtcdate(&t2);
    if(memcmp(&t1, &t2, sizeof(t1)) == 0)
      break;
  }

  // convert
  if(bcd) {
#define CONV(x) (t1.x = ((t1.x >> 4) * 10) + (t1.x & 0xf))
    CONV(second);
    CONV(minute);
    CONV(hour);
    CONV(day);
    CONV(month);
    CONV(year);
#undef CONV
  }

  *r = t1;
  r->year += 2000;
}
|