<html>

<head>
<title>Lab: Alarm and uthread</title>
<link rel="stylesheet" href="homework.css" type="text/css" />
</head>

<body>

<h1>Lab: Alarm and uthread</h1>

This lab will familiarize you with the implementation of system calls
and switching between threads of execution. In particular, you will
implement new system calls (<tt>sigalarm</tt> and <tt>sigreturn</tt>)
and switching between threads in a user-level thread package.

<h2>Warmup: RISC-V assembly</h2>

<p>For this lab it will be important to understand a bit of RISC-V assembly.

<p>Add a file user/call.c with the following content, modify the
Makefile to add the program to the user programs, and compile (make
fs.img). The Makefile also produces a binary and a readable
assembly version of the program in the file user/call.asm.
<pre>
#include "kernel/param.h"
#include "kernel/types.h"
#include "kernel/stat.h"
#include "user/user.h"

int g(int x) {
  return x+3;
}

int f(int x) {
  return g(x);
}

void main(void) {
  printf(1, "%d %d\n", f(8)+1, 13);
  exit();
}
</pre>
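<p>The Makefile change mentioned above is a one-line addition to the
<tt>UPROGS</tt> list. Assuming your tree's list looks like the stock xv6
one, it might end up roughly like this (the neighboring entries will vary):

<pre>
UPROGS=\
	$U/_cat\
	$U/_echo\
	...
	$U/_call\
</pre>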
<p>Read through user/call.asm and understand it. The instruction manual
for RISC-V is in the doc directory (doc/riscv-spec-v2.2.pdf). Here
are some questions that you should answer for yourself:

<ul>

<li>Which registers contain arguments to functions? Which
register holds 13 in the call to <tt>printf</tt>? Which register
holds the second argument? Which register holds the third one? Etc.

<li>Where is the function call to <tt>f</tt> from main? Where
is the call to <tt>g</tt>?
(Hint: the compiler may inline functions.)

<li>At what address is the function <tt>printf</tt> located?

<li>What value is in the register <tt>ra</tt> just after the <tt>jalr</tt>
to <tt>printf</tt> in <tt>main</tt>?

</ul>

<h2>Warmup: system call tracing</h2>

<p>In this exercise you will modify the xv6 kernel to print out a line
for each system call invocation. It is enough to print the name of the
system call and the return value; you don't need to print the system
call arguments.

<p>
When you're done, you should see output like this when booting
xv6:

<pre>
...
fork -> 2
exec -> 0
open -> 3
close -> 0
$write -> 1
write -> 1
</pre>

<p>
That's init forking and execing sh, sh making sure only two file descriptors are
open, and sh writing the $ prompt. (Note: the output of the shell and the
system call trace are intermixed, because the shell uses the write syscall to
print its output.)

<p> Hint: modify the syscall() function in kernel/syscall.c.
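<p>One way to act on this hint (a sketch, not the only approach) is to add
an array of system-call names to kernel/syscall.c, indexed by system-call
number, and print after the call returns. The <tt>syscall_names</tt> array
below is new code you would add; the rest mirrors the existing
<tt>syscall()</tt>, whose trapframe field may be spelled slightly
differently in your tree:

<pre>
// kernel/syscall.c (sketch): a name for each syscall number, in the
// same order as kernel/syscall.h.  Fill in every entry your tree has.
static char *syscall_names[] = {
  [SYS_fork]  "fork",
  [SYS_exit]  "exit",
  [SYS_wait]  "wait",
  // ...
  [SYS_close] "close",
};

void
syscall(void)
{
  int num;
  struct proc *p = myproc();

  num = p->trapframe->a7;
  if(num > 0 && num < NELEM(syscalls) && syscalls[num]) {
    p->trapframe->a0 = syscalls[num]();
    // print the name and return value of every system call
    printf("%s -> %d\n", syscall_names[num], (int)p->trapframe->a0);
  } else {
    printf("%d %s: unknown sys call %d\n",
            p->pid, p->name, num);
    p->trapframe->a0 = -1;
  }
}
</pre>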
<p>Run the xv6 programs you wrote in earlier labs and inspect the system call
trace. Are there many system calls? Which system calls correspond
to code in the applications you wrote?

<p>Optional: print the system call arguments.

<h2>Alarm</h2>

<p>
In this exercise you'll add a feature to xv6 that periodically alerts
a process as it uses CPU time. This might be useful for compute-bound
processes that want to limit how much CPU time they chew up, or for
processes that want to compute but also want to take some periodic
action. More generally, you'll be implementing a primitive form of
user-level interrupt/fault handlers; you could use something similar
to handle page faults in the application, for example.

<p>
You should add a new <tt>sigalarm(interval, handler)</tt> system call.
If an application calls <tt>sigalarm(n, fn)</tt>, then after every
<tt>n</tt> "ticks" of CPU time that the program consumes, the kernel
should cause application function
<tt>fn</tt> to be called. When <tt>fn</tt> returns, the application
should resume where it left off. A tick is a fairly arbitrary unit of
time in xv6, determined by how often a hardware timer generates
interrupts.

<p>
You'll find a file <tt>user/alarmtest.c</tt> in your xv6
repository. Add it to the Makefile. It won't compile correctly
until you've added <tt>sigalarm</tt> and <tt>sigreturn</tt>
system calls (see below).

<p>
<tt>alarmtest</tt> calls <tt>sigalarm(2, periodic)</tt> in <tt>test0</tt> to
ask the kernel to force a call to <tt>periodic()</tt> every 2 ticks,
and then spins for a while.
You can see the assembly
code for alarmtest in user/alarmtest.asm, which may be handy
for debugging.
When you've finished the lab,
<tt>alarmtest</tt> should produce output like this:

<pre>
$ alarmtest
test0 start
......................................alarm!
test0 passed
test1 start
..alarm!
..alarm!
..alarm!
.alarm!
..alarm!
..alarm!
..alarm!
..alarm!
..alarm!
..alarm!
test1 passed
$
</pre>
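<p>For reference, the user-level side of this is already written for you.
Simplified, the handler that <tt>alarmtest</tt> installs looks roughly like
the sketch below (see user/alarmtest.c for the real code, which also counts
handler calls and checks the count). Note that the handler finishes by
calling <tt>sigreturn</tt>:

<pre>
// Simplified sketch of the user side (see user/alarmtest.c for the real code).
void
periodic()
{
  printf(1, "alarm!\n");
  sigreturn();    // ask the kernel to resume the interrupted code
}

void
test0()
{
  printf(1, "test0 start\n");
  sigalarm(2, periodic);   // call periodic() every 2 ticks
  // ... spin for a while, printing "." and checking that periodic ran ...
  sigalarm(0, 0);          // turn the alarm off
}
</pre>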
<p>The main challenge will be to arrange that the handler is invoked
when the process's alarm interval expires. You'll need to modify
usertrap() in kernel/trap.c so that when a
process's alarm interval expires, the process executes
the handler. How can you do that? You will need to understand
how system calls work (i.e., the code in kernel/trampoline.S
and kernel/trap.c). Which register contains the address to which
system calls return?

<p>Your solution will be only a few lines of code, but it may be tricky to
get it right.
We'll test your code with the version of alarmtest.c in the original
repository; if you modify alarmtest.c, make sure your kernel changes
cause the original alarmtest to pass the tests.

<h3>test0: invoke handler</h3>

<p>Get started by modifying the kernel to jump to the alarm handler in
user space, which will cause test0 to print "alarm!". Don't worry yet
about what happens after the "alarm!" output; it's OK for now if your
program crashes after printing "alarm!". Here are some hints:
<ul>

<li>You'll need to modify the Makefile to cause <tt>alarmtest.c</tt>
to be compiled as an xv6 user program.

<li>The right declarations to put in <tt>user/user.h</tt> are:
<pre>
int sigalarm(int ticks, void (*handler)());
int sigreturn(void);
</pre>

<li>Update user/usys.pl (which generates user/usys.S),
kernel/syscall.h, and kernel/syscall.c
to allow <tt>alarmtest</tt> to invoke the sigalarm and
sigreturn system calls.

<li>For now, your <tt>sys_sigreturn</tt> should just return zero.

<li>Your <tt>sys_sigalarm()</tt> should store the alarm interval and
the pointer to the handler function in new fields in the <tt>proc</tt>
structure, defined in <tt>kernel/proc.h</tt> (see the sketch after this
list for one possible starting point).

<li>You'll need to keep track of how many ticks have passed since the
last call (or are left until the next call) to a process's alarm
handler; you'll need a new field in <tt>struct proc</tt> for this
too. You can initialize <tt>proc</tt> fields in <tt>allocproc()</tt>
in <tt>proc.c</tt>.

<li>Every tick, the hardware clock forces an interrupt, which is handled
in <tt>usertrap()</tt>; you should add some code here.

<li>You only want to manipulate a process's alarm ticks if there's a
timer interrupt; you want something like
<pre>
if(which_dev == 2) ...
</pre>

<li>Only invoke the alarm function if the process has a
timer outstanding. Note that the address of the user's alarm
function might be 0 (e.g., in alarmtest.asm, <tt>periodic</tt> is at
address 0).

<li>It will be easier to look at traps with gdb if you tell qemu to
use only one CPU, which you can do by running
<pre>
make CPUS=1 qemu
</pre>

<li>You've succeeded if alarmtest prints "alarm!".

</ul>
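<p>To make the hints above concrete, here is one possible starting point
for the kernel side. Everything here is a sketch: the field names
(<tt>alarm_interval</tt>, <tt>alarm_handler</tt>, <tt>alarm_ticks</tt>) are
invented for illustration, you still need the usual entries in user/usys.pl,
kernel/syscall.h, and kernel/syscall.c, and the argument-fetching helpers
follow the existing system calls in kernel/sysproc.c:

<pre>
// kernel/proc.h (sketch): new fields in struct proc; names are illustrative.
//   int alarm_interval;      // ticks between handler invocations; 0 means off
//   uint64 alarm_handler;    // user-space address of the handler function
//   int alarm_ticks;         // ticks since the handler was last invoked

// kernel/sysproc.c (sketch): record what the user asked for.
uint64
sys_sigalarm(void)
{
  int interval;
  uint64 handler;
  struct proc *p = myproc();

  if(argint(0, &interval) < 0 || argaddr(1, &handler) < 0)
    return -1;
  p->alarm_interval = interval;
  p->alarm_handler = handler;
  p->alarm_ticks = 0;
  return 0;
}

uint64
sys_sigreturn(void)
{
  return 0;   // enough for test0; test1 needs more (see below)
}
</pre>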
<h3>test1(): resume interrupted code</h3>

Chances are that alarmtest crashes at some point after it prints
"alarm!". Depending on how your solution works, that point may be in
test0, or it may be in test1. Crashes are likely caused
by the alarm handler (<tt>periodic</tt> in alarmtest.c) returning
to the wrong point in the user program.

<p>
Your job now is to ensure that, when the alarm handler is done,
control returns to
the instruction at which the user program was originally
interrupted by the timer interrupt. You must also ensure that
the register contents are restored to the values they held
at the time of the interrupt, so that the user program
can continue undisturbed after the alarm.

<p>Your solution is likely to require you to save and restore
registers---what registers do you need to save and restore to resume
the interrupted code correctly? (Hint: there will be many.)
Several approaches are possible; for this lab you should make
the <tt>sigreturn</tt> system call
restore registers and return to the original
interrupted user instruction.
The user-space alarm handler
calls sigreturn when it is done.

Some hints:
<ul>
<li>Have <tt>usertrap</tt> save enough state in
<tt>struct proc</tt> when the timer goes off
that <tt>sigreturn</tt> can correctly return to the
interrupted user code (the sketch after this list shows one possibility).

<li>Prevent re-entrant calls to the handler---if a handler hasn't
returned yet, the kernel shouldn't call it again.
</ul>
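<p>As a concrete illustration of the first hint, one common design (again a
sketch with invented names, and assuming the trapframe field in
<tt>struct proc</tt> is called <tt>trapframe</tt>) is to have
<tt>usertrap()</tt> copy the whole trapframe into <tt>struct proc</tt> when
the alarm fires and redirect the saved user pc to the handler;
<tt>sys_sigreturn()</tt> then undoes that:

<pre>
// kernel/proc.h (sketch): more illustrative fields in struct proc.
//   struct trapframe alarm_saved_tf;  // registers at the moment the alarm fired
//   int alarm_in_handler;             // nonzero while the handler is running

// kernel/sysproc.c (sketch): restore what usertrap() saved.
uint64
sys_sigreturn(void)
{
  struct proc *p = myproc();

  if(p->alarm_in_handler){
    // restore every saved register, including the user pc (epc)
    memmove(p->trapframe, &p->alarm_saved_tf, sizeof(struct trapframe));
    p->alarm_in_handler = 0;   // allow the next alarm to fire
  }
  // Return the restored a0: syscall() writes the return value into the
  // trapframe, and returning anything else would clobber the interrupted
  // code's a0.
  return p->trapframe->a0;
}
</pre>

The matching code in <tt>usertrap()</tt> would, when the tick count reaches
the interval and the in-handler flag is clear, copy <tt>*p->trapframe</tt>
into the saved copy, set the flag, and set the trapframe's <tt>epc</tt> to
the handler address so that the return to user space enters the handler.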
<p>Once you pass <tt>test0</tt> and <tt>test1</tt>, run usertests to
make sure you didn't break any other parts of the kernel.

<h2>Uthread: switching between threads</h2>

<p>Download <a href="uthread.c">uthread.c</a> and <a
href="uthread_switch.S">uthread_switch.S</a> into your xv6 directory.
Make sure <tt>uthread_switch.S</tt> ends with <tt>.S</tt>, not
<tt>.s</tt>. Add the
following rule to the xv6 Makefile after the _forktest rule:

<pre>
$U/_uthread: $U/uthread.o $U/uthread_switch.o
	$(LD) $(LDFLAGS) -N -e main -Ttext 0 -o $U/_uthread $U/uthread.o $U/uthread_switch.o $(ULIB)
	$(OBJDUMP) -S $U/_uthread > $U/uthread.asm
</pre>

Make sure that the blank space at the start of each line is a tab,
not spaces.

<p>
In the Makefile, add <tt>_uthread</tt> to the list of user programs defined by UPROGS.
<p>Run xv6, then run <tt>uthread</tt> from the xv6 shell. The xv6 kernel will print an error message about <tt>uthread</tt> encountering a page fault.

<p>Your job is to complete <tt>uthread_switch.S</tt>, so that you see output similar to
this (make sure to run with CPUS=1):

<pre>
~/classes/6828/xv6$ make CPUS=1 qemu
...
$ uthread
my thread running
my thread 0x0000000000002A30
my thread running
my thread 0x0000000000004A40
my thread 0x0000000000002A30
my thread 0x0000000000004A40
my thread 0x0000000000002A30
my thread 0x0000000000004A40
my thread 0x0000000000002A30
my thread 0x0000000000004A40
my thread 0x0000000000002A30
...
my thread 0x0000000000002A88
my thread 0x0000000000004A98
my thread: exit
my thread: exit
thread_schedule: no runnable threads
$
</pre>
<p><tt>uthread</tt> creates two threads and switches back and forth between
them. Each thread prints "my thread ..." and then yields to give the other
thread a chance to run.

<p>To observe the above output, you need to complete <tt>uthread_switch.S</tt>, but before
jumping into <tt>uthread_switch.S</tt>, first understand how <tt>uthread.c</tt>
uses <tt>uthread_switch</tt>. <tt>uthread.c</tt> has two global variables
<tt>current_thread</tt> and <tt>next_thread</tt>. Each is a pointer to a
<tt>thread</tt> structure. The thread structure has a stack for a thread and a
saved stack pointer (<tt>sp</tt>, which points into the thread's stack). The
job of <tt>uthread_switch</tt> is to save the current thread state into the
structure pointed to by <tt>current_thread</tt>, restore <tt>next_thread</tt>'s
state, and make <tt>current_thread</tt> point to where <tt>next_thread</tt> was
pointing, so that when <tt>uthread_switch</tt> returns <tt>next_thread</tt>
is running and is the <tt>current_thread</tt>.

<p>You should study <tt>thread_create</tt>, which sets up the initial stack for
a new thread. It provides hints about what <tt>uthread_switch</tt> should do.
Note that <tt>thread_create</tt> simulates saving all callee-save registers
on a new thread's stack.
<p>To write the assembly in <tt>thread_switch</tt>, you need to know how the C
compiler lays out <tt>struct thread</tt> in memory, which is as
follows:

<pre>
    --------------------
    | 4 bytes for state|
    --------------------
    | stack size bytes |
    | for stack        |
    --------------------
    | 8 bytes for sp   |
    -------------------- <--- current_thread
         ......

         ......
    --------------------
    | 4 bytes for state|
    --------------------
    | stack size bytes |
    | for stack        |
    --------------------
    | 8 bytes for sp   |
    -------------------- <--- next_thread
</pre>
The expressions <tt>&amp;next_thread</tt> and <tt>&amp;current_thread</tt>,
which are passed to <tt>thread_switch</tt>, each hold the address of a
pointer to <tt>struct thread</tt>. The following fragment of assembly
will be useful:

<pre>
ld t0, 0(a0)
sd sp, 0(t0)
</pre>

This saves <tt>sp</tt> in <tt>current_thread->sp</tt>. This works because
<tt>sp</tt> is at
offset 0 in the struct.
You can study the assembly the compiler generates for
<tt>uthread.c</tt> by looking at <tt>uthread.asm</tt>.
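<p>In C terms, the layout above corresponds to a declaration along the
following lines. This is only a sketch to make the offsets concrete; the
authoritative definition (including the actual stack size and state
constants) is the one in <tt>uthread.c</tt>:

<pre>
/* Sketch of struct thread matching the layout diagram above;
   see uthread.c for the real definition. */
#define STACK_SIZE 8192

struct thread {
  char *sp;                /* saved stack pointer -- at offset 0, which is why
                              "ld t0, 0(a0); sd sp, 0(t0)" saves into ->sp */
  char  stack[STACK_SIZE]; /* the thread's stack */
  int   state;             /* e.g. FREE, RUNNING, RUNNABLE */
};
</pre>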
<p>To test your code it might be helpful to single step through your
<tt>uthread_switch</tt> using <tt>riscv64-linux-gnu-gdb</tt>. You can get started in this way:

<pre>
(gdb) file user/_uthread
Reading symbols from user/_uthread...
(gdb) b *0x230
</pre>

0x230 is the address of uthread_switch (see uthread.asm). When you
compile, it may be at a different address, so check uthread.asm.
You may also be able to type "b uthread_switch".
<p>The breakpoint may (or may not) be triggered before you even run
<tt>uthread</tt>. How could that happen?

<p>Once your xv6 shell runs, type "uthread", and gdb will break at
<tt>thread_switch</tt>. Now you can type commands like the following to inspect
the state of <tt>uthread</tt>:

<pre>
(gdb) p/x *next_thread
$1 = {sp = 0x4a28, stack = {0x0 &lt;repeats 8088 times&gt;,
  0x68, 0x1, 0x0 &lt;repeats 102 times&gt;}, state = 0x1}
</pre>

What address is <tt>0x168</tt>, which sits on the bottom of the stack
of <tt>next_thread</tt>?

<p>
With "x", you can examine the contents of a memory location:
<pre>
(gdb) x/x next_thread->sp
0x4a28 &lt;all_thread+16304&gt;:	0x00000168
</pre>
Why does that print <tt>0x168</tt>?
<h3>Optional challenges</h3>

<p>The user-level thread package interacts badly with the operating system in
several ways. For example, if one user-level thread blocks in a system call,
another user-level thread won't run, because the user-level thread scheduler
doesn't know that one of its threads has been descheduled by the xv6 scheduler. As
another example, two user-level threads will not run concurrently on different
cores, because the xv6 scheduler isn't aware that there are multiple
threads that could run in parallel. Note that if two user-level threads were to
run truly in parallel, this implementation wouldn't work because of several races
(e.g., two threads on different processors could call <tt>thread_schedule</tt>
concurrently, select the same runnable thread, and both run it on different
processors).

<p>There are several ways of addressing these problems. One is
using <a href="http://en.wikipedia.org/wiki/Scheduler_activations">scheduler
activations</a> and another is to use one kernel thread per
user-level thread (as Linux kernels do). Implement one of these ways
in xv6. This is not easy to get right; for example, you will need to
implement TLB shootdown when updating a page table for a
multithreaded user process.

<p>Add locks, condition variables, barriers,
etc. to your thread package.

</body>
</html>