xv6-65oo2/proc.c

#include "types.h"
#include "param.h"
#include "memlayout.h"
#include "riscv.h"
#include "proc.h"
#include "spinlock.h"
#include "defs.h"
struct {
  struct spinlock lock;
  struct proc proc[NPROC];
} ptable;
// XXX riscv move somewhere else
struct cpu cpus[NCPU];
struct proc *initproc;
int nextpid = 1;
extern void forkret(void);
// for returning out of the kernel
extern void sysexit(void);
static void wakeup1(void *chan);
extern char trampout[]; // trampoline.S

void
procinit(void)
{
  initlock(&ptable.lock, "ptable");
}

// Must be called with interrupts disabled.
// XXX riscv
int
cpuid() {
  return 0;
}

// Return this core's cpu struct.
// XXX riscv
struct cpu*
mycpu(void) {
  struct cpu *c;
  c = &cpus[0];
  return c;
}
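
// NOTE: cpuid() and mycpu() are hard-coded to CPU 0 for now (hence the
// "XXX riscv" markers); a multi-core version would derive the index from
// the hart id instead of always using cpus[0].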

// Disable interrupts so that we are not rescheduled
// while reading proc from the cpu structure.
// XXX riscv
struct proc*
myproc(void) {
  return cpus[0].proc;
}

//PAGEBREAK: 32
// Look in the process table for an UNUSED proc.
// If found, change state to EMBRYO and initialize
// state required to run in the kernel.
// Otherwise return 0.
static struct proc*
allocproc(void)
{
  struct proc *p;

  acquire(&ptable.lock);

  for(p = ptable.proc; p < &ptable.proc[NPROC]; p++)
    if(p->state == UNUSED)
      goto found;

  release(&ptable.lock);
  return 0;

found:
  p->state = EMBRYO;
  p->pid = nextpid++;

  release(&ptable.lock);

  // Allocate a page for the kernel stack.
  if((p->kstack = kalloc()) == 0){
    p->state = UNUSED;
    return 0;
  }

  // Allocate a trapframe page.
  if((p->tf = (struct trapframe *)kalloc()) == 0){
    kfree(p->kstack);   // don't leak the kernel stack allocated above
    p->state = UNUSED;
    return 0;
  }

  // An empty user page table.
  p->pagetable = proc_pagetable(p);

  // Set up new context to start executing at forkret,
  // which returns to user space.
  memset(&p->context, 0, sizeof p->context);
  p->context.ra = (uint64)forkret;
  p->context.sp = (uint64)p->kstack + PGSIZE;

  return p;
}
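
// The kernel stack, trapframe page, and user page table allocated in
// allocproc() are freed in wait(), when the parent reaps a ZOMBIE child.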

// Create a page table for a given process,
// with no user pages, but with trampoline pages.
// Called both when creating a process, and
// by exec() when building a tentative new memory image,
// which might fail.
pagetable_t
proc_pagetable(struct proc *p)
{
  pagetable_t pagetable;

  // An empty user page table.
  pagetable = uvmcreate();

  // map the trampoline code (for system call return)
  // at the highest user virtual address.
  // only the supervisor uses it, on the way
  // to/from user space, so not PTE_U.
  mappages(pagetable, TRAMPOLINE, PGSIZE,
           (uint64)trampout, PTE_R | PTE_X);

  // map the trapframe, for trampoline.S.
  mappages(pagetable, (TRAMPOLINE - PGSIZE), PGSIZE,
           (uint64)(p->tf), PTE_R | PTE_W);

  return pagetable;
}

// Free a process's page table, and free the
// physical memory the page table refers to.
// Called both when a process exits and from
// exec() if it fails.
void
proc_freepagetable(pagetable_t pagetable, uint64 sz)
{
  unmappages(pagetable, TRAMPOLINE, PGSIZE, 0);
  unmappages(pagetable, TRAMPOLINE - PGSIZE, PGSIZE, 0);
  uvmfree(pagetable, sz);
}
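
// Note: the trailing 0 argument to unmappages above presumably means
// "do not free the underlying physical pages": the trampoline is shared
// kernel text, and the trapframe page is freed separately (see wait()).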

// a user program that calls exec("/init")
// od -t xC initcode
unsigned char initcode[] = {
  0x17, 0x05, 0x00, 0x00, 0x13, 0x05, 0x05, 0x02, 0x97, 0x05, 0x00, 0x00, 0x93, 0x85, 0x05, 0x02,
  0x9d, 0x48, 0x73, 0x00, 0x00, 0x00, 0x89, 0x48, 0x73, 0x00, 0x00, 0x00, 0xef, 0xf0, 0xbf, 0xff,
  0x2f, 0x69, 0x6e, 0x69, 0x74, 0x00, 0x00, 0x01, 0x20, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
  0x00, 0x00, 0x00
};
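// The bytes above are RISC-V machine code: they load the addresses of the
// "/init" string and the argv array into a0/a1, invoke the exec system call
// with ecall, and loop calling exit if exec ever returns.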

//PAGEBREAK: 32
// Set up first user process.
void
userinit(void)
{
  struct proc *p;

  p = allocproc();
  initproc = p;

  uvminit(p->pagetable, initcode, sizeof(initcode));
  p->sz = PGSIZE;

  // prepare for the very first kernel->user.
  p->tf->epc = 0;
  p->tf->sp = PGSIZE;

  safestrcpy(p->name, "initcode", sizeof(p->name));
  p->cwd = namei("/");

  // this assignment to p->state lets other cores
  // run this process. the acquire forces the above
  // writes to be visible, and the lock is also needed
  // because the assignment might not be atomic.
  acquire(&ptable.lock);
  p->state = RUNNABLE;
  release(&ptable.lock);
}
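
// From here the scheduler can pick up the new process: swtch() jumps to
// forkret() (context.ra was set in allocproc), which calls usertrapret()
// to drop into user space at epc = 0, where uvminit() placed initcode.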

#if 0
// Grow current process's memory by n bytes.
// Return 0 on success, -1 on failure.
int
growproc(int n)
{
  uint sz;
  struct proc *p = myproc();

  sz = p->sz;
  if(n > 0){
    if((sz = allocuvm(p->pagetable, sz, sz + n)) == 0)
      return -1;
  } else if(n < 0){
    if((sz = uvmdealloc(p->pagetable, sz, sz + n)) == 0)
      return -1;
  }
  p->sz = sz;
  switchuvm(p);
  return 0;
}
#endif

// Create a new process, copying p as the parent.
// Sets up child kernel stack to return as if from system call.
int
fork(void)
{
  int i, pid;
  struct proc *np;
  struct proc *p = myproc();

  // Allocate process.
  if((np = allocproc()) == 0){
    return -1;
  }

  // Copy user memory from parent to child.
  uvmcopy(p->pagetable, np->pagetable, p->sz);
  np->sz = p->sz;

  np->parent = p;

  // copy saved user registers.
  *(np->tf) = *(p->tf);

  // Cause fork to return 0 in the child.
  np->tf->a0 = 0;

  // increment reference counts on open file descriptors.
  for(i = 0; i < NOFILE; i++)
    if(p->ofile[i])
      np->ofile[i] = filedup(p->ofile[i]);
  np->cwd = idup(p->cwd);

  safestrcpy(np->name, p->name, sizeof(p->name));

  pid = np->pid;

  acquire(&ptable.lock);
  np->state = RUNNABLE;
  release(&ptable.lock);

  return pid;
}

// Exit the current process.  Does not return.
// An exited process remains in the zombie state
// until its parent calls wait().
void
exit(void)
{
  struct proc *p = myproc();
  struct proc *pp;
  int fd;

  if(p == initproc)
    panic("init exiting");

  // Close all open files.
  for(fd = 0; fd < NOFILE; fd++){
    if(p->ofile[fd]){
      fileclose(p->ofile[fd]);
      p->ofile[fd] = 0;
    }
  }

  begin_op();
  iput(p->cwd);
  end_op();
  p->cwd = 0;

  acquire(&ptable.lock);

  // Parent might be sleeping in wait().
  wakeup1(p->parent);

  // Pass abandoned children to init.
  for(pp = ptable.proc; pp < &ptable.proc[NPROC]; pp++){
    if(pp->parent == p){
      pp->parent = initproc;
      if(pp->state == ZOMBIE)
        wakeup1(initproc);
    }
  }

  // Jump into the scheduler, never to return.
  p->state = ZOMBIE;
  sched();
  panic("zombie exit");
}

// Wait for a child process to exit and return its pid.
// Return -1 if this process has no children.
int
wait(void)
{
  struct proc *np;
  int havekids, pid;
  struct proc *p = myproc();

  acquire(&ptable.lock);
  for(;;){
    // Scan through table looking for exited children.
    havekids = 0;
    for(np = ptable.proc; np < &ptable.proc[NPROC]; np++){
      if(np->parent != p)
        continue;
      havekids = 1;
      if(np->state == ZOMBIE){
        // Found one.
        pid = np->pid;
        kfree(np->kstack);
        np->kstack = 0;
        kfree((void*)np->tf);
        np->tf = 0;
        proc_freepagetable(np->pagetable, np->sz);
        np->pagetable = 0;
        np->pid = 0;
        np->parent = 0;
        np->name[0] = 0;
        np->killed = 0;
        np->state = UNUSED;
        release(&ptable.lock);
        return pid;
      }
    }

    // No point waiting if we don't have any children.
    if(!havekids || p->killed){
      release(&ptable.lock);
      return -1;
    }

    // Wait for children to exit.  (See wakeup1 call in exit().)
    sleep(p, &ptable.lock);  //DOC: wait-sleep
  }
}

//PAGEBREAK: 42
// Per-CPU process scheduler.
// Each CPU calls scheduler() after setting itself up.
// Scheduler never returns.  It loops, doing:
//  - choose a process to run
//  - swtch to start running that process
//  - eventually that process transfers control
//    via swtch back to the scheduler.
void
scheduler(void)
{
  struct proc *p;
  struct cpu *c = mycpu();

  c->proc = 0;
  for(;;){
    // Enable interrupts on this processor.
    // XXX riscv
    //sti();

    // Loop over process table looking for process to run.
    acquire(&ptable.lock);
    for(p = ptable.proc; p < &ptable.proc[NPROC]; p++){
      if(p->state != RUNNABLE)
        continue;

      // Switch to chosen process.  It is the process's job
      // to release ptable.lock and then reacquire it
      // before jumping back to us.
      c->proc = p;
      p->state = RUNNING;

      printf("switch...\n");
      swtch(&c->scheduler, &p->context);
      printf("switch returned\n");

      // Process is done running for now.
      // It should have changed its p->state before coming back.
      c->proc = 0;
    }
    release(&ptable.lock);
  }
}
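
// The printf calls around swtch() in scheduler() above look like temporary
// debugging output for the in-progress RISC-V port.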

// Enter scheduler.  Must hold only ptable.lock
// and have changed proc->state.  Saves and restores
// intena because intena is a property of this
// kernel thread, not this CPU.  It should
// be proc->intena and proc->ncli, but that would
// break in the few places where a lock is held but
// there's no process.
void
sched(void)
{
  int intena;
  struct proc *p = myproc();

  if(!holding(&ptable.lock))
    panic("sched ptable.lock");
  if(p->state == RUNNING)
    panic("sched running");

  intena = mycpu()->intena;
  swtch(&p->context, &mycpu()->scheduler);
  mycpu()->intena = intena;
}

// Give up the CPU for one scheduling round.
void
yield(void)
{
  acquire(&ptable.lock);  //DOC: yieldlock
  myproc()->state = RUNNABLE;
  sched();
  release(&ptable.lock);
}

// A fork child's very first scheduling by scheduler()
// will swtch to forkret.
void
forkret(void)
{
  struct proc *p = myproc();
  static int first = 1;

  // Still holding ptable.lock from scheduler.
  release(&ptable.lock);

  printf("entering forkret\n");

  if (first) {
    // Some initialization functions must be run in the context
    // of a regular process (e.g., they call sleep), and thus cannot
    // be run from main().
    first = 0;
    iinit(ROOTDEV);
    initlog(ROOTDEV);
  }

  usertrapret();
}

// Atomically release lock and sleep on chan.
// Reacquires lock when awakened.
void
sleep(void *chan, struct spinlock *lk)
{
  struct proc *p = myproc();

  if(p == 0)
    panic("sleep");

  if(lk == 0)
    panic("sleep without lk");

  // Must acquire ptable.lock in order to
  // change p->state and then call sched.
  // Once we hold ptable.lock, we can be
  // guaranteed that we won't miss any wakeup
  // (wakeup runs with ptable.lock locked),
  // so it's okay to release lk.
  if(lk != &ptable.lock){  //DOC: sleeplock0
    acquire(&ptable.lock);  //DOC: sleeplock1
    release(lk);
  }

  // Go to sleep.
  p->chan = chan;
  p->state = SLEEPING;

  sched();

  // Tidy up.
  p->chan = 0;

  // Reacquire original lock.
  if(lk != &ptable.lock){  //DOC: sleeplock2
    release(&ptable.lock);
    acquire(lk);
  }
}

//PAGEBREAK!
// Wake up all processes sleeping on chan.
// The ptable lock must be held.
static void
wakeup1(void *chan)
{
  struct proc *p;

  for(p = ptable.proc; p < &ptable.proc[NPROC]; p++)
    if(p->state == SLEEPING && p->chan == chan)
      p->state = RUNNABLE;
}

// Wake up all processes sleeping on chan.
void
wakeup(void *chan)
{
  acquire(&ptable.lock);
  wakeup1(chan);
  release(&ptable.lock);
}

#if 0
// Kill the process with the given pid.
// Process won't exit until it returns
// to user space (see trap in trap.c).
int
kill(int pid)
{
  struct proc *p;

  acquire(&ptable.lock);
  for(p = ptable.proc; p < &ptable.proc[NPROC]; p++){
    if(p->pid == pid){
      p->killed = 1;
      // Wake process from sleep if necessary.
      if(p->state == SLEEPING)
        p->state = RUNNABLE;
      release(&ptable.lock);
      return 0;
    }
  }
  release(&ptable.lock);
  return -1;
}

//PAGEBREAK: 36
// Print a process listing to console.  For debugging.
// Runs when user types ^P on console.
// No lock to avoid wedging a stuck machine further.
void
procdump(void)
{
  static char *states[] = {
  [UNUSED]    "unused",
  [EMBRYO]    "embryo",
  [SLEEPING]  "sleep ",
  [RUNNABLE]  "runble",
  [RUNNING]   "run   ",
  [ZOMBIE]    "zombie"
  };
  int i;
  struct proc *p;
  char *state;
  uint64 pc[10];

  for(p = ptable.proc; p < &ptable.proc[NPROC]; p++){
    if(p->state == UNUSED)
      continue;
    if(p->state >= 0 && p->state < NELEM(states) && states[p->state])
      state = states[p->state];
    else
      state = "???";
    printf("%d %s %s", p->pid, state, p->name);
    if(p->state == SLEEPING){
      getcallerpcs((uint64*)p->context->rbp+2, pc);
      for(i=0; i<10 && pc[i] != 0; i++)
        printf(" %p", pc[i]);
    }
    printf("\n");
  }
}
#endif
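
// kill() and procdump() above are compiled out with #if 0: they have not yet
// been updated for this port (procdump still walks the old x86-64 frame
// pointer, p->context->rbp, via getcallerpcs()).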