On many systems, malloc() can allocate memory outside the brk area, so a
calculation based on sbrk() misses those allocations. When LLgen or ncgg
reported its memory usage, the value was probably too low.
Add USEMALLOC and enable it by default. You can switch back to brk()
by removing `#define USEMALLOC` in memory.c.
USEMALLOC tells the allocator to use malloc() and realloc(), not
brk(). This might help systems where brk() doesn't work, or where
malloc() can allocate outside the brk area.
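A rough sketch of the idea, not the real memory.c (the arena and arena_grow
names are invented for illustration):

    /* Illustrative sketch only; the real memory.c differs.  With USEMALLOC
     * the arena lives in malloc()ed memory and grows with realloc(); without
     * it, the arena grows by moving the program break with sbrk(). */
    #define USEMALLOC       /* remove this line to switch back to brk() */

    #include <stdlib.h>
    #ifndef USEMALLOC
    #include <unistd.h>
    #endif

    static char  *arena;
    static size_t arena_size;

    static int arena_grow(size_t newsize)
    {
    #ifdef USEMALLOC
        char *p = realloc(arena, newsize);          /* may move the arena */

        if (p == NULL)
            return 0;
        arena = p;
    #else
        char *p = sbrk((long)(newsize - arena_size));   /* extend the break */

        if (p == (char *)-1)
            return 0;
        if (arena == NULL)
            arena = p;                              /* first call sets the base */
    #endif
        arena_size = newsize;
        return 1;
    }

    int main(void)
    {
        return arena_grow(4096) ? 0 : 1;
    }
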
My build shows no changes in share/ack/examples (except hilo_bas.*).
Option -u was passing an offset, measured from modulptr(0) in ALLOMODL, to a
string that actually lived in argv. If entername() moved ALLOMODL to make
room in ALLOGCHR, the offset became invalid and the string was lost. This
fix copies the string into ALLOMODL.
In practice this was rarely a problem, because the initial size of ALLOGCHR
in mach.h is probably large enough for -u. It became a problem when I caused
the initial allocations to fail, and even then only because the B runtime
uses -u.
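A self-contained analog of the bug and the fix, not the linker's code (the
arena here just stands in for ALLOMODL, and realloc() for whatever moves it):

    /* Analog only, not the real linker code.  Keeping just an offset (or a
     * pointer) to a string that lives outside a movable arena breaks as soon
     * as the arena moves; copying the string into the arena is the fix,
     * because offsets into the arena stay valid after a move. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static char  *arena;        /* may move whenever it grows */
    static size_t arena_len;

    static size_t arena_add(const char *s)
    {
        size_t off = arena_len;
        size_t need = strlen(s) + 1;
        char *p = realloc(arena, arena_len + need);

        if (p == NULL)
            abort();
        arena = p;
        memcpy(arena + off, s, need);   /* the fix: the bytes live inside */
        arena_len += need;
        return off;
    }

    int main(int argc, char **argv)
    {
        size_t off = arena_add(argc > 1 ? argv[1] : "name given with -u");

        arena_add("later entries may move the arena");
        printf("%s\n", arena + off);    /* still valid after the move */
        return 0;
    }
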
Also move the declarations of `incore` and `core_alloc` to "memory.h".
Also correct SYMDEBUG to SYMDBUG. (I don't know if SYMDBUG works
because our build system never defines it.)
ind_t becomes an alias of size_t. Because ind_t is now unsigned, I edit some
code that relied on negative ind_t values. Some casts disappear, such as
(long)sizeof(...), because sizeof already yields a size_t. The overflow
checks change: callers with a size too big for size_t must check it before
calling the memory allocator. The overflow check of BASE + incr in memory.c
sbreak() now happens on all platforms, not only where a pointer is smaller
than a long.
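A sketch of what such a check looks like with size_t operands (only the
names BASE and incr come from the text; the helper around them is invented):
unsigned arithmetic wraps silently, so the comparison has to happen before
the addition.

    /* Sketch only: detecting whether base + incr would wrap around in
     * size_t arithmetic, without performing the overflowing addition. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    static int would_overflow(size_t base, size_t incr)
    {
        return incr > SIZE_MAX - base;      /* true if base + incr would wrap */
    }

    int main(void)
    {
        printf("%d\n", would_overflow(SIZE_MAX - 1, 5));    /* prints 1 */
        return 0;
    }
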
My build shows no changes in share/ack/examples (except hilo_bas.*
changing with every build).
Remove some declarations (not all correct) and #include <errno.h>,
<time.h>, and <unistd.h> to get the correct declarations.
Disable mount(2), umount(2), and stime(2) because BSD (around
4.3BSD-Reno) lost compatibility with these Unix v7 functions.
em libmon vanished decades ago (or never existed), and ass also appears to
have a different idea of what the em opcodes are than everything else does,
so it gets confused.
CS eliminates outer expressions before inner ones, such as `x * y * z`
before `x * y`. It does this by reversing the order of expressions in the
code. This almost always works, but it can fail when a STI changes the
value number of a LOI. In code like `expr1 LOI expr2 STI expr2 LOI`, CS
might eliminate the inner `expr2` before the outer `expr2 LOI`. This caused
a read after free, because the occurrence of `expr2 LOI` pointed at the
already eliminated lines of `expr2`.
The bug went unnoticed until my recent changes caused CS to crash with a
double free. I did not get the crash on OpenBSD, but I saw it in Travis,
and then David Given reproduced it on Linux. See the discussion in
https://github.com/davidgiven/ack/pull/73
Global names could be added in two ways: one via the -U command line option,
and one via file scanning. It turns out only the second would increment the
number of global names, so adding names with -U would cause names found via
scanning to fall off the end of the list! This wouldn't cause linker errors,
because fixups don't use the list, but it would make the symbol table
generated in the output incorrect.
Enable this in CS for PowerPC; disable it for all other machines.
PowerPC has no remainder instruction, so the back end uses division to
compute remainders. If CS finds both a / b and a % b, it now rewrites
a % b as a - b * (a / b) and computes a / b only once. This removes an
extra division from the PowerPC code, so it saves both time and space.
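The identity behind the rewrite, shown in plain C rather than EM (a sketch,
not the ego code):

    /* Sketch in C, not the ego code.  For integers with b != 0 and no
     * overflow in a / b, C guarantees a == (a / b) * b + a % b, so the
     * remainder falls out of the quotient with a single division. */
    #include <assert.h>

    static void div_and_rem(int a, int b, int *quot, int *rem)
    {
        int q = a / b;      /* the only division */

        *quot = q;
        *rem = a - b * q;   /* same value as a % b */
    }

    int main(void)
    {
        int q, r;

        div_and_rem(-7, 3, &q, &r);
        assert(q == -7 / 3 && r == -7 % 3);
        return 0;
    }
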
I have not considered whether to enable this optimization for other
machines. It might be less useful on machines with a remainder instruction.
Also, if a % b occurs before a / b, the EM code gets a DUP; the PowerPC ncg
handles this DUP well, but other back ends might not.
In ego, the CS phase may convert a LAR/SAR to AAR LOI/STI so it can
optimize multiple occurrences of AAR of the same array element. This
conversion should not happen if it would LOI/STI a large or unknown
size.
In cs_profit.c, okay_lines() checked the size of each AAR occurrence except
the first. If the first AAR was the implicit AAR in a LAR/SAR, the
conversion happened without checking the size; for an unknown size, this
produced a bad LOI -1 or STI -1. The fix checks the size earlier: if a
LAR/SAR has a bad size, it is not entered as an AAR.
This Modula-2 code showed the bug. Given M.def:
    DEFINITION MODULE M;
        TYPE S = SET OF [0..95];
        PROCEDURE F(a: ARRAY OF S; i, j: INTEGER);
    END M.
and M.mod:
    (*$R-*) IMPLEMENTATION MODULE M;
        FROM SYSTEM IMPORT ADDRESS, ADR;
        PROCEDURE G(s: S; p, q: ADDRESS; t: S); BEGIN
            s := s; p := p; q := q; t := t;
        END G;
        PROCEDURE F(a: ARRAY OF S; i, j: INTEGER); BEGIN
            G(a[i + j], ADR(a[i + j]), ADR(a[i + j]), a[i + j])
        END F;
    END M.
then the bug caused an error:
    $ ack -mlinuxppc -O3 -c.e M.mod
    /tmp/Ack_b357d.g, line 57: Argument range error
The bug had put LOI -1 in the code, then em_decode got an error
because -1 is out of range for LOI.
Procedure F has 4 occurrences of `a[i + j]`. The size of `a[i + j]` is 96
bits, or 12 bytes, but the EM code hides the size in an array descriptor,
so the size is unknown to CS. The pragma `(*$R-*)` disables the range check
on `i + j` so that CS can do its work. EM uses AAR for the 2 occurrences of
`ADR(a[i + j])` and LAR for the other 2 occurrences of `a[i + j]`. EM
pushes the arguments to G in reverse order, so the last `a[i + j]` in the
Modula-2 source is the first LAR in the EM code.
CS found 4 occurrences of AAR; the first was the implicit AAR in a LAR.
Because of the bug, CS converted this LAR 4 into AAR 4 LOI -1.
- In share/debug.c, undo my mistake in commit 9037d13 by changing
vfprintf back to fprintf in OUTTRACE.
- In ud/ud.c, move the trace output from stdout to stderr, because
  stdout carries ego's output file, which becomes opt2's input file. If
  trace output goes to stdout, it gets prepended to the output file,
  and opt2 fails with "wrong input file".
I also edit both build.lua files so ego depends on its header files;
this part isn't needed for -DTRACE.
One can now use -DTRACE by adding it to the cflags in both build.lua
files.
I made a syntax error in some .e file, and em_encode dumped core
because a 64-bit pointer didn't fit in a 32-bit int. Now use stdarg
to pass pointers to error() and fatal().
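A minimal sketch of such a stdarg-based pair; the real error() and fatal()
in em_encode differ:

    /* Minimal sketch, not the real em_encode code.  Letting vfprintf() pull
     * the arguments through stdarg keeps pointer arguments intact even where
     * a pointer does not fit in an int. */
    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    int nerrors;

    void error(const char *fmt, ...)
    {
        va_list ap;

        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
        fputc('\n', stderr);
        nerrors++;
    }

    void fatal(const char *fmt, ...)
    {
        va_list ap;

        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
        fputc('\n', stderr);
        exit(EXIT_FAILURE);
    }

    int main(void)
    {
        error("unexpected token near %s", "some_label");
        return nerrors != 0;
    }
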
Stop using the number of errors as the exit status. Many systems use
only the low 8 bits of the exit status, so 256 errors would become 0.
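For illustration (a sketch, not the actual change):

    /* Sketch: many systems keep only the low 8 bits of the exit status, so
     * exiting with 256 accumulated errors would look like success. */
    #include <stdlib.h>

    int main(void)
    {
        int nerrors = 256;

        /* old scheme: exit(nerrors) -- the caller would see 256 & 0xff == 0 */
        return nerrors != 0 ? EXIT_FAILURE : EXIT_SUCCESS;
    }
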
Also change modules/src/print to accept const char *buf.