Now that there are services other than PM and VFS that implement
userland system calls directly, these services may need to know about
events related to user processes. In particular, signal delivery may
have to interrupt blocking system calls, and certain cleanup tasks may
have to be performed after a user process exits.
This patch aims to implement a generic, lasting solution for this
problem, by allowing services to subscribe to "signal delivered"
and/or "process exit" events from PM. PM publishes such events by
sending messages to its subscribed services, which must then reply
with an acknowledgment message.
For now, only the two aforementioned events are implemented, and only
the IPC service makes use of the process event facility.
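To illustrate, a subscribing service could handle published events
from its message loop roughly as follows. This is a minimal sketch:
the message type and field names (PROC_EVENT, PROC_EVENT_REPLY,
m_pm_lsys_proc_event) and the two helper functions are illustrative
assumptions, not necessarily the exact ABI.

  /* Sketch of a subscriber's handler for process events from PM.
   * All names below are illustrative. */
  #include <minix/sysutil.h>

  static void
  handle_proc_event(message * m_ptr)
  {
      endpoint_t endpt = m_ptr->m_pm_lsys_proc_event.endpt;

      switch (m_ptr->m_pm_lsys_proc_event.event) {
      case PROC_EVENT_SIGNAL:
          /* Abort any blocking call made by this process. */
          unblock_caller(endpt);        /* hypothetical helper */
          break;
      case PROC_EVENT_EXIT:
          /* Release any resources tied to this process. */
          cleanup_process(endpt);       /* hypothetical helper */
          break;
      }

      /* PM expects an acknowledgment for every published event. */
      m_ptr->m_type = PROC_EVENT_REPLY;
      asynsend3(m_ptr->m_source, m_ptr, AMF_NOREPLY);
  }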
The new process event publish/subscribe system replaces the previous
VM notify-sig/watch-exit/query-exit system, which was unsound: 1) it
allowed subscription to events from individual processes, and suffered
from fundamental race conditions as a result; 2) it relied on "not too
many" processes making use of the IPC server functionality in order to
avoid loss of notifications. In addition, it had the "ipc" process
name hardcoded, did not distinguish between signal delivery and exits,
and added a roundtrip to VM for all events from all processes.
Change-Id: I75ebad4bc54e646c6433f473294cb4003b2c3430
Until now, the program name of a service was always the file name
(without directory) of the service binary. The program name is used
to, among other things, find the corresponding system.conf entry.
With ASR moving to a situation where all rerandomized service binaries
are stored in a single directory, this convention can no longer be
maintained.
Instead, the service(8) command can now be instructed to override the
service program name, using its new -progname option.
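For example, a rerandomized binary could be started under its
canonical program name as follows (the path and service name are
purely illustrative):

  # Override the program name so that the right system.conf entry
  # is still found:
  service up /usr/service/asr/2/ipc -progname ipc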
Change-Id: I981e9b35232c88048d8804ec5eca58d1e4a5db82
- the userland call is now made to PM only, and PM relays the call to
other servers as appropriate (a sketch of the new call path follows
this list); this is an ABI change that will ultimately allow us to
add proper support for wait3() and the like; for the moment,
backward compatibility is retained;
- the getrusage-specific kernel subcall has been removed, as it
provided only redundant functionality and offered no way to be
extended correctly in the future - namely, by allowing the kernel
to return different values depending on whether resource usage of
the caller (self) or its children was requested;
- VM is now told whether resource usage of the caller (self) or its
children is requested, and it refrains from filling in wrong values
for information it does not have;
- VM now uses the correct unit for the ru_maxrss values;
- VFS is cut out of the loop entirely, since it does not provide any
values at the moment; a comment explains how it should be re-added.
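For illustration, the userland side of the new call path could look
like the sketch below; the call number (PM_GETRUSAGE) and the message
field names are assumptions and may not match the actual ABI.

  /* Hypothetical libc stub: the call goes to PM only, which relays
   * to the kernel and VM as appropriate. */
  #include <lib.h>
  #include <string.h>
  #include <sys/resource.h>

  int
  getrusage(int who, struct rusage * r_usage)
  {
      message m;

      memset(&m, 0, sizeof(m));
      m.m_lc_pm_rusage.who = who;
      m.m_lc_pm_rusage.addr = (vir_bytes)r_usage;

      return _syscall(PM_PROC_NR, PM_GETRUSAGE, &m);
  }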
Change-Id: I27b0f488437dec3d8e784721c67b03f2f853120f
This commit adds a basic infrastructure to support Address Space
Randomization (ASR). In a nutshell, using the already imported ASR
LLVM pass, multiple versions can be generated for the same system
service, each with a randomized, different address space layout.
Combined with the magic instrumentation for state transfer, a system
service can be live updated into another ASR-randomized version at
runtime, thus providing live rerandomization.
Since MINIX3 is not yet capable of running LLVM linker passes, the
ASR-randomized service binaries have to be pregenerated during
cross-compilation. These pregenerated binaries can then be cycled
through at runtime. This patch provides the basic proof-of-concept
infrastructure for both these parts.
In order to support pregeneration, the clientctl host script has
been extended with a "buildasr" command. It is to be used after
building the entire system with bitcode and magic support, and will
produce a given number of ASR-randomized versions of all system
services. These services are placed in /usr/service/asr in the
image that is generated as the final step by the "buildasr" command.
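A typical invocation could look as follows; the version count is an
arbitrary example.

  # After building the entire system with bitcode and magic support:
  ./clientctl buildasr 4    # pregenerate 4 randomized versions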
In order to support runtime updating, a new update_asr(8) command
has been added to MINIX3. This command attempts to live-update the
running system services into their next ASR-randomized versions.
For now, this command is not run automatically, and thus must be
invoked manually.
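For example (exact invocation details may differ):

  # Live-update all running services to their next pregenerated
  # ASR-randomized version:
  update_asr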
Technical notes:
- For various reasons, magic instrumentation is x86-only for now,
and the ASR functionality can therefore be used on x86 only as well.
- The ASR-randomized binaries are placed in numbered subdirectories
so as not to have to change their actual program names, which are
assumed to be static in various places (system.conf, procfs).
- The root partition is typically too small to contain all the
produced binaries, which is why we introduce /usr/service. There
is a symlink from /service/asr to /usr/service/asr for no other
reason than to let userland continue to assume that all services
are reachable through /service.
- The ASR count field (r_asr_count/ASRcount) maintained by RS is not
used within RS in any way; it is only passed through procfs to
userland in order to allow update_asr(8) to keep track of which
version is currently loaded without having to maintain its own
state.
- Ideally, linking a service without ASR instrumentation would also
remove all of its previously randomized versions. Currently, if the
user performs ASR instrumentation and then recompiles system
services without performing ASR instrumentation again, the
randomized binaries included in the image will be stale; for now,
the user is assumed not to do this. This aspect has to be improved
later.
- Various other issues are flagged in the comments of the various
parts of this patch.
Change-Id: I093ad57f31c18305591f64b2d491272288aa0937
Due to changed VM internals, more elaborate preparation is required
before a live update with multiple components including VM can take
place. This patch adds the essential preparation infrastructure to
VM and adapts RS to make use of it. As a side effect, it is no
longer necessary to supply RS as the last component (if at all)
during the set-up of a multicomponent live update operation.
Change-Id: If069fd3f93f96f9d5433998e4615f861465ef448
This patch employs one solution to resolve two independent but related
issues. Both issues are the result of one fundamental aspect of the
way VM's memory mapping works: VM uses its cache to map in blocks for
memory-mapped file regions, and for blocks already in the VM cache, VM
does not go to the file system before mapping them in. To preserve
consistency between the FS and VM caches, VM relies on being informed
about all updates to file contents through the block cache. The two
issues are both the result of VM not being properly informed about
such updates:
1. Once a file system provides libminixfs with an inode association
(inode number + inode offset) for a disk block, this association
is not broken until a new inode association is provided for it.
If a block is freed and reallocated as a metadata (non-inode)
block, its old association is maintained, and may be supplied to
VM's secondary cache. Due to reuse of inodes, it is possible
that the same inode association becomes valid for an actual file
block again. In that case, when that new file is memory-mapped,
under certain circumstances, VM may end up using the metadata
block to satisfy a page fault on the file, due to the stale inode
association. The result is a corrupted memory mapping, with the
application seeing data other than the current file contents
mapped in at the file block.
2. When a hole is created in a file, the underlying block is freed
from the device, but VM is not informed of this update, and thus,
if VM's cache contains the block with its previous inode
association, this block will remain there. As a result, if an
application subsequently memory-maps the file, VM will map in the
old block at the position of the hole, rather than an all-zeroes
block. Thus, again, the result is a corrupted memory mapping.
This patch resolves both issues by making the file system inform the
minixfs library about blocks being freed, so that libminixfs can
break the inode association for that block, both in its own cache and
in the VM cache. Since libminixfs does not know whether VM has the
block in its cache or not, it makes a call to VM for each block being
freed. Thus, this change introduces more calls to VM, but it solves
the correctness issues at hand; optimizations may be introduced
later. On the upside, all freed blocks are now marked as clean,
which should result in fewer blocks being written back to the device,
and the blocks are removed from the caches entirely, which should
result in slightly better cache usage.
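In outline, the approach is as sketched below; the function names
(lmfs_free_block and the VM forget call in particular) and their
signatures illustrate the idea rather than the exact interface.

  /* Sketch: the file system calls this for every disk block it
   * frees. All names below are illustrative. */
  void
  lmfs_free_block(dev_t dev, block_t block)
  {
      struct buf *bp;

      /* Break the inode association in the local cache, and mark
       * the block clean so it is not written back to the device. */
      if ((bp = find_block(dev, block)) != NULL)  /* hypothetical */
          break_association(bp);                  /* hypothetical */

      /* Ask VM to evict its copy of the block as well; this costs
       * one VM call per freed block, trading performance for
       * correctness. */
      vm_forget_block(dev, (u64_t)block * fs_block_size);
  }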
This patch is necessary but not sufficient to resolve the situation
with respect to memory mapping of file holes in general. Therefore,
this patch extends test 74 with a (rather particular but effective)
test for the first issue, but not yet with a test for the second one.
This fixes #90.
Change-Id: Iad8b134d2f88a884f15d3fc303e463280749c467