A small fix to allow this test to be run from its original source
directory location, in addition to its installed location.
Change-Id: I4b7afed14ba02b1bea8d9c5f65bc96698a279188
The warnings in test47 seem to be a symptom of a larger problem,
i.e., not an issue with the test set code but rather with the GCC
configuration. Hopefully the switch to LLVM will resolve those.
Change-Id: Ic9fa3b8bc9b728947c993f2e1ed49d9a3b731344
In order to resolve page faults on file-mapped pages, VM may need to
communicate (through VFS) with a file system. The file system must
therefore not be the one to cause, and thus end up being blocked on,
such page faults. To resolve this potential deadlock, the safecopy
system was previously extended with the CPF_TRY flag, which causes the
kernel to return EFAULT to the caller of a safecopy function upon
getting a pagefault, bypassing VM and thus avoiding the loop. VFS was
extended to repeat relevant file system calls that returned EFAULT,
after resolving the page fault, to keep these soft faults from being
exposed to applications.
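For illustration, creating such a grant might look as follows; this is
a sketch rather than the actual VFS code, and the endpoint and buffer
names are made up:

    /* VFS grants the FS access to a user buffer. CPF_TRY makes the
     * kernel return EFAULT to the FS on a soft page fault, instead of
     * blocking on VM.
     */
    cp_grant_id_t grant;

    grant = cpf_grant_magic(fs_e, user_e, (vir_bytes) buf, size,
        CPF_READ | CPF_WRITE | CPF_TRY);
    if (!GRANT_VALID(grant))
        panic("out of grants");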
However, general UNIX I/O semantics dictate that if an I/O transfer
partially succeeded before running into a failure, the partial result
is to be returned. Proper file system implementations may therefore
end up returning partial success rather than the EFAULT code resulting
from a soft fault. Since VFS does not get the EFAULT code in this
case, it does not know that a soft fault occurred, and thus does not
repeat the call either. The end result is that an application may get
partial I/O results (e.g., a short read(2)) even on regular files.
Applications cannot reasonably be expected to deal with this.
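For example, typical application code treats a short read(2) on a
regular file as end of file, so a soft fault leaking through as a
short count would silently truncate the data the application sees
(process() here is just a placeholder):

    char buf[4096];
    ssize_t r;

    /* On a regular file, anything less than a full buffer is taken
     * to mean end of file; a soft-fault short count breaks this
     * pattern.
     */
    while ((r = read(fd, buf, sizeof(buf))) > 0)
        process(buf, (size_t) r);
    if (r < 0)
        err(1, "read");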
Because most current file system implementations do not implement
proper partial-failure semantics, this problem is not yet widespread.
yet widespread. In fact, it has only occurred on direct block device
I/O so far. However, the next generation of file system services will
be implementing proper I/O semantics, thus exacerbating the problem.
To remedy this situation, this patch changes the CPF_TRY semantics:
whenever the kernel experiences a soft fault during a safecopy call,
in addition to returning EFAULT, the kernel also stores a mark in the
grant created with CPF_TRY. Instead of testing for EFAULT, VFS checks
whether the grant was marked, as part of revoking the grant. If the
grant was indeed marked by the kernel, VFS repeats the file system
operation, regardless of its initial return value. Thus, the EFAULT
code now only serves to make the file system fail the call faster.
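A minimal sketch of the resulting VFS-side pattern, assuming that
cpf_revoke() is how the mark is retrieved and that it returns a
distinct GRANT_FAULTED value for marked grants; req_read() is a
hypothetical stand-in for the actual FS request code:

    do {
        grant = cpf_grant_magic(fs_e, user_e, (vir_bytes) buf, size,
            CPF_READ | CPF_TRY);
        r = req_read(fs_e, grant, size);  /* send the call to the FS */
        /* Revoking the grant reveals whether the kernel marked it due
         * to a soft fault; if so, VFS resolves the fault through VM
         * (elided here) and repeats the call, regardless of its
         * return value.
         */
    } while (cpf_revoke(grant) == GRANT_FAULTED);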
The approach is currently supported for both direct and magic grants,
but is used only with magic grants - arguably the only case where it
makes sense. Indirect grants should not have CPF_TRY set; in a chain
of indirect grants, the original grant is marked, as it should be.
In order to avoid potential SMP issues, the mark stored in the grant
is its grant identifier, so as to discard outdated kernel writes.
Whether this is necessary or effective remains to be evaluated.
This patch also cleans up the grant structure a bit, removing reserved
space and thus making the structure slightly smaller. The structure
is used internally between system services only, so there is no need
for binary compatibility.
Change-Id: I6bb3990dce67a80146d954546075ceda4d6567f8
When VM asks a file system to provide a block to satisfy a page fault
on a file memory mapping, the file system previously had no way to
inform VM that the block is a hole, since there is no corresponding
block on the underlying device. To work around this, MFS and ext2
would actually allocate a block for the hole when asked by VM, which
not only defeats the point of holes in the first place, but also does
not work on read-only file systems. With this patch, a new libminixfs
call allows the file system to inform VM about holes. This issue does
raise the question of whether the VM cache is using the right data
structures, since there are now two places where we have to fake a
device offset. This will have to be revisited in the future.
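On the file system side, the result is conceptually the following;
the lmfs_zero_block_ino() name and signature are assumptions based on
the description above:

    /* VM asks for the file block at (ino, ino_off), but no device
     * block backs that file position: report a hole to VM instead of
     * allocating a block for it.
     */
    if (block == NO_BLOCK) {
        lmfs_zero_block_ino(dev, ino, ino_off);
        return OK;
    }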
The patch changes file systems accordingly, and adds a test to test74.
Change-Id: Ib537d56b3f30a8eb05bc1f63c92b5c7428d18f4c
This patch employs one solution to resolve two independent but related
issues. Both issues are the result of one fundamental aspect of the
way VM's memory mapping works: VM uses its cache to map in blocks for
memory-mapped file regions, and for blocks already in the VM cache, VM
does not go to the file system before mapping them in. To preserve
consistency between the FS and VM caches, VM relies on being informed
about all updates to file contents through the block cache. The two
issues are both the result of VM not being properly informed about
such updates:
1. Once a file system provides libminixfs with an inode association
(inode number + inode offset) for a disk block, this association
is not broken until a new inode association is provided for it.
If a block is freed and reallocated as a metadata (non-inode)
block, its old association is maintained, and may be supplied to
VM's secondary cache. Due to reuse of inodes, it is possible
that the same inode association becomes valid for an actual file
block again. In that case, when that new file is memory-mapped,
under certain circumstances, VM may end up using the metadata
block to satisfy a page fault on the file, due to the stale inode
association. The result is a corrupted memory mapping, with the
application seeing data other than the current file contents
mapped in at the file block.
2. When a hole is created in a file, the underlying block is freed
from the device, but VM is not informed of this update, and thus,
if VM's cache contains the block with its previous inode
association, this block will remain there. As a result, if an
application subsequently memory-maps the file, VM will map in the
old block at the position of the hole, rather than an all-zeroes
block. Thus, again, the result is a corrupted memory mapping.
This patch resolves both issues by making the file system inform the
minixfs library about blocks being freed, so that libminixfs can
break the inode association for that block, both in its own cache and
in the VM cache. Since libminixfs does not know whether VM has the
block in its cache or not, it makes a call to VM for each block being
freed. Thus, this change introduces more calls to VM, but it solves
the correctness issues at hand; optimizations may be introduced
later. On the upside, all freed blocks are now marked as clean,
which should result in fewer blocks being written back to the device,
and the blocks are removed from the caches entirely, which should
result in slightly better cache usage.
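Conceptually, the block-freeing path in a file system then becomes
the following; lmfs_free_block() is an assumed name for the new
library call:

    /* MFS sketch: release the zone bit as before, and additionally
     * route the free through libminixfs, which breaks the block's
     * inode association in its own cache, marks the block clean, and
     * makes a call to VM to evict the block from the VM cache too.
     */
    free_bit(sp, ZMAP, bit);
    lmfs_free_block(dev, block);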
This patch is necessary but not sufficient to resolve the situation
with respect to memory mapping of file holes in general. Therefore,
this patch extends test74 with a (rather particular but effective)
test for the first issue, but not yet with a test for the second one.
This fixes #90.
Change-Id: Iad8b134d2f88a884f15d3fc303e463280749c467
The original one-shot page patch (git-e321f65) did not account for the
possibility of pagefaults happening while copying memory in the
kernel. This allowed a simple cp(1) from vbfs to hang the system,
since VM was repeatedly requesting the same page from the file system.
With this fix, VM no longer tries to fetch the same memory-mapped page
from VFS more than once per memory handling request from the kernel.
In addition to fixing the original issue, this change should make
handling memory somewhat more robust and ever-so-slightly faster.
Test74 has been extended with a simple test for this case.
Change-Id: I6e565f3750141e51b52ec98c938f8e1aa40070d0
This patch adds (very limited) support for memory-mapping pages on
file systems that are mounted on the special "none" device and that
do not implement PEEK support by themselves. This includes hgfs,
vbfs, and procfs.
The solution is implemented in libvtreefs, and consists of allocating
pages, filling them with content by calling the file system's READ
functionality, passing the pages to VM, and freeing them again. A new
VM flag is used to indicate that these pages should be mapped in only
once, and thus not cached beyond their single use. This prevents
stale data from getting mapped in without the involvement of the file
system, which would be problematic on file systems where file contents
may become outdated at any time. No VM caching means no sharing and
poor performance, but mmap no longer fails on these file systems.
Compared to a libc-based approach, this patch retains the on-demand
nature of mmap. In particular, tail(1) is known to map in a large
file area only to use a small portion of it.
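A rough sketch of the libvtreefs page-fault path follows; the
read_file() helper, the vm_set_cacheblock() parameters, and the
VMSF_ONCE flag name should all be treated as assumptions here:

    /* Allocate a scratch page, fill it using the file system's own
     * READ code, hand it to VM marked for one-time use, and free it.
     */
    char *buf;

    if ((buf = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE,
        MAP_ANON | MAP_PRIVATE, -1, 0)) == MAP_FAILED)
        return ENOMEM;

    memset(buf, 0, PAGE_SIZE);
    read_file(node, pos, buf, PAGE_SIZE);  /* fill from file contents */

    /* The one-shot flag tells VM not to cache the page beyond its
     * single use, so stale data can never be mapped in later without
     * the file system's involvement.
     */
    vm_set_cacheblock(buf, dev, pos, ino, pos, NULL, PAGE_SIZE,
        VMSF_ONCE);

    munmap(buf, PAGE_SIZE);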
All file systems now need to be given permission for the SETCACHEPAGE
and CLEARCACHE calls to VM.
A very basic regression test is added to test74.
Change-Id: I17afc4cb97315b515cad1542521b98f293b6b559