author    Richard Yao <[email protected]>  2014-08-04 07:09:32 -0400
committer Brian Behlendorf <[email protected]>  2014-09-05 15:11:43 -0700
commit    cd3939c5f06945a3883a362379d0c12e57f31a4d (patch)
tree      69f3061f7db8b8e93625151146458de3b1eac6d1
parent    1ca56e603395b2d84c8043d5ff18f2082f57e6f1 (diff)
Linux AIO Support
nfsd uses do_readv_writev() to implement fops->read and fops->write. do_readv_writev() will attempt to read/write using fops->aio_read and fops->aio_write, but it will fall back to fops->read and fops->write when AIO is not available. However, the fallback will perform a call for each individual data page. Since our default recordsize is 128KB, sequential operations on NFS will generate 32 DMU transactions where only 1 transaction was needed. That was unnecessary overhead, so we implement fops->aio_read and fops->aio_write to eliminate it.

ZFS originated in OpenSolaris, where the AIO API is implemented entirely in userland's libc by intelligently mapping the calls to VOP_WRITE, VOP_READ and VOP_FSYNC. Linux implements AIO inside the kernel itself. Linux filesystems must therefore implement their own AIO logic, and nearly all of them implement fops->aio_write synchronously. Consequently, they do not implement aio_fsync(). However, since the ZPL works by mapping Linux's VFS calls to the functions implementing Illumos' VFS operations, we instead implement AIO in the kernel by mapping the operations to the VOP_READ, VOP_WRITE and VOP_FSYNC equivalents. We therefore implement fops->aio_fsync.

One might be inclined to make our fops->aio_write implementation synchronous to make software that expects this behavior safe. However, there are several reasons not to do this:

1. Other platforms do not implement aio_write() synchronously, and since the majority of userland software using AIO should be cross-platform, expectations of synchronous behavior should not be a problem.

2. We would hurt the performance of programs that use the POSIX interfaces properly while simultaneously encouraging the creation of more non-compliant software.

3. The broader community concluded that userland software should be patched to use the POSIX interfaces properly instead of implementing hacks in filesystems to cater to broken software. This concept is best described as the O_PONIES debate.

4.
Making an asynchronous write synchronous is a non sequitur. Any software dependent on synchronous aio_write behavior will suffer data loss on ZFSOnLinux in a kernel panic / system failure of at most zfs_txg_timeout seconds, which by default is 5 seconds. This seems like a reasonable consequence of using non-compliant software.

It should be noted that this is also a problem in the kernel itself, where nfsd does not pass O_SYNC on files opened with it and instead relies on an open()/write()/close() sequence to enforce synchronous behavior, when the flush is only guaranteed on the last close. Exporting any filesystem that does not implement AIO via NFS risks data loss in the event of a kernel panic / system failure when something else is also accessing the file. Exporting any filesystem that implements AIO the way this patch does bears similar risk. However, it seems reasonable to forgo crippling our AIO implementation in favor of developing patches to fix this problem in Linux's nfsd, for the reasons stated earlier. In the interim, the risk will remain. Failing to implement AIO will not change the problem that nfsd created, so there is no reason for nfsd's mistake to block our implementation of AIO.

It also should be noted that `aio_cancel()` will always return `AIO_NOTCANCELED` under this implementation. It would be possible to implement aio_cancel by deferring work to taskqs and using `kiocb_set_cancel_fn()` to set a callback function for cancelling work sent to taskqs, but the simpler approach is permitted by the specification:

```
Which operations are cancelable is implementation-defined.
```

http://pubs.opengroup.org/onlinepubs/009695399/functions/aio_cancel.html

According to a recursive grep of my system's `/usr/src/debug`, the only programs on my system capable of using `aio_cancel()` are QEMU, beecrypt and fio. That suggests that `aio_cancel()` users are rare.
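For context, a program that needs durable writes under POSIX AIO should request them explicitly rather than assume aio_write() flushes synchronously. The following userland sketch is not part of this patch; it only illustrates the POSIX-compliant pattern the message argues for (the demo function name is invented; link with -lrt on older glibc):

```c
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/*
 * Submit an asynchronous write, wait for it to complete, then request
 * durability with an explicit aio_fsync().  Returns 0 on success.
 */
static int
demo_durable_aio_write(void)
{
	char path[] = "/tmp/aio_demo_XXXXXX";
	int fd = mkstemp(path);
	static const char msg[] = "hello, aio";
	struct aiocb cb;
	const struct aiocb *list[1];

	if (fd < 0)
		return (-1);

	memset(&cb, 0, sizeof (cb));
	cb.aio_fildes = fd;
	cb.aio_buf = (void *)msg;
	cb.aio_nbytes = sizeof (msg) - 1;
	cb.aio_offset = 0;

	/* aio_write() returns once the request is queued, not written. */
	if (aio_write(&cb) != 0)
		return (-1);

	list[0] = &cb;
	while (aio_error(&cb) == EINPROGRESS)
		aio_suspend(list, 1, NULL);
	if (aio_return(&cb) != (ssize_t)(sizeof (msg) - 1))
		return (-1);

	/* Durability comes from aio_fsync(), not from aio_write(). */
	memset(&cb, 0, sizeof (cb));
	cb.aio_fildes = fd;
	if (aio_fsync(O_SYNC, &cb) != 0)
		return (-1);
	list[0] = &cb;
	while (aio_error(&cb) == EINPROGRESS)
		aio_suspend(list, 1, NULL);
	if (aio_return(&cb) != 0)
		return (-1);

	close(fd);
	unlink(path);
	return (0);
}
```

Software written this way is safe on every platform, regardless of whether the filesystem's aio_write is synchronous.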
Implementing aio_cancel() is left to a future date, when it is clear that there are consumers whose benefit from it would justify the work.

Lastly, it is important to know that the handling of iovec updates differs between Illumos and Linux in the implementation of read/write. On Linux, it is the VFS' responsibility, while on Illumos it is the filesystem's responsibility. We take the intermediate solution of copying the iovec so that the ZFS code can update it as on Illumos while leaving the originals alone. This imposes some overhead. We could always revisit this should profiling show that the allocations are a problem.

Signed-off-by: Richard Yao <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #223
Closes #2373
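Since which operations are cancelable is implementation-defined, a portable caller must already cope with AIO_NOTCANCELED, which is what this implementation always reports. A hypothetical userland sketch of such handling (not part of this patch; the helper name is invented):

```c
#include <aio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>

/*
 * Try to cancel an in-flight request; on AIO_NOTCANCELED fall back to
 * waiting for completion.  Returns 0 once the request has been reaped.
 */
static int
demo_cancel_or_wait(void)
{
	char path[] = "/tmp/aio_cancel_XXXXXX";
	int fd = mkstemp(path);
	static const char buf[] = "data";
	struct aiocb cb;
	const struct aiocb *list[1];

	if (fd < 0)
		return (-1);

	memset(&cb, 0, sizeof (cb));
	cb.aio_fildes = fd;
	cb.aio_buf = (void *)buf;
	cb.aio_nbytes = sizeof (buf) - 1;
	if (aio_write(&cb) != 0)
		return (-1);

	switch (aio_cancel(fd, &cb)) {
	case AIO_CANCELED:
		break;		/* cancelled; aio_return() yields ECANCELED */
	case AIO_ALLDONE:
		break;		/* finished before the cancel arrived */
	case AIO_NOTCANCELED:
		/* Still running: the only portable option is to wait. */
		list[0] = &cb;
		while (aio_error(&cb) == EINPROGRESS)
			aio_suspend(list, 1, NULL);
		break;
	default:
		return (-1);	/* aio_cancel() itself failed */
	}
	(void) aio_return(&cb);

	close(fd);
	unlink(path);
	return (0);
}
```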
-rw-r--r--  include/sys/zpl.h          7
-rw-r--r--  module/zfs/zfs_replay.c    2
-rw-r--r--  module/zfs/zpl_file.c    150
-rw-r--r--  module/zfs/zpl_xattr.c     6
4 files changed, 126 insertions, 39 deletions
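The iovec-copying strategy described in the commit message can be illustrated with a userland analogue (hypothetical helper names; the patch itself uses kmem_alloc()/bcopy() inside zpl_aio_read()/zpl_aio_write()):

```c
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>

/* Duplicate the caller's iovec array so a consumer may mutate it. */
static struct iovec *
dup_iovec(const struct iovec *iovp, unsigned long nr_segs)
{
	size_t alloc_size = sizeof (struct iovec) * nr_segs;
	struct iovec *copy = malloc(alloc_size);

	if (copy != NULL)
		memcpy(copy, iovp, alloc_size);
	return (copy);
}

/*
 * The consumer advances iov_base/iov_len in the copy, Illumos-style,
 * while the caller's array stays untouched for the Linux VFS.
 * Returns 0 when the original array is verified intact.
 */
static int
demo_iovec_copy(void)
{
	char a[4], b[8];
	struct iovec orig[2] = { { a, sizeof (a) }, { b, sizeof (b) } };
	struct iovec *tmp = dup_iovec(orig, 2);

	if (tmp == NULL)
		return (-1);

	/* Simulate partial consumption of the first segment. */
	tmp[0].iov_base = (char *)tmp[0].iov_base + 2;
	tmp[0].iov_len -= 2;

	/* The caller's descriptors are unchanged. */
	if (orig[0].iov_len != sizeof (a) || orig[0].iov_base != a) {
		free(tmp);
		return (-1);
	}

	free(tmp);
	return (0);
}
```

The per-call allocation is the overhead the message mentions; the alternative would be teaching the ZFS uio consumers not to mutate the array at all.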
diff --git a/include/sys/zpl.h b/include/sys/zpl.h
index 56bd9ae5e..eb0e9f057 100644
--- a/include/sys/zpl.h
+++ b/include/sys/zpl.h
@@ -33,6 +33,7 @@
#include <linux/writeback.h>
#include <linux/falloc.h>
#include <linux/task_io_accounting_ops.h>
+#include <linux/aio.h>
/* zpl_inode.c */
extern void zpl_vap_init(vattr_t *vap, struct inode *dir,
@@ -46,9 +47,11 @@ extern dentry_operations_t zpl_dentry_operations;
/* zpl_file.c */
extern ssize_t zpl_read_common(struct inode *ip, const char *buf,
- size_t len, loff_t pos, uio_seg_t segment, int flags, cred_t *cr);
+ size_t len, loff_t *ppos, uio_seg_t segment, int flags,
+ cred_t *cr);
extern ssize_t zpl_write_common(struct inode *ip, const char *buf,
- size_t len, loff_t pos, uio_seg_t segment, int flags, cred_t *cr);
+ size_t len, loff_t *ppos, uio_seg_t segment, int flags,
+ cred_t *cr);
extern long zpl_fallocate_common(struct inode *ip, int mode,
loff_t offset, loff_t len);
diff --git a/module/zfs/zfs_replay.c b/module/zfs/zfs_replay.c
index 6ac10e262..0ca1e03b5 100644
--- a/module/zfs/zfs_replay.c
+++ b/module/zfs/zfs_replay.c
@@ -673,7 +673,7 @@ zfs_replay_write(zfs_sb_t *zsb, lr_write_t *lr, boolean_t byteswap)
zsb->z_replay_eof = eod;
}
- written = zpl_write_common(ZTOI(zp), data, length, offset,
+ written = zpl_write_common(ZTOI(zp), data, length, &offset,
UIO_SYSSPACE, 0, kcred);
if (written < 0)
error = -written;
diff --git a/module/zfs/zpl_file.c b/module/zfs/zpl_file.c
index d37cf07f9..5ea892320 100644
--- a/module/zfs/zpl_file.c
+++ b/module/zfs/zpl_file.c
@@ -115,6 +115,12 @@ zpl_fsync(struct file *filp, struct dentry *dentry, int datasync)
return (error);
}
+static int
+zpl_aio_fsync(struct kiocb *kiocb, int datasync)
+{
+ struct file *filp = kiocb->ki_filp;
+ return (zpl_fsync(filp, filp->f_path.dentry, datasync));
+}
#elif defined(HAVE_FSYNC_WITHOUT_DENTRY)
/*
* Linux 2.6.35 - 3.0 API,
@@ -137,6 +143,11 @@ zpl_fsync(struct file *filp, int datasync)
return (error);
}
+static int
+zpl_aio_fsync(struct kiocb *kiocb, int datasync)
+{
+ return (zpl_fsync(kiocb->ki_filp, datasync));
+}
#elif defined(HAVE_FSYNC_RANGE)
/*
* Linux 3.1 - 3.x API,
@@ -163,26 +174,30 @@ zpl_fsync(struct file *filp, loff_t start, loff_t end, int datasync)
return (error);
}
+
+static int
+zpl_aio_fsync(struct kiocb *kiocb, int datasync)
+{
+ return (zpl_fsync(kiocb->ki_filp, kiocb->ki_pos,
+ kiocb->ki_pos + kiocb->ki_nbytes, datasync));
+}
#else
#error "Unsupported fops->fsync() implementation"
#endif
-ssize_t
-zpl_read_common(struct inode *ip, const char *buf, size_t len, loff_t pos,
- uio_seg_t segment, int flags, cred_t *cr)
+static inline ssize_t
+zpl_read_common_iovec(struct inode *ip, const struct iovec *iovp, size_t count,
+ unsigned long nr_segs, loff_t *ppos, uio_seg_t segment,
+ int flags, cred_t *cr)
{
- int error;
ssize_t read;
- struct iovec iov;
uio_t uio;
+ int error;
- iov.iov_base = (void *)buf;
- iov.iov_len = len;
-
- uio.uio_iov = &iov;
- uio.uio_resid = len;
- uio.uio_iovcnt = 1;
- uio.uio_loffset = pos;
+ uio.uio_iov = (struct iovec *)iovp;
+ uio.uio_resid = count;
+ uio.uio_iovcnt = nr_segs;
+ uio.uio_loffset = *ppos;
uio.uio_limit = MAXOFFSET_T;
uio.uio_segflg = segment;
@@ -190,12 +205,26 @@ zpl_read_common(struct inode *ip, const char *buf, size_t len, loff_t pos,
if (error < 0)
return (error);
- read = len - uio.uio_resid;
+ read = count - uio.uio_resid;
+ *ppos += read;
task_io_account_read(read);
return (read);
}
+inline ssize_t
+zpl_read_common(struct inode *ip, const char *buf, size_t len, loff_t *ppos,
+ uio_seg_t segment, int flags, cred_t *cr)
+{
+ struct iovec iov;
+
+ iov.iov_base = (void *)buf;
+ iov.iov_len = len;
+
+ return (zpl_read_common_iovec(ip, &iov, len, 1, ppos, segment,
+ flags, cr));
+}
+
static ssize_t
zpl_read(struct file *filp, char __user *buf, size_t len, loff_t *ppos)
{
@@ -203,33 +232,50 @@ zpl_read(struct file *filp, char __user *buf, size_t len, loff_t *ppos)
ssize_t read;
crhold(cr);
- read = zpl_read_common(filp->f_mapping->host, buf, len, *ppos,
+ read = zpl_read_common(filp->f_mapping->host, buf, len, ppos,
UIO_USERSPACE, filp->f_flags, cr);
crfree(cr);
- if (read < 0)
- return (read);
+ return (read);
+}
+
+static ssize_t
+zpl_aio_read(struct kiocb *kiocb, const struct iovec *iovp,
+ unsigned long nr_segs, loff_t pos)
+{
+ cred_t *cr = CRED();
+ struct file *filp = kiocb->ki_filp;
+ size_t count = kiocb->ki_nbytes;
+ ssize_t read;
+ size_t alloc_size = sizeof (struct iovec) * nr_segs;
+ struct iovec *iov_tmp = kmem_alloc(alloc_size, KM_SLEEP);
+ bcopy(iovp, iov_tmp, alloc_size);
+
+ ASSERT(iovp);
+
+ crhold(cr);
+ read = zpl_read_common_iovec(filp->f_mapping->host, iov_tmp, count,
+ nr_segs, &kiocb->ki_pos, UIO_USERSPACE, filp->f_flags, cr);
+ crfree(cr);
+
+ kmem_free(iov_tmp, alloc_size);
- *ppos += read;
return (read);
}
-ssize_t
-zpl_write_common(struct inode *ip, const char *buf, size_t len, loff_t pos,
- uio_seg_t segment, int flags, cred_t *cr)
+static inline ssize_t
+zpl_write_common_iovec(struct inode *ip, const struct iovec *iovp, size_t count,
+ unsigned long nr_segs, loff_t *ppos, uio_seg_t segment,
+ int flags, cred_t *cr)
{
- int error;
ssize_t wrote;
- struct iovec iov;
uio_t uio;
+ int error;
- iov.iov_base = (void *)buf;
- iov.iov_len = len;
-
- uio.uio_iov = &iov;
- uio.uio_resid = len,
- uio.uio_iovcnt = 1;
- uio.uio_loffset = pos;
+ uio.uio_iov = (struct iovec *)iovp;
+ uio.uio_resid = count;
+ uio.uio_iovcnt = nr_segs;
+ uio.uio_loffset = *ppos;
uio.uio_limit = MAXOFFSET_T;
uio.uio_segflg = segment;
@@ -237,11 +283,24 @@ zpl_write_common(struct inode *ip, const char *buf, size_t len, loff_t pos,
if (error < 0)
return (error);
- wrote = len - uio.uio_resid;
+ wrote = count - uio.uio_resid;
+ *ppos += wrote;
task_io_account_write(wrote);
return (wrote);
}
+inline ssize_t
+zpl_write_common(struct inode *ip, const char *buf, size_t len, loff_t *ppos,
+ uio_seg_t segment, int flags, cred_t *cr)
+{
+ struct iovec iov;
+
+ iov.iov_base = (void *)buf;
+ iov.iov_len = len;
+
+ return (zpl_write_common_iovec(ip, &iov, len, 1, ppos, segment,
+ flags, cr));
+}
static ssize_t
zpl_write(struct file *filp, const char __user *buf, size_t len, loff_t *ppos)
@@ -250,14 +309,34 @@ zpl_write(struct file *filp, const char __user *buf, size_t len, loff_t *ppos)
ssize_t wrote;
crhold(cr);
- wrote = zpl_write_common(filp->f_mapping->host, buf, len, *ppos,
+ wrote = zpl_write_common(filp->f_mapping->host, buf, len, ppos,
UIO_USERSPACE, filp->f_flags, cr);
crfree(cr);
- if (wrote < 0)
- return (wrote);
+ return (wrote);
+}
+
+static ssize_t
+zpl_aio_write(struct kiocb *kiocb, const struct iovec *iovp,
+ unsigned long nr_segs, loff_t pos)
+{
+ cred_t *cr = CRED();
+ struct file *filp = kiocb->ki_filp;
+ size_t count = kiocb->ki_nbytes;
+ ssize_t wrote;
+ size_t alloc_size = sizeof (struct iovec) * nr_segs;
+ struct iovec *iov_tmp = kmem_alloc(alloc_size, KM_SLEEP);
+ bcopy(iovp, iov_tmp, alloc_size);
+
+ ASSERT(iovp);
+
+ crhold(cr);
+ wrote = zpl_write_common_iovec(filp->f_mapping->host, iov_tmp, count,
+ nr_segs, &kiocb->ki_pos, UIO_USERSPACE, filp->f_flags, cr);
+ crfree(cr);
+
+ kmem_free(iov_tmp, alloc_size);
- *ppos += wrote;
return (wrote);
}
@@ -646,8 +725,11 @@ const struct file_operations zpl_file_operations = {
.llseek = zpl_llseek,
.read = zpl_read,
.write = zpl_write,
+ .aio_read = zpl_aio_read,
+ .aio_write = zpl_aio_write,
.mmap = zpl_mmap,
.fsync = zpl_fsync,
+ .aio_fsync = zpl_aio_fsync,
#ifdef HAVE_FILE_FALLOCATE
.fallocate = zpl_fallocate,
#endif /* HAVE_FILE_FALLOCATE */
diff --git a/module/zfs/zpl_xattr.c b/module/zfs/zpl_xattr.c
index 107b80300..526c3f9e6 100644
--- a/module/zfs/zpl_xattr.c
+++ b/module/zfs/zpl_xattr.c
@@ -239,6 +239,7 @@ zpl_xattr_get_dir(struct inode *ip, const char *name, void *value,
{
struct inode *dxip = NULL;
struct inode *xip = NULL;
+ loff_t pos = 0;
int error;
/* Lookup the xattr directory */
@@ -261,7 +262,7 @@ zpl_xattr_get_dir(struct inode *ip, const char *name, void *value,
goto out;
}
- error = zpl_read_common(xip, value, size, 0, UIO_SYSSPACE, 0, cr);
+ error = zpl_read_common(xip, value, size, &pos, UIO_SYSSPACE, 0, cr);
out:
if (xip)
iput(xip);
@@ -357,6 +358,7 @@ zpl_xattr_set_dir(struct inode *ip, const char *name, const void *value,
ssize_t wrote;
int lookup_flags, error;
const int xattr_mode = S_IFREG | 0644;
+ loff_t pos = 0;
/*
* Lookup the xattr directory. When we're adding an entry pass
@@ -407,7 +409,7 @@ zpl_xattr_set_dir(struct inode *ip, const char *name, const void *value,
if (error)
goto out;
- wrote = zpl_write_common(xip, value, size, 0, UIO_SYSSPACE, 0, cr);
+ wrote = zpl_write_common(xip, value, size, &pos, UIO_SYSSPACE, 0, cr);
if (wrote < 0)
error = wrote;