author    Brian Behlendorf <[email protected]>  2011-07-19 11:48:46 -0700
committer Brian Behlendorf <[email protected]>  2011-07-19 16:17:01 -0700
commit    cfc9a5c88f91f7b4d606fce89505e1f404691ea5 (patch)
tree      7f55d30f6a7ea15030d2237dc18067e902fa998e /module
parent    abd39a8289c3bc2766f281933689d5140445aa85 (diff)
Fix zpl_writepage() deadlock
Disable the normal reclaim path for zpl_putpage(). This ensures that all
memory allocations under this call path will never enter direct reclaim.
If this were to happen the VM might try to write out additional pages by
calling zpl_putpage() again, resulting in a deadlock.

This situation is typically handled in Linux by marking each offending
allocation GFP_NOFS. However, since much of the code used is common it
makes more sense to use PF_MEMALLOC to flag the entire call tree.
Alternately, the code could be updated to pass the needed allocation
flags, but that's a more invasive change.

The following example of the above described deadlock was triggered by
test 074 in the xfstest suite.

Call Trace:
 [<ffffffff814dcdb2>] down_write+0x32/0x40
 [<ffffffffa05af6e4>] dnode_new_blkid+0x94/0x2d0 [zfs]
 [<ffffffffa0597d66>] dbuf_dirty+0x556/0x750 [zfs]
 [<ffffffffa05987d1>] dmu_buf_will_dirty+0x81/0xd0 [zfs]
 [<ffffffffa059ee70>] dmu_write+0x90/0x170 [zfs]
 [<ffffffffa0611afe>] zfs_putpage+0x2ce/0x360 [zfs]
 [<ffffffffa062875e>] zpl_putpage+0x1e/0x60 [zfs]
 [<ffffffffa06287b2>] zpl_writepage+0x12/0x20 [zfs]
 [<ffffffff8115f907>] writeout+0xa7/0xd0
 [<ffffffff8115fa6b>] move_to_new_page+0x13b/0x170
 [<ffffffff8115fed4>] migrate_pages+0x434/0x4c0
 [<ffffffff811559ab>] compact_zone+0x4fb/0x780
 [<ffffffff81155ed1>] compact_zone_order+0xa1/0xe0
 [<ffffffff8115602c>] try_to_compact_pages+0x11c/0x190
 [<ffffffff811200bb>] __alloc_pages_nodemask+0x5eb/0x8b0
 [<ffffffff8115464a>] alloc_pages_current+0xaa/0x110
 [<ffffffff8111e36e>] __get_free_pages+0xe/0x50
 [<ffffffffa03f0e2f>] kv_alloc+0x3f/0xb0 [spl]
 [<ffffffffa03f11d9>] spl_kmem_cache_alloc+0x339/0x660 [spl]
 [<ffffffffa05950b3>] dbuf_create+0x43/0x370 [zfs]
 [<ffffffffa0596fb1>] __dbuf_hold_impl+0x241/0x480 [zfs]
 [<ffffffffa0597276>] dbuf_hold_impl+0x86/0xc0 [zfs]
 [<ffffffffa05977ff>] dbuf_hold_level+0x1f/0x30 [zfs]
 [<ffffffffa05a9dde>] dmu_tx_check_ioerr+0x4e/0x110 [zfs]
 [<ffffffffa05aa1f9>] dmu_tx_count_write+0x359/0x6f0 [zfs]
 [<ffffffffa05aa5df>] dmu_tx_hold_write+0x4f/0x70 [zfs]
 [<ffffffffa0611a6d>] zfs_putpage+0x23d/0x360 [zfs]
 [<ffffffffa062875e>] zpl_putpage+0x1e/0x60 [zfs]
 [<ffffffff811221f9>] write_cache_pages+0x1c9/0x4a0
 [<ffffffffa0628738>] zpl_writepages+0x18/0x20 [zfs]
 [<ffffffff81122521>] do_writepages+0x21/0x40
 [<ffffffff8119bbbd>] writeback_single_inode+0xdd/0x2c0
 [<ffffffff8119bfbe>] writeback_sb_inodes+0xce/0x180
 [<ffffffff8119c11b>] writeback_inodes_wb+0xab/0x1b0
 [<ffffffff8119c4bb>] wb_writeback+0x29b/0x3f0
 [<ffffffff8119c6cb>] wb_do_writeback+0xbb/0x240
 [<ffffffff811308ea>] bdi_forker_task+0x6a/0x310
 [<ffffffff8108ddf6>] kthread+0x96/0xa0

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #327
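As background on the trade-off described above, the following is a minimal
sketch (not the ZFS code itself) contrasting the per-allocation GFP_NOFS
approach with the task-wide PF_MEMALLOC approach taken by this patch. The
helpers nofs_alloc_example() and memalloc_write_example(), and the stand-in
do_write_example(), are hypothetical names used only for illustration.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/writeback.h>

/* Hypothetical stand-in for the real write-out path (e.g. zfs_putpage()). */
static int do_write_example(struct page *pp, struct writeback_control *wbc,
    void *data);

/*
 * Per-allocation approach: mark each offending allocation GFP_NOFS so the
 * reclaim it triggers cannot recurse back into filesystem writeback.
 * Every allocation site in the shared code would need this treatment.
 */
static void *
nofs_alloc_example(size_t size)
{
	return kmalloc(size, GFP_NOFS);
}

/*
 * Whole-call-tree approach used by this patch: flag the task with
 * PF_MEMALLOC so no allocation made below this point enters direct
 * reclaim, without threading new gfp flags through the common code.
 */
static int
memalloc_write_example(struct page *pp, struct writeback_control *wbc,
    void *data)
{
	int error;

	current->flags |= PF_MEMALLOC;
	error = do_write_example(pp, wbc, data);
	current->flags &= ~PF_MEMALLOC;

	return (error);
}

Flagging the task keeps the change local to zpl_putpage() while still
covering every allocation made by the shared DMU and dbuf code it calls,
which is why the commit prefers it over passing allocation flags through
that common code.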
Diffstat (limited to 'module')
-rw-r--r--  module/zfs/zpl_file.c |  9
1 files changed, 9 insertions, 0 deletions
diff --git a/module/zfs/zpl_file.c b/module/zfs/zpl_file.c
index fc2b81329..c2e3a6bdc 100644
--- a/module/zfs/zpl_file.c
+++ b/module/zfs/zpl_file.c
@@ -354,7 +354,16 @@ zpl_putpage(struct page *pp, struct writeback_control *wbc, void *data)
 {
 	int error;
 
+	/*
+	 * Disable the normal reclaim path for zpl_putpage(). This
+	 * ensures that all memory allocations under this call path
+	 * will never enter direct reclaim. If this were to happen
+	 * the VM might try to write out additional pages by calling
+	 * zpl_putpage() again resulting in a deadlock.
+	 */
+	current->flags |= PF_MEMALLOC;
 	error = -zfs_putpage(pp, wbc, data);
+	current->flags &= ~PF_MEMALLOC;
 
 	if (error) {
 		SetPageError(pp);