
xen dom0 nfs hangs, oom killer and more


xen dom0 nfs hangs, oom killer and more

Mike-3
Hello,

     I have had long-term issues with xen dom0 and NFS / CIFS which
essentially make it impossible to do any reasonable amount of
network-based filesystem operations from dom0 without the risk of
excessive blocking or hanging of dom0 processes, with messages such as
"blocked for more than 120 seconds". I last posted about this problem
in 2015 and documented some very extensive troubleshooting
(https://lists.gt.net/xen/users/381469). I have never found a
resolution, across generations of hardware, OS installs, networks and
more. The penalty seems to be simply this: if you attempt to use NFS /
CIFS from dom0, you can expect I/O performance akin to dialup speeds,
and hanging / blocked processes starved for I/O that can only be
resolved with a reboot of the box.
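
For reference, these messages come from the kernel's hung-task
detector, and everything it reports lands in the kernel log:

    # hung-task warnings accumulate in the kernel log
    dmesg | grep -i 'blocked for more than'
    # the detector's timeout, 120s by default; echoing 0 into it only
    # silences the warning, it does not fix the underlying stall
    cat /proc/sys/kernel/hung_task_timeout_secs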

     So here it is in 2017 and I have taken yet another crack at all
this. I now have, yet again, a new network, new servers, new OS
installs, new switches, the works. Not only can you still not use
NFS/CIFS in dom0 without the hangs; new symptoms now arise, with the
OOM killer starting up and killing seemingly random processes in dom0,
all the while with many gigabytes of free RAM and no obvious
explanation. I am not even sure how best to document this problem, but
it's like one of those emperor-has-no-clothes things: NFS/CIFS is
unsafe and brings down xen dom0, no special effort required other than
simply trying to use it, end of story.
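
For what it's worth, this is roughly how I have been capturing the OOM
evidence (nothing exotic; the fields come from a stock `xl info'):

    # OOM killer reports, with some surrounding context
    dmesg | grep -i -B 5 -A 20 'out of memory'
    # dom0's view of memory at the time
    grep -E 'MemFree|MemAvailable|Dirty|Writeback' /proc/meminfo
    # the hypervisor's view (dom0 free RAM != host free RAM under xen)
    xl info | grep -E 'total_memory|free_memory'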

     I use NFS and CIFS in non-xen setups (bare-metal servers) and find
both to be reasonably reliable and well-performing, for nearly the
exact same workloads that would run under xen dom0 (backups - creating
compressed tar files of virtual machine image files, for example), so
I'm fairly confident that linux and these filesystems together are
reasonably stable. What I'd like to know is whether anyone else uses
NFS/CIFS from dom0, whether you are successful, and what your setup
looks like.

     I plan on setting up a test soon to smoke this out further. I'm
going to have a host set up where I can boot xen or non-xen and run the
same operations, and see if I can show definitively that this shows up
only under xen dom0, and then maybe get a clearer picture of why.
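
The plan is to drive both boots with the same job, something like this
(server name, export and paths below are placeholders):

    # identical workload under a xen boot and a plain-kernel boot
    mount -t nfs nfsserver:/export /mnt/nfs
    tar -cf - /var/lib/images | gzip > /mnt/nfs/test-backup.tar.gz
    # then compare hang reports between the two boots
    dmesg | grep -i 'blocked for more than'
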
Thank you.



Re: xen dom0 nfs hangs, oom killer and more

Dario Faggioli-2
On Sun, 2017-02-12 at 23:39 -0800, Mike wrote:
> Hello,
>
Hi,

>      I have had long-term issues with xen dom0 and NFS / CIFS which
> essentially make it impossible to do any reasonable amount of
> network-based filesystem operations from dom0 without the risk of
> excessive blocking or hanging of dom0 processes [...] can only be
> resolved with a reboot of the box.
>
Ok. I do have a testbox that mounts stuff from NFS in dom0. It works
for me, but I use it (I mean, the NFS part) in a very limited way, I
have to admit.

>      So here it is in 2017 and I have taken yet another crack at all
> this. [...] new symptoms now arise, with the OOM killer starting up
> and killing seemingly random processes in dom0, all the while with
> many gigabytes of free RAM and no obvious explanation.
>
Right. I understand this can be very frustrating. :-(

> I am not even sure how best to document this problem, but it's like
> one of those emperor-has-no-clothes things [...]
>
Well, sure. But at the same time, without information it's hard
(impossible) to figure out what's happening and get to the bottom of
the issue.

So, as a starting point, the full output of `xl dmesg' and `dmesg' is
the first thing I'd ask to see. Even more useful would be actual info
from a crash: for instance, what the OOM killer says, whether Xen
prints anything on the serial console before locking up/dying, etc.
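
Something like this (the serial console parameters below are only an
example, adjust them to your hardware):

    # hypervisor log and dom0 kernel log
    xl dmesg > xl-dmesg.txt
    dmesg > dmesg.txt
    # to catch what Xen says while dying, a serial console helps, e.g.
    # on the Xen command line:  com1=115200,8n1 console=com1,vga
    # and on the dom0 command line:  console=hvc0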

>      I plan on setting up a test soon to smoke this out further. [...]
> and see if I can show definitively that this shows up only under xen
> dom0, and then maybe get a clearer picture of why.
>
Actually, yes. If you can trigger and reproduce the bug, and provide as
many logs as possible from the exploded system, that would hopefully be
a useful starting point for a diagnosis. :-)

Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


Re: xen dom0 nfs hangs, oom killer and more

G.R.
Sorry that I didn't read your mail carefully, so forgive me if what I
mentioned does not make sense to you.
I may have hit a similar issue before.
While I cannot remember all the details, my impression is that in my
case it had something to do with the kernel compilation config.
You should avoid the aggressive kernel preemption configs (e.g.
realtime desktop / desktop). Using the 'server' config works fine for
me.
With preemption enabled, a deadlock can happen when system memory is
low and NFS or netback asks for more memory while the VMM is trying to
relinquish pages from NFS or netback. This is by no means an exact /
precise description, since it's all from my unreliable memory.

But anyway, it's a direction you could check.
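
E.g. you can check what your running kernel was built with; the config
path below is the usual Ubuntu / Debian location, it may differ on
other distros:

    # CONFIG_PREEMPT_NONE=y is the 'server' (no forced preemption) choice
    grep 'CONFIG_PREEMPT' /boot/config-$(uname -r)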


On Mon, Feb 20, 2017 at 8:27 PM, Dario Faggioli
<[hidden email]> wrote:

> [...]


Re: xen dom0 nfs hangs, oom killer and more

G.R.
Managed to dig up some background thread for my case:
http://comments.gmane.org/gmane.comp.emulators.xen.devel/155734

On Tue, Feb 21, 2017 at 11:40 AM, G.R. <[hidden email]> wrote:

> [...]


Re: xen dom0 nfs hangs, oom killer and more

Dario Faggioli-2
In reply to this post by G.R.
On Tue, 2017-02-21 at 11:40 +0800, G.R. wrote:
> Sorry that I didn't read your mail carefully, so forgive me if what I
> mentioned does not make sense to you.
>
Err, sorry, what mail? I guess it's Mike's mail, rather than mine, that
you are talking about?

Please, try to avoid top posting, and use proper quoting instead.

I know it may sound like nitpicking, but it actually does matter for
making the communication effective, and for avoiding misunderstandings
and situations like this one (where I honestly can't tell what you are
talking about, or to whom). :-)

Thanks and Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


Re: xen dom0 nfs hangs, oom killer and more

Mike-3
In reply to this post by Mike-3

>     I plan on setting up a test soon to smoke this out further. I'm
> going to have a host set up where I can boot xen or non-xen and run
> the same operations, and see if I can show definitively that this
> shows up only under xen dom0, and then maybe get a clearer picture
> of why.
> Thank you.
I ran a job that did a backup of some local filesystems, using tar and
parallel gzip compression, to an NFS-mounted directory. All was
fine... for a while... and then once again the system falls down. I
have a default-install Ubuntu and have tried all sorts of tweaks,
kernels, and sysctl variables, and NFS is simply death under xen no
matter what. Across generations of hardware, OS installs from at least
Ubuntu 12 forward, switches, and networks, this cancer of NFS death
never goes away, never changes, and is trivial to trigger (just try
using it), and I am still at a complete and total loss. Is there
anyone who successfully uses a network filesystem of any kind under
xen dom0, and if so, what is your experience?
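
The job itself was nothing special, essentially this shape (host,
export and paths below are placeholders, not my real ones):

    # local filesystems -> tar -> pigz (parallel gzip) -> NFS mount
    mount -t nfs nfsserver:/backups /mnt/backups
    tar -cf - /var/lib/xen/images | pigz -p 4 > /mnt/backups/images.tar.gz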

[ 2682.994745] INFO: task pigz:11030 blocked for more than 120 seconds.
[ 2682.994783]       Tainted: G        W       4.10.0-041000-generic
#201702191831
[ 2682.994806] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[ 2682.994832] pigz            D    0 11030  11028 0x00000000
[ 2682.994838] Call Trace:
[ 2682.994853]  __schedule+0x233/0x6f0
[ 2682.994857]  ? bit_wait+0x60/0x60
[ 2682.994859]  schedule+0x36/0x80
[ 2682.994863]  schedule_timeout+0x22a/0x3f0
[ 2682.994869]  ? node_dirty_ok+0x12c/0x170
[ 2682.994872]  ? get_page_from_freelist+0x27c/0xb20
[ 2682.994877]  ? ___slab_alloc+0x3a0/0x4b0
[ 2682.994884]  ? ktime_get+0x41/0xb0
[ 2682.994886]  ? bit_wait+0x60/0x60
[ 2682.994888]  io_schedule_timeout+0xa4/0x110
[ 2682.994892]  ? _raw_spin_unlock_irqrestore+0x1a/0x20
[ 2682.994894]  bit_wait_io+0x1b/0x60
[ 2682.994896]  __wait_on_bit+0x58/0x90
[ 2682.994898]  ? bit_wait+0x60/0x60
[ 2682.994900]  out_of_line_wait_on_bit+0x82/0xb0
[ 2682.994907]  ? autoremove_wake_function+0x40/0x40
[ 2682.994937]  nfs_wait_on_request+0x37/0x40 [nfs]
[ 2682.994952]  nfs_writepage_setup+0xd1/0x6f0 [nfs]
[ 2682.994965]  nfs_updatepage+0x107/0x3a0 [nfs]
[ 2682.994971]  ? __check_object_size+0x100/0x1d7
[ 2682.994983]  nfs_write_end+0xf8/0x570 [nfs]
[ 2682.994990]  generic_perform_write+0x10f/0x1c0
[ 2682.995002]  nfs_file_write+0xdc/0x220 [nfs]
[ 2682.995006]  __vfs_write+0xe5/0x160
[ 2682.995010]  vfs_write+0xb5/0x1a0
[ 2682.995012]  SyS_write+0x55/0xc0
[ 2682.995016]  entry_SYSCALL_64_fastpath+0x1e/0xad
[ 2682.995019] RIP: 0033:0x7f9ad7f994bd
[ 2682.995021] RSP: 002b:00007f9ad769be50 EFLAGS: 00000293 ORIG_RAX:
0000000000000001
[ 2682.995024] RAX: ffffffffffffffda RBX: 000000000062f500 RCX:
00007f9ad7f994bd
[ 2682.995025] RDX: 0000000000018635 RSI: 00007f9ad8304010 RDI:
0000000000000001
[ 2682.995027] RBP: 0000000001a89c00 R08: 0000000000000000 R09:
0000000000000000
[ 2682.995028] R10: 0000000000000000 R11: 0000000000000293 R12:
0000000000000000
[ 2682.995030] R13: 0000000000020000 R14: 00007f9a9c6ffa98 R15:
00007f9a84042040
[ 2803.829527] INFO: task pigz:11030 blocked for more than 120 seconds.
[ 2803.829556]       Tainted: G        W       4.10.0-041000-generic
#201702191831
[ 2803.829577] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[ 2803.829600] pigz            D    0 11030  11028 0x00000000
[ 2803.829604] Call Trace:
[ 2803.829613]  __schedule+0x233/0x6f0
[ 2803.829615]  ? bit_wait+0x60/0x60
[ 2803.829616]  schedule+0x36/0x80
[ 2803.829618]  schedule_timeout+0x22a/0x3f0
[ 2803.829622]  ? node_dirty_ok+0x12c/0x170
[ 2803.829624]  ? get_page_from_freelist+0x27c/0xb20
[ 2803.829627]  ? ___slab_alloc+0x3a0/0x4b0
[ 2803.829631]  ? ktime_get+0x41/0xb0
[ 2803.829632]  ? bit_wait+0x60/0x60
[ 2803.829633]  io_schedule_timeout+0xa4/0x110
[ 2803.829635]  ? _raw_spin_unlock_irqrestore+0x1a/0x20
[ 2803.829637]  bit_wait_io+0x1b/0x60
[ 2803.829638]  __wait_on_bit+0x58/0x90
[ 2803.829639]  ? bit_wait+0x60/0x60
[ 2803.829641]  out_of_line_wait_on_bit+0x82/0xb0
[ 2803.829644]  ? autoremove_wake_function+0x40/0x40
[ 2803.829663]  nfs_wait_on_request+0x37/0x40 [nfs]
[ 2803.829672]  nfs_writepage_setup+0xd1/0x6f0 [nfs]
[ 2803.829680]  nfs_updatepage+0x107/0x3a0 [nfs]
[ 2803.829683]  ? __check_object_size+0x100/0x1d7
[ 2803.829691]  nfs_write_end+0xf8/0x570 [nfs]
[ 2803.829695]  generic_perform_write+0x10f/0x1c0
[ 2803.829702]  nfs_file_write+0xdc/0x220 [nfs]
[ 2803.829705]  __vfs_write+0xe5/0x160
[ 2803.829707]  vfs_write+0xb5/0x1a0
[ 2803.829708]  SyS_write+0x55/0xc0
[ 2803.829711]  entry_SYSCALL_64_fastpath+0x1e/0xad
[ 2803.829712] RIP: 0033:0x7f9ad7f994bd
[ 2803.829713] RSP: 002b:00007f9ad769be50 EFLAGS: 00000293 ORIG_RAX:
0000000000000001
[ 2803.829715] RAX: ffffffffffffffda RBX: 000000000062f500 RCX:
00007f9ad7f994bd
[ 2803.829716] RDX: 0000000000018635 RSI: 00007f9ad8304010 RDI:
0000000000000001
[ 2803.829717] RBP: 0000000001a89c00 R08: 0000000000000000 R09:
0000000000000000
[ 2803.829718] R10: 0000000000000000 R11: 0000000000000293 R12:
0000000000000000
[ 2803.829719] R13: 0000000000020000 R14: 00007f9a9c6ffa98 R15:
00007f9a84042040
[ 4374.681798] INFO: task pigz:11030 blocked for more than 120 seconds.
[ 4374.681829]       Tainted: G        W       4.10.0-041000-generic
#201702191831
[ 4374.681850] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[ 4374.681874] pigz            D    0 11030  11028 0x00000000
[ 4374.681878] Call Trace:
[ 4374.681887]  __schedule+0x233/0x6f0
[ 4374.681889]  ? bit_wait+0x60/0x60
[ 4374.681891]  schedule+0x36/0x80
[ 4374.681893]  schedule_timeout+0x22a/0x3f0
[ 4374.681896]  ? node_dirty_ok+0x12c/0x170
[ 4374.681900]  ? update_load_avg+0x6b/0x510
[ 4374.681903]  ? ktime_get+0x41/0xb0
[ 4374.681905]  ? bit_wait+0x60/0x60
[ 4374.681906]  io_schedule_timeout+0xa4/0x110
[ 4374.681908]  ? _raw_spin_unlock_irqrestore+0x1a/0x20
[ 4374.681909]  bit_wait_io+0x1b/0x60
[ 4374.681911]  __wait_on_bit+0x58/0x90
[ 4374.681912]  ? bit_wait+0x60/0x60
[ 4374.681913]  out_of_line_wait_on_bit+0x82/0xb0
[ 4374.681916]  ? autoremove_wake_function+0x40/0x40
[ 4374.681934]  nfs_wait_on_request+0x37/0x40 [nfs]
[ 4374.681943]  nfs_writepage_setup+0xd1/0x6f0 [nfs]
[ 4374.681951]  nfs_updatepage+0x107/0x3a0 [nfs]
[ 4374.681954]  ? __check_object_size+0x100/0x1d7
[ 4374.681962]  nfs_write_end+0xf8/0x570 [nfs]
[ 4374.681966]  generic_perform_write+0x10f/0x1c0
[ 4374.681977]  nfs_file_write+0xdc/0x220 [nfs]
[ 4374.681980]  __vfs_write+0xe5/0x160
[ 4374.681983]  vfs_write+0xb5/0x1a0
[ 4374.681986]  SyS_write+0x55/0xc0
[ 4374.681989]  entry_SYSCALL_64_fastpath+0x1e/0xad
[ 4374.681991] RIP: 0033:0x7f9ad7f994bd
[ 4374.681993] RSP: 002b:00007f9ad769be50 EFLAGS: 00000293 ORIG_RAX:
0000000000000001
[ 4374.681995] RAX: ffffffffffffffda RBX: 0000000000000001 RCX:
00007f9ad7f994bd
[ 4374.681997] RDX: 000000000000024f RSI: 00007f9abc1de010 RDI:
0000000000000001
[ 4374.681998] RBP: 0000000001a684f0 R08: 0000000000000000 R09:
0000000000000000
[ 4374.682000] R10: 0000000000000000 R11: 0000000000000293 R12:
000000000000024f
[ 4374.682001] R13: 0000000000020000 R14: 00007f9ad414c010 R15:
00007f9ab40420f0
[ 5051.805211] EXT4-fs (dm-7): mounted filesystem with ordered data
mode. Opts: (null)
[ 5446.073199] EXT4-fs (dm-7): 1 orphan inode deleted
[ 5446.073203] EXT4-fs (dm-7): recovery complete
[ 5446.082484] EXT4-fs (dm-7): mounted filesystem with ordered data
mode. Opts: (null)


