GNU bug report logs - #38086
RAID installation script with ‘mdadm’ no longer works


Package: guix;

Reported by: Ludovic Courtès <ludo <at> gnu.org>

Date: Wed, 6 Nov 2019 10:14:02 UTC

Severity: important

Done: Ludovic Courtès <ludo <at> gnu.org>

Bug is archived. No further changes may be made.

To add a comment to this bug, you must first unarchive it, by sending
a message to control AT debbugs.gnu.org, with unarchive 38086 in the body.
You can then email your comments to 38086 AT debbugs.gnu.org in the normal way.




Report forwarded to andreas <at> enge.fr, bug-guix <at> gnu.org:
bug#38086; Package guix. (Wed, 06 Nov 2019 10:14:02 GMT) Full text and rfc822 format available.

Acknowledgement sent to Ludovic Courtès <ludo <at> gnu.org>:
New bug report received and forwarded. Copy sent to andreas <at> enge.fr, bug-guix <at> gnu.org. (Wed, 06 Nov 2019 10:14:02 GMT) Full text and rfc822 format available.

Message #5 received at submit <at> debbugs.gnu.org (full text, mbox):

From: Ludovic Courtès <ludo <at> gnu.org>
To: bug-Guix <at> gnu.org
Subject: RAID installation script with ‘mdadm’ no longer works
Date: Wed, 06 Nov 2019 11:13:13 +0100
Hello,

Looks like our RAID installation method no longer works, as can be seen
at <https://ci.guix.gnu.org/build/1906208/details>:

--8<---------------cut here---------------start------------->8---
+ guix --version
guix (GNU Guix) c4de60ac3c6aa5b46519011af89988215c347e9e
Copyright (C) 2019 the Guix authors
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
+ export GUIX_BUILD_OPTIONS=--no-grafts
+ GUIX_BUILD_OPTIONS=--no-grafts
+ parted --script /dev/vdb mklabel gpt mkpart primary ext2 1M 3M mkpart primary ext2 3M 600M mkpart primary ext2 600M 1200M set 1 boot on set 1 bios_grub on
+ mdadm --create /dev/md0 --verbose --level=stripe --raid-devices=2 /dev/vdb2 /dev/vdb3
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
[   13.890586] md/raid0:md0: cannot assemble multi-zone RAID0 with default_layout setting
[   13.894691] md/raid0: please set raid0.default_layout to 1 or 2
[   13.896000] md: pers->run() failed ...
mdadm: RUN_ARRAY failed: Unknown error 524
[   13.901603] md: md0 stopped.
--8<---------------cut here---------------end--------------->8---

Anyone know what it takes to “set raid0.default_layout to 1 or 2”?

We should then update (gnu tests install) and the manual accordingly.

Thanks,
Ludo’.




Information forwarded to bug-guix <at> gnu.org:
bug#38086; Package guix. (Wed, 06 Nov 2019 11:09:02 GMT) Full text and rfc822 format available.

Message #8 received at submit <at> debbugs.gnu.org (full text, mbox):

From: Gábor Boskovits <boskovits <at> gmail.com>
To: Ludovic Courtès <ludo <at> gnu.org>
Cc: bug-Guix <at> gnu.org
Subject: Re: bug#38086: RAID installation script with ‘mdadm’ no longer works
Date: Wed, 6 Nov 2019 12:07:39 +0100
[Message part 1 (text/plain, inline)]
Hello Ludo,


Ludovic Courtès <ludo <at> gnu.org> wrote (on Wed, Nov 6, 2019 at 11:14):

> Hello,
>
> Looks like our RAID installation method no longer works, as can be seen
> at <https://ci.guix.gnu.org/build/1906208/details>:
>
> --8<---------------cut here---------------start------------->8---
> + guix --version
> guix (GNU Guix) c4de60ac3c6aa5b46519011af89988215c347e9e
> Copyright (C) 2019 the Guix authors
> License GPLv3+: GNU GPL version 3 or later <
> http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.
> + export GUIX_BUILD_OPTIONS=--no-grafts
> + GUIX_BUILD_OPTIONS=--no-grafts
> + parted --script /dev/vdb mklabel gpt mkpart primary ext2 1M 3M mkpart
> primary ext2 3M 600M mkpart primary ext2 600M 1200M set 1 boot on set 1
> bios_grub on
> + mdadm --create /dev/md0 --verbose --level=stripe --raid-devices=2
> /dev/vdb2 /dev/vdb3
> mdadm: chunk size defaults to 512K
> mdadm: Defaulting to version 1.2 metadata
> [   13.890586] md/raid0:md0: cannot assemble multi-zone RAID0 with
> default_layout setting
> [   13.894691] md/raid0: please set raid0.default_layout to 1 or 2
> [   13.896000] md: pers->run() failed ...
> mdadm: RUN_ARRAY failed: Unknown error 524
> [   13.901603] md: md0 stopped.
> --8<---------------cut here---------------end--------------->8---
>
> Anyone knows what it takes to “set raid0.default_layout to 1 or 2”?
>

On kernel 5.3.4 and above, the raid0.default_layout=2 kernel boot
parameter should be set.  We should generate our grub configuration
accordingly.

See this for reference:
https://blog.icod.de/2019/10/10/caution-kernel-5-3-4-and-raid0-default_layout/
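
For reference, a minimal sketch of the two ways to set it, using the
module and parameter names that appear later in this thread (the right
value, 1 or 2, depends on which kernel the array was created with; see
the post above):

--8<---------------cut here---------------start------------->8---
# For the running kernel, before assembling the array:
modprobe raid0
echo 2 > /sys/module/raid0/parameters/default_layout

# Or persistently, as a kernel command-line argument at boot
# (on Guix System, via ‘kernel-arguments’ in the operating-system
# declaration):
#   raid0.default_layout=2
--8<---------------cut here---------------end--------------->8---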



> We should then update (gnu tests install) and the manual accordingly.
>
> Thanks,
> Ludo’.
>
Best regards,
g_bor

-- 
OpenPGP Key Fingerprint: 7988:3B9F:7D6A:4DBF:3719:0367:2506:A96C:CF63:0B21
[Message part 2 (text/html, inline)]

Information forwarded to bug-guix <at> gnu.org:
bug#38086; Package guix. (Mon, 11 Nov 2019 23:30:02 GMT) Full text and rfc822 format available.

Message #11 received at 38086 <at> debbugs.gnu.org (full text, mbox):

From: Ludovic Courtès <ludo <at> gnu.org>
To: Gábor Boskovits <boskovits <at> gmail.com>
Cc: 38086 <at> debbugs.gnu.org
Subject: Re: bug#38086: RAID installation script with ‘mdadm’ no longer works
Date: Tue, 12 Nov 2019 00:28:58 +0100
[Message part 1 (text/plain, inline)]
Hi Gábor,

Gábor Boskovits <boskovits <at> gmail.com> skribis:

>> + mdadm --create /dev/md0 --verbose --level=stripe --raid-devices=2
>> /dev/vdb2 /dev/vdb3
>> mdadm: chunk size defaults to 512K
>> mdadm: Defaulting to version 1.2 metadata
>> [   13.890586] md/raid0:md0: cannot assemble multi-zone RAID0 with
>> default_layout setting
>> [   13.894691] md/raid0: please set raid0.default_layout to 1 or 2
>> [   13.896000] md: pers->run() failed ...
>> mdadm: RUN_ARRAY failed: Unknown error 524
>> [   13.901603] md: md0 stopped.
>> --8<---------------cut here---------------end--------------->8---
>>
>> Anyone knows what it takes to “set raid0.default_layout to 1 or 2”?
>>
>
> On kernel 5.3.4 and above the
> raid0.default_layout=2 kernel boot parameter should be set. We should
> generate our grub configuration accordingly.

That’s part of the solution, thank you!

With the patch below, the “raid-root-os” test successfully installs the
system to a RAID0 device, but then that system fails to boot with:

--8<---------------cut here---------------start------------->8---
Booting from Hard Disk...
GRUB loading.
Welcome to GRUB!

error: invalid arch-independent ELF magic.
Entering rescue mode...
--8<---------------cut here---------------end--------------->8---

(It sits there forever.)

Are we missing something in ‘grub.cfg’?  If so, I wonder if that problem
arose with the upgrade in commit
069ab3bbfde704760acaca20dff8a29d167c6be5.

Thoughts?

Ludo’.

[Message part 2 (text/x-patch, inline)]
diff --git a/gnu/tests/install.scm b/gnu/tests/install.scm
index 22c9554705..5e421f7c54 100644
--- a/gnu/tests/install.scm
+++ b/gnu/tests/install.scm
@@ -543,7 +543,8 @@ where /gnu lives on a separate partition.")
     (bootloader (bootloader-configuration
                  (bootloader grub-bootloader)
                  (target "/dev/vdb")))
-    (kernel-arguments '("console=ttyS0"))
+    (kernel-arguments '("console=ttyS0"
+                        "raid0.default_layout=2"))
 
     ;; Add a kernel module for RAID-0 (aka. "stripe").
     (initrd-modules (cons "raid0" %base-initrd-modules))
@@ -578,9 +579,11 @@ export GUIX_BUILD_OPTIONS=--no-grafts
 parted --script /dev/vdb mklabel gpt \\
   mkpart primary ext2 1M 3M \\
   mkpart primary ext2 3M 600M \\
-  mkpart primary ext2 600M 1200M \\
+  mkpart primary ext2 600M 1.4G \\
   set 1 boot on \\
   set 1 bios_grub on
+modprobe raid0
+echo 1 > /sys/module/raid0/parameters/default_layout
 mdadm --create /dev/md0 --verbose --level=stripe --raid-devices=2 \\
   /dev/vdb2 /dev/vdb3
 mkfs.ext4 -L root-fs /dev/md0

Severity set to 'important' from 'normal' Request was from Ludovic Courtès <ludo <at> gnu.org> to control <at> debbugs.gnu.org. (Mon, 11 Nov 2019 23:30:02 GMT) Full text and rfc822 format available.

Information forwarded to bug-guix <at> gnu.org:
bug#38086; Package guix. (Fri, 22 Nov 2019 18:31:01 GMT) Full text and rfc822 format available.

Message #16 received at 38086 <at> debbugs.gnu.org (full text, mbox):

From: Ludovic Courtès <ludo <at> gnu.org>
To: Gábor Boskovits <boskovits <at> gmail.com>
Cc: 38086 <at> debbugs.gnu.org
Subject: Re: bug#38086: RAID installation script with ‘mdadm’ no longer works
Date: Fri, 22 Nov 2019 19:30:43 +0100
An update: this is the last known good test:

  https://berlin.guixsd.org/build/1793057/details

and this is the first known-bad one (‘mdadm’ failing with “cannot assemble
multi-zone RAID0 with default_layout setting”):

  https://berlin.guixsd.org/build/1795351/details

We have to resort to an ugly hack to get the evaluation number and
corresponding commit of each build because they aren’t accessible over
HTTP (which is unfortunate!):

--8<---------------cut here---------------start------------->8---
sqlite> select * from builds where rowid = 1793057;
/gnu/store/618hm2w0clcrxz16yww846mgqdc1l4s0-raid-root-os.drv|7863|test.raid-root-os.i686-linux|i686-linux|raid-root-os||0|1570439988|1570459635|1570459744
sqlite> select * from checkouts where evaluation = 7863;
guix-master|7b6a8e23b0de18262a42e44432f955517d71d796|7863|guix|/gnu/store/7sd2lwj83n6kyn66p9bdgs5yvzqnl539-guix-7b6a8e2
sqlite> select * from builds where rowid = 1795351;
/gnu/store/qskl45gw9y9hd8qp7s5451d53pvpc60q-raid-root-os.drv|7867|test.raid-root-os.i686-linux|i686-linux|raid-root-os||2|1570440409|0|1570457622
sqlite> select * from checkouts where evaluation = 7867;
guix-master|7d82e920717f08bceb42bb570d786dff233171e1|7867|guix|/gnu/store/b2cq9zhdsz4qri2xkg3rgwmyri0wyxxb-guix-7d82e92
--8<---------------cut here---------------end--------------->8---

So the commit that introduced the change of behavior of ‘mdadm’ is:

--8<---------------cut here---------------start------------->8---
commit 7d82e920717f08bceb42bb570d786dff233171e1
Date:   Sun Oct 6 06:07:15 2019 +0000

    gnu: linux-libre: Update to 5.3.4.
--8<---------------cut here---------------end--------------->8---

And indeed that brings us back to:

  https://blog.icod.de/2019/10/10/caution-kernel-5-3-4-and-raid0-default_layout/

Hmm alright, nothing new here.  Oh well!

Ludo’.




Information forwarded to bug-guix <at> gnu.org:
bug#38086; Package guix. (Fri, 17 Jan 2020 22:44:01 GMT) Full text and rfc822 format available.

Message #19 received at 38086 <at> debbugs.gnu.org (full text, mbox):

From: Vagrant Cascadian <vagrant <at> debian.org>
To: Ludovic Courtès <ludo <at> gnu.org>, Gábor
 Boskovits <boskovits <at> gmail.com>
Cc: 38086 <at> debbugs.gnu.org
Subject: Re: bug#38086: RAID installation script with ‘mdadm’ no longer works
Date: Fri, 17 Jan 2020 14:42:53 -0800
[Message part 1 (text/plain, inline)]
On 2019-11-12, Ludovic Courtès wrote:
> Gábor Boskovits <boskovits <at> gmail.com> skribis:
>
>>> + mdadm --create /dev/md0 --verbose --level=stripe --raid-devices=2
>>> /dev/vdb2 /dev/vdb3
>>> mdadm: chunk size defaults to 512K
>>> mdadm: Defaulting to version 1.2 metadata
>>> [   13.890586] md/raid0:md0: cannot assemble multi-zone RAID0 with
>>> default_layout setting
>>> [   13.894691] md/raid0: please set raid0.default_layout to 1 or 2
>>> [   13.896000] md: pers->run() failed ...
>>> mdadm: RUN_ARRAY failed: Unknown error 524
>>> [   13.901603] md: md0 stopped.
>>> --8<---------------cut here---------------end--------------->8---
>>>
>>> Anyone knows what it takes to “set raid0.default_layout to 1 or 2”?
>>>
>>
>> On kernel 5.3.4 and above the
>> raid0.default_layout=2 kernel boot parameter should be set. We should
>> generate our grub configuration accordingly.

So, this might be sort of a tangent, but I'm wondering why you're
testing raid0 (striping, for performance+capacity at risk of data loss)
instead of raid1 (mirroring, for redundancy, fast reads, slow writes,
half capacity of storage), or another raid level with more disks (raid5,
raid6, raid10). raid1 would be the simplest to switch the code to, since
it uses only two disks.


The issue triggering this bug might be a non-issue on other raid levels
that, in my mind, might make more sense for rootfs. Or maybe people have
use cases for rootfs on raid0 that I'm too uncreative to think of? :)
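
For comparison, a sketch of what the creation step would look like with
RAID1, reusing the two partitions and device names from the existing
test script:

--8<---------------cut here---------------start------------->8---
mdadm --create /dev/md0 --verbose --level=mirror --raid-devices=2 \
  /dev/vdb2 /dev/vdb3
mkfs.ext4 -L root-fs /dev/md0
--8<---------------cut here---------------end--------------->8---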


live well,
  vagrant
[signature.asc (application/pgp-signature, inline)]

Information forwarded to bug-guix <at> gnu.org:
bug#38086; Package guix. (Sat, 18 Jan 2020 13:31:02 GMT) Full text and rfc822 format available.

Message #22 received at 38086 <at> debbugs.gnu.org (full text, mbox):

From: Gábor Boskovits <boskovits <at> gmail.com>
To: Vagrant Cascadian <vagrant <at> debian.org>
Cc: Ludovic Courtès <ludo <at> gnu.org>, 38086 <at> debbugs.gnu.org
Subject: Re: bug#38086: RAID installation script with ‘mdadm’ no longer works
Date: Sat, 18 Jan 2020 14:29:56 +0100
[Message part 1 (text/plain, inline)]
Vagrant Cascadian <vagrant <at> debian.org> wrote (on Fri, Jan 17, 2020 at 23:42):

> On 2019-11-12, Ludovic Courtès wrote:
> > Gábor Boskovits <boskovits <at> gmail.com> skribis:
> >
> >>> + mdadm --create /dev/md0 --verbose --level=stripe --raid-devices=2
> >>> /dev/vdb2 /dev/vdb3
> >>> mdadm: chunk size defaults to 512K
> >>> mdadm: Defaulting to version 1.2 metadata
> >>> [   13.890586] md/raid0:md0: cannot assemble multi-zone RAID0 with
> >>> default_layout setting
> >>> [   13.894691] md/raid0: please set raid0.default_layout to 1 or 2
> >>> [   13.896000] md: pers->run() failed ...
> >>> mdadm: RUN_ARRAY failed: Unknown error 524
> >>> [   13.901603] md: md0 stopped.
> >>> --8<---------------cut here---------------end--------------->8---
> >>>
> >>> Anyone knows what it takes to “set raid0.default_layout to 1 or 2”?
> >>>
> >>
> >> On kernel 5.3.4 and above the
> >> raid0.default_layout=2 kernel boot parameter should be set. We should
> >> generate our grub configuration accordingly.
>
> So, this might be sort of a tangent, but I'm wondering why you're
> testing raid0 (striping, for performance+capacity at risk of data loss)
> instead of raid1 (mirroring, for redundancy, fast reads, slow writes,
> half capacity of storage), or another raid level with more disks (raid5,
> raid6, raid10). raid1 would be the simplest to switch the code to, since
> it uses only two disks.
>
>
> The issue triggering this bug might be a non-issue on other raid levels
> that in my mind might make more sense for rootfs. Or maybe people have
> use cases for rootfs on raid0 that I'm too uncreative to think of? :)
>

I often see raid 10 as root. I believe it might make sense to test that
setup.
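
A sketch of the corresponding creation step for RAID10, here with four
member devices (/dev/vdb4 and /dev/vdb5 are hypothetical extra
partitions, not part of the current test layout):

--8<---------------cut here---------------start------------->8---
mdadm --create /dev/md0 --verbose --level=10 --raid-devices=4 \
  /dev/vdb2 /dev/vdb3 /dev/vdb4 /dev/vdb5
--8<---------------cut here---------------end--------------->8---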

>
>
> live well,
>   vagrant
>
[Message part 2 (text/html, inline)]

Information forwarded to bug-guix <at> gnu.org:
bug#38086; Package guix. (Sat, 18 Jan 2020 21:47:02 GMT) Full text and rfc822 format available.

Message #25 received at 38086 <at> debbugs.gnu.org (full text, mbox):

From: Ludovic Courtès <ludo <at> gnu.org>
To: Vagrant Cascadian <vagrant <at> debian.org>
Cc: Gábor Boskovits <boskovits <at> gmail.com>,
 38086 <at> debbugs.gnu.org
Subject: Re: bug#38086: RAID installation script with ‘mdadm’ no longer works
Date: Sat, 18 Jan 2020 22:46:48 +0100
[Message part 1 (text/plain, inline)]
Hi!

Vagrant Cascadian <vagrant <at> debian.org> skribis:

> So, this might be sort of a tangent, but I'm wondering why you're
> testing raid0 (striping, for performance+capacity at risk of data loss)
> instead of raid1 (mirroring, for redundancy, fast reads, slow writes,
> half capacity of storage), or another raid level with more disks (raid5,
> raid6, raid10). raid1 would be the simplest to switch the code to, since
> it uses only two disks.

Good point!  I guess it would make sense to test RAID1, indeed.

I gave it a shot with the patch below.  Problem is that installation
seemingly hangs here:

--8<---------------cut here---------------start------------->8---
+ parted --script /dev/vdb mklabel gpt mkpart primary ext2 1M 3M mkpart primary ext2 3M 1.4G mkpart primary ext2 1.4G 2.8G set 1 boot on set 1 bios_grub on
+ mdadm --create /dev/md0 --verbose --level=mirror --raid-devices=2 /dev/vdb2 /dev/vdb3
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 1361920K
mdadm: largest drive (/dev/vdb3) exceeds size (1361920K) by more than 1%
--8<---------------cut here---------------end--------------->8---

As you can see, it’s attempting to make a RAID1 device out of two
partitions (not two disks), which makes no sense in the real world, but
is easier to handle here.  So I wonder if this is what’s causing it to
hang…

Thoughts?

Ludo’.

[Message part 2 (text/x-patch, inline)]
diff --git a/gnu/tests/install.scm b/gnu/tests/install.scm
index 8842d48df8..12e6eb26df 100644
--- a/gnu/tests/install.scm
+++ b/gnu/tests/install.scm
@@ -546,8 +546,8 @@ where /gnu lives on a separate partition.")
                  (target "/dev/vdb")))
     (kernel-arguments '("console=ttyS0"))
 
-    ;; Add a kernel module for RAID-0 (aka. "stripe").
-    (initrd-modules (cons "raid0" %base-initrd-modules))
+    ;; Add a kernel module for RAID-1 (aka. "mirror").
+    (initrd-modules (cons "raid1" %base-initrd-modules))
 
     (mapped-devices (list (mapped-device
                            (source (list "/dev/vda2" "/dev/vda3"))
@@ -578,11 +578,11 @@ guix --version
 export GUIX_BUILD_OPTIONS=--no-grafts
 parted --script /dev/vdb mklabel gpt \\
   mkpart primary ext2 1M 3M \\
-  mkpart primary ext2 3M 600M \\
-  mkpart primary ext2 600M 1200M \\
+  mkpart primary ext2 3M 1.4G \\
+  mkpart primary ext2 1.4G 2.8G \\
   set 1 boot on \\
   set 1 bios_grub on
-mdadm --create /dev/md0 --verbose --level=stripe --raid-devices=2 \\
+mdadm --create /dev/md0 --verbose --level=mirror --raid-devices=2 \\
   /dev/vdb2 /dev/vdb3
 mkfs.ext4 -L root-fs /dev/md0
 mount /dev/md0 /mnt
@@ -605,7 +605,7 @@ by 'mdadm'.")
                                                %raid-root-os-source
                                                #:script
                                                %raid-root-installation-script
-                                               #:target-size (* 1300 MiB)))
+                                               #:target-size (* 2800 MiB)))
                          (command (qemu-command/writable-image image)))
       (run-basic-test %raid-root-os
                       `(,@command) "raid-root-os")))))

Information forwarded to bug-guix <at> gnu.org:
bug#38086; Package guix. (Sat, 18 Jan 2020 22:04:01 GMT) Full text and rfc822 format available.

Message #28 received at 38086 <at> debbugs.gnu.org (full text, mbox):

From: Tobias Geerinckx-Rice <me <at> tobias.gr>
To: Ludovic Courtès <ludo <at> gnu.org>
Cc: Vagrant Cascadian <vagrant <at> debian.org>, 38086 <at> debbugs.gnu.org
Subject: Re: bug#38086: RAID installation script with ‘mdadm’ no longer works
Date: Sat, 18 Jan 2020 23:03:13 +0100
[Message part 1 (text/plain, inline)]
Ludovic Courtès wrote:
> As you can see, it’s attempting to make a RAID1 device out of two
> partitions (not two disks), which makes no sense in the real world, but
> is easier to handle here.  So I wonder if this is what’s causing it to
> hang…

It's just waiting for input:

 $ # dd & losetup magic, where loop0 is 20% larger than loop1
 $ sudo mdadm --create /dev/md0 --verbose --level=mirror --raid-devices=2 /dev/loop{0,1}
 mdadm: Note: this array has metadata at the start and
   may not be suitable as a boot device.  If you plan to
   store '/boot' on this device please ensure that
   your boot-loader understands md/v1.x metadata, or use
   --metadata=0.90
 mdadm: size set to 101376K
 mdadm: largest drive (/dev/loop1) exceeds size (101376K) by more than 1%
 Continue creating array?

Adding --force does not avoid this.

I recommend tweaking the partition table to make both members equal,
but a ‘yes|’ also works if you're in a hurry ;-)
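
In the installation script, the non-interactive form could look
something like this (same command as in the test, with ‘yes’ feeding
the confirmation prompt):

--8<---------------cut here---------------start------------->8---
yes | mdadm --create /dev/md0 --verbose --level=mirror --raid-devices=2 \
  /dev/vdb2 /dev/vdb3
--8<---------------cut here---------------end--------------->8---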

Kind regards,

T G-R
[signature.asc (application/pgp-signature, inline)]

Reply sent to Ludovic Courtès <ludo <at> gnu.org>:
You have taken responsibility. (Sun, 19 Jan 2020 22:14:02 GMT) Full text and rfc822 format available.

Notification sent to Ludovic Courtès <ludo <at> gnu.org>:
bug acknowledged by developer. (Sun, 19 Jan 2020 22:14:02 GMT) Full text and rfc822 format available.

Message #33 received at 38086-done <at> debbugs.gnu.org (full text, mbox):

From: Ludovic Courtès <ludo <at> gnu.org>
To: Tobias Geerinckx-Rice <me <at> tobias.gr>
Cc: Vagrant Cascadian <vagrant <at> debian.org>, 38086-done <at> debbugs.gnu.org
Subject: Re: bug#38086: RAID installation script with ‘mdadm’ no longer works
Date: Sun, 19 Jan 2020 23:13:32 +0100
Hi Tobias!

Tobias Geerinckx-Rice <me <at> tobias.gr> skribis:

> It's just waiting for input:
>
>  $ # dd & losetup magic, where loop0 is 20% larger than loop1
>  $ sudo mdadm --create /dev/md0 --verbose --level=mirror --raid-devices=2 /dev/loop{0,1}
>  mdadm: Note: this array has metadata at the start and
>    may not be suitable as a boot device.  If you plan to
>    store '/boot' on this device please ensure that
>    your boot-loader understands md/v1.x metadata, or use
>    --metadata=0.90
>  mdadm: size set to 101376K
>  mdadm: largest drive (/dev/loop1) exceeds size (101376K) by more than 1%
>  Continue creating array?

D’oh, I hadn’t seen that message.

> Adding --force does not avoid this.
>
> I recommend tweaking the partition table to make both members equal,
> but a ‘yes|’ also works if you're in a hurry ;-)

“yes|” works like a charm, I went with that.

Pushed in commit 3adf320e44e54017a67f219ce9667a379c393dad, thank you!

Ludo’.




Information forwarded to bug-guix <at> gnu.org:
bug#38086; Package guix. (Sun, 19 Jan 2020 22:32:02 GMT) Full text and rfc822 format available.

Message #36 received at 38086 <at> debbugs.gnu.org (full text, mbox):

From: Tobias Geerinckx-Rice <me <at> tobias.gr>
To: Ludovic Courtès <ludo <at> gnu.org>
Cc: 38086 <at> debbugs.gnu.org
Subject: Re: bug#38086: RAID installation script with ‘mdadm’ no longer works
Date: Sun, 19 Jan 2020 23:31:41 +0100
[Message part 1 (text/plain, inline)]
Ludo',

Ludovic Courtès wrote:
>>  Continue creating array?
>
> D’oh, I hadn’t seen that message.

I doubt it was there for you to see in Guix's output.  Things not ending
with a newline tend to get lost easily due to line-buffering.  Probably
not worth worrying about.

> “yes|” works like a charm, I went with that.

‘Beautiful’,

T G-R
[signature.asc (application/pgp-signature, inline)]

bug archived. Request was from Debbugs Internal Request <help-debbugs <at> gnu.org> to internal_control <at> debbugs.gnu.org. (Mon, 17 Feb 2020 12:24:06 GMT) Full text and rfc822 format available.

