Something people forget about RAID1 is that it only protects against catastrophic disk failure.
That means a drive has to be outright dead for RAID to do its job, and usually that is the case.
The problem is when a drive starts silently corrupting the data it reads or writes. RAID has no way to detect that, and it can even corrupt data on the healthy drive (data is read corrupted and then written back to both drives).
The issue is that there are two copies of the data and RAID has no way of telling which one is correct, so it basically flips a coin and picks one, even if the filesystem could tell that the content makes no sense.
That's basically the biggest advantage of filesystems like ZFS or btrfs that manage RAID themselves: they have checksums, so they know which copy is valid, can recover from the good one, and can tell you that a drive which appears healthy is actually corrupting data and should probably be replaced.
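To make that concrete, here is a minimal sketch of the idea (not how ZFS or btrfs actually lay out data on disk, and read_mirrored_block is a made-up helper): a checksum recorded at write time lets the reader reject the corrupted copy and recover from the good one, while plain RAID1 has nothing to compare against.

    import hashlib

    def read_mirrored_block(copies: list[bytes], expected_sha256: str) -> bytes:
        # Return the first copy whose checksum matches the one recorded at
        # write time. Plain RAID1 has no recorded checksum, so it just
        # returns whichever copy the array happens to read.
        for i, data in enumerate(copies):
            if hashlib.sha256(data).hexdigest() == expected_sha256:
                if i > 0:
                    print(f"copy {i} is good; earlier copies are corrupt, consider replacing that drive")
                return data
        raise IOError("all copies failed checksum verification")

    # One good copy, one silently corrupted copy (a single flipped character).
    good = b"chunk of file data"
    checksum = hashlib.sha256(good).hexdigest()  # recorded in metadata at write time
    corrupted = b"chunk of f1le data"
    assert read_mirrored_block([corrupted, good], checksum) == good

The real filesystems then also rewrite the bad copy from the good one; the point here is just that without a stored checksum there is nothing to break the tie between two disagreeing mirrors.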
I had that experience once, around 2011, when I hosted a Minecraft server on a box with RAID1.
The "cool" part was that I ran a cronjob that rendered the map to a PNG file once an hour, and at some point a friend asked why there were holes in the map.
Back then, Minecraft stored every 16x16 chunk of the map in an individual gzipped file. When the RAID1 happened to read a chunk from the bad drive, the file wouldn't unzip. If that happened to the renderer, there was a hole in the map. If it happened to the game server, it would regenerate the chunk and overwrite the old one on both drives, even the healthy one. Luckily, as far as I remember, that only ever happened on random terrain, otherwise someone would have ended up with half their house missing.
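Incidentally, that's the only reason the corruption was visible at all: the gzip format stores a CRC-32 of the uncompressed data in its trailer, so a flipped bit usually makes decompression fail loudly instead of silently returning garbage. A rough sketch of what both the renderer and the server effectively hit (the filename handling is made up, not Minecraft's actual chunk layout):

    import gzip
    import zlib

    def load_chunk(path: str) -> bytes | None:
        # Returns the decompressed chunk, or None if the file is corrupted.
        try:
            with gzip.open(path, "rb") as f:
                return f.read()  # gzip verifies the stored CRC-32 while reading
        except (OSError, EOFError, zlib.error) as e:
            # A bad read from the failing drive lands here: the renderer drew
            # a hole, and the game server regenerated the chunk and then
            # overwrote the good copy on both drives.
            print(f"chunk {path} failed to decompress: {e}")
            return None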
I was surprised when I found out my fancy RAID5 card was using DMA into main memory for all of its disk accesses instead of having its own memory. Just the dumbest design. A power issue fried the last memory stick in my machine, so it would come up clean and then report disk corruption under load. The disks were fine; the memory was broken. I rebuilt that fucking array three times before I ran a memory analyzer on the box.