I have reviewed the forum threads regarding RAID on x86, but I'd like an update. I'm looking for a simple way to implement RAID 1 in hardware, using SSD drives. I'd prefer not to hack in a full-size PCIe card. It seems like dual m.2 SSD drives would be ideal, but I'm not seeing any off-the-shelf solutions. I also have a limit of 120GB for the drive (which is still very large for the application). (I'm still working on getting a version control system in place, but once I do I should be able to be more public about the project. The RAID system is part of getting a VCS like Fossil up and running.)
Has anyone used a dual SSD m.2 enclosure? https://www.amazon.com/Dual-M-2-SAT...7532568&sr=1-3&keywords=DUal+m.2+SATA+Adapter
An ASM1061-based (RAID) solution would be okay-ish. I am still searching. There is a Taiwanese manufacturer making specialty NGFF cards that should include an M.2 RAID card with multiple SATA cables leaving the board. The price can be high ($50 - $100), if memory serves.
Found it (the hard part is getting one, though): EGSS-32R1 https://www.innodisk.com/en/products/embedded-peripheral/disk-array/EGSS-32R1
That could be a good option. I went with https://www.amazon.com/gp/product/B018AOZ9QM/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1 and two https://www.amazon.com/gp/product/B07C3YMVBL/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1 for a RAID 1 solution ($135 USD), using CN 18 (SATA). I will post how it goes. What would be nice would be a self-contained m.2 solution with two (removable) SSD cards.
It took a long time before I was able to implement this. The SSD I was using with the Udoo SBC was giving me error messages, and I wanted to add better error handling to the driver, but ultimately the drive failed and I had to replace it with the RAID 1 system above. I can state that it does work in theory, although practice is showing some issues.

I have found that on a safe shutdown I have to wait several seconds before I physically power off the Udoo, or the shutdown is not properly recorded. On the next boot a loss of synchronization is detected and the backup overwrites the most recent image. I've lost data this way.

The odder thing is that I am seeing an increase in file system errors. I am not entirely certain it is the RAID 1 system, but these errors occur after crashes/hard reboots (which are fairly common given the experimental nature of what I am doing, i.e., unsupported OS, unsupported toolchain, etc.). As far as I can tell, the file system map is getting incorrect information about the status of blocks and thinks they are available when they are not. The result is file contents being written into other files. Occasionally I get the reverse, where a file is suddenly filled with zeros. That suggests an error in the file's block chain, which could come from the RAID 1 system. Has anyone here used RAID 1 and experienced any unusual file system errors?
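For the shutdown timing, my working theory is that the controller (and the drives behind it) still holds writes in a volatile cache when I cut power, so the clean-shutdown state never makes it to the flash. Below is a minimal sketch of the barrier I am adding to my shutdown path. Treat the names as hypothetical: ata_exec() is a stub standing in for however the driver actually issues a non-data ATA command and waits for BSY to clear, and whether the ASMedia firmware forwards a flush to both members of the mirror is an assumption on my part. The opcodes themselves (FLUSH CACHE = 0xE7, STANDBY IMMEDIATE = 0xE0) are standard ATA.

```c
#include <stdint.h>
#include <stdio.h>

#define ATA_CMD_FLUSH_CACHE       0xE7 /* flush the drive's volatile write cache */
#define ATA_CMD_STANDBY_IMMEDIATE 0xE0 /* park the drive; safe to remove power after */

/* Placeholder for the driver's "issue a non-data ATA command and wait
 * for BSY to clear" primitive; the real signature will differ. */
static int ata_exec(uint8_t cmd, uint32_t timeout_ms)
{
    printf("ATA cmd 0x%02X issued (timeout %u ms)\n", cmd, timeout_ms);
    return 0; /* stub: pretend the command completed */
}

/* Barrier to run after the last file system write and before power-off.
 * The hardware RAID presents the mirror as one logical drive, so the
 * flush is issued once and (assumption) the controller mirrors it. */
int raid1_shutdown_barrier(void)
{
    if (ata_exec(ATA_CMD_FLUSH_CACHE, 5000) != 0)
        return -1; /* flush failed or timed out: NOT safe to cut power */
    if (ata_exec(ATA_CMD_STANDBY_IMMEDIATE, 5000) != 0)
        return -1;
    return 0; /* only now cut power */
}

int main(void)
{
    return raid1_shutdown_barrier() ? 1 : 0;
}
```

The point is to replace "wait several seconds" with an actual completion handshake: power only comes off after the flush reports done.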
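For the block map corruption, before blaming the RAID card I am going to add an fsck-style cross-check that rebuilds block ownership from the file chains and compares it against the free map. The sketch below uses a toy FAT-like next-block table just so it is self-contained and runnable; my real on-disk structures obviously differ, so all the names and layout here are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

#define NBLOCKS   16
#define CHAIN_END 0xFFFFFFFFu

/* Toy FAT-like layout, stand-ins for the real on-disk structures. */
static uint32_t next_blk[NBLOCKS];   /* next_blk[b] = next block in chain */
static uint8_t  free_map[NBLOCKS];   /* 1 = allocator thinks block is free */

struct file { const char *name; uint32_t first; };

static int fs_crosscheck(const struct file *files, int nfiles)
{
    int owner[NBLOCKS];
    int errors = 0;
    for (int b = 0; b < NBLOCKS; b++) owner[b] = -1;

    for (int f = 0; f < nfiles; f++) {
        uint32_t b = files[f].first;
        uint32_t hops = 0;
        while (b != CHAIN_END) {
            if (b >= NBLOCKS || hops++ > NBLOCKS) {
                printf("ERR %s: chain runs off the map at block %u\n",
                       files[f].name, b);
                errors++;
                break;
            }
            if (owner[b] != -1) {  /* cross-link: two files share a block */
                printf("ERR block %u claimed by %s and %s\n",
                       b, files[owner[b]].name, files[f].name);
                errors++;
            }
            owner[b] = f;
            if (free_map[b]) {     /* in use, but allocator could hand it out again */
                printf("ERR block %u in use by %s but marked free\n",
                       b, files[f].name);
                errors++;
            }
            b = next_blk[b];
        }
    }
    return errors;
}

int main(void)
{
    /* Fabricate the failure I'm seeing: file B's chain wanders into
     * file A's blocks, and block 3 is in use yet still marked free. */
    for (int b = 0; b < NBLOCKS; b++) { next_blk[b] = CHAIN_END; free_map[b] = 1; }

    next_blk[2] = 3;                          /* A: 2 -> 3 */
    next_blk[5] = 3;                          /* B: 5 -> 3 (cross-link!) */
    free_map[2] = 0; free_map[5] = 0;         /* block 3 left marked free */

    struct file files[] = { { "A", 2 }, { "B", 5 } };
    printf("%d error(s) found\n", fs_crosscheck(files, 2));
    return 0;
}
```

If the cross-links only show up after a desync/resync event, that would point at the mirror rather than at my allocator.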