June 22, 2024

Antminer S9 board - NAND memory -- read using U-Boot

I pulled out board "D" from my box of Antminer S9 boards. This board is "factory fresh", i.e. I have done nothing with it since buying it from AliExpress.

I connect a serial console, network and power. I run picocom, apply power, then type something to interrupt the boot and get the U-Boot prompt:

Hit any key to stop autoboot:  0
zynq-uboot>
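
For reference, the picocom invocation is something like this -- the device name is wherever my USB serial adapter shows up, and 115200 is the usual baud rate for a Zynq console:

picocom -b 115200 /dev/ttyUSB0
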
My basic idea is to read NAND into a memory buffer (in DDR ram), then use tftp commands to write that buffer to my linux desktop.

The NAND chip is a Micron 29F2G08ABAEA. This is a 2Gbit NAND flash organized as 256M by 8, so the size is 268435456 bytes (0x1000_0000).
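
Checking the arithmetic is a one line job in the shell:

printf "%d\n" 0x10000000
268435456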

We have 512M of ram (from 0 to 0x1fff_ffff). It is address aliased to 0x2000_0000. The Zynq allows for 1G of ram (from 0 to 0x3fff_ffff), but if we have less, it gets "duplicated" (address aliased) to higher addresses. I have only one board with a full 1G of ram.
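
A quick way to see the aliasing from U-Boot would be to write a word at a low address and read it back through the alias -- a sketch, not something I bothered to do here:

mw.l 0x100 0x12345678
md.l 0x20000100 1
20000100: 12345678    xV4.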

The U-Boot bdinfo command shows the U-Boot reloc address to be 0x1eff_0000 and the stack at 0x1eb3_ff28 -- so U-Boot has placed itself at the top of memory and we should be free to use lower addresses.
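
The lines of interest in the bdinfo output look something like this (abbreviated, and the field names vary from one U-Boot version to another):

zynq-uboot> bdinfo
...
relocaddr   = 0x1eff0000
sp          = 0x1eb3ff28
...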

As an experiment I do this:

mw.l 0 0xdeadbeef 0x4000000
And indeed, this fills memory from 0 to 0x0fff_ffff (0x400_0000 longwords of 4 bytes each) -- and U-Boot keeps running.
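
One can spot check with md; at the top of that range you would expect to see:

zynq-uboot> md.l 0xffffff0 4
0ffffff0: deadbeef deadbeef deadbeef deadbeef    ................
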
Next we do this:
zynq-uboot> nand read 0 0
NAND read: device 0 whole chip
size adjusted to 0xff80000 (4 bad blocks)
 267911168 bytes read: OK
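
The read was adjusted because of 4 bad blocks. U-Boot will list where they are via the "nand bad" command; I show the shape of the output here, not the actual addresses:

zynq-uboot> nand bad

Device 0 bad blocks:
  ...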

Network stuff

I already have a tftp server running on my linux machine, but it seems unwilling to do more than offer files. I must add a "-c" option to the tftp server to allow it to create files. I need to edit "/usr/lib/systemd/system/tftp.service".
Then I give these commands:
cd /usr/lib/systemd/system
vi tftp.service
--- ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot
systemctl daemon-reload
systemctl restart tftp
cd /var/lib
chmod 777 tftpboot
I did need to open up permissions on my tftpboot directory. The reason becomes apparent once I get a transfer to work: the file gets written owned by "nobody".
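
Note that the -c setup can be sanity checked right on the linux machine before involving the board at all; the tftp-hpa client does one-shot transfers with its own -c option:

echo hello >testfile
tftp 192.168.0.5 -c put testfile
ls -l /var/lib/tftpboot/testfile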

I try sending a short portion of what I really want to send:

set ipaddr 192.168.0.144
tftpput 0 512 192.168.0.5:nand
It turns out that 512 gets interpreted as hex -- U-Boot treats numbers as hex by default -- so this is 1298 bytes:
TFTP to server 192.168.0.5; our IP address is 192.168.0.144
Filename 'nand'.
Save address: 0x0
Save size:    0x512
Saving: *
	 421.9 KiB/s
done
Bytes transferred = 1298 (512 hex)
The file shows up as follows:
-rw-r--r-- 1 nobody nobody    1298 Jun 22 11:20 nand
Now for the real thing:
tftpput 0 10000000 192.168.0.5:nand
It tries over and over, giving the message:
Retry count exceeded; starting again

Smaller pieces

Let's see if we can send 64M rather than 256M:
tftpput 0 4000000 192.168.0.5:nand01
tftpput 0x4000000 4000000 192.168.0.5:nand02
tftpput 0x8000000 4000000 192.168.0.5:nand03
tftpput 0xc000000 4000000 192.168.0.5:nand04
I send each segment twice and check for a match. Segment 1 is fine, but segment 2 fails. I send segment 2 a third time -- so I have copies A, B, and C. B and C match, so I call that good and reject A. Segments 3 and 4 match when I send them twice, so I call them good.
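
Checking two copies for a match is quick on the linux side; with made-up names for the repeated transfers:

cmp nand02.A nand02.B && echo match
md5sum nand02.*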

Sending 64M at a time works great; what about 128M? That also fails with the retry count business.

Then I put the 4 pieces together with:

cat nand01 nand02 nand03 nand04 >nand
ls -l nand*
-rw-r--r-- 1 tom    tom    268435456 Jun 22 16:17 nand
-rw-r--r-- 1 nobody nobody  67108864 Jun 22 11:51 nand01
-rw-r--r-- 1 nobody nobody  67108864 Jun 22 16:09 nand02
-rw-r--r-- 1 nobody nobody  67108864 Jun 22 11:55 nand03
-rw-r--r-- 1 nobody nobody  67108864 Jun 22 11:55 nand04
After some study, I learn that the NAND is partitioned into 3 pieces. The second two are linux filesystems, not of particular interest right now (and more easily examined by booting linux). The first partition is 32M in size and is dedicated to U-Boot and the FSBL and such, so I do this:
dd if=nand of=nand.bin bs=1M count=32
rm nand
This saves some disk space on my linux system, but perhaps more importantly, when I make a hex dump of the entire file, my editor starts up much more quickly on the 134M text file that results.
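
For the record, the hex dump is along these lines (xxd is one choice, any hex dump tool works):

xxd nand.bin >nand.hex

xxd produces roughly 68 characters of text for every 16 bytes of input, which is how a 32M binary turns into a text file in the 134M neighborhood.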