Making Your Own NAS With WiTi Board

August 19, 2015


If you are looking for a low-power, scalable mini server for NAS storage, the WiTi board can be an option. WiTi is powered by a MediaTek MT7621A dual-core processor @ 880MHz and comes with 4 Gigabit Ethernet LAN ports, 2 Gigabit WAN ports, 802.11b/g/n and 802.11ac WiFi with up to 4 external antennas, two SATA ports, and a USB 3.0 port. With these features, we can turn the WiTi into an extensible NAS (Network Attached Storage). This post shows you how to make it.

What Do We Need?

  • WiTi Board    x1
  • SATA Cable    x2
  • 3.5” 1TB HDD    x2
  • Ethernet Cable    x2
  • USB-2-Serial Cable    x1
  • 12V@4A Power Supply    x1
  • PC (Ubuntu 14.04)    x1

Getting the Source Code

WiTi runs the open source OpenWrt. The git repository is hosted on github.com, and we can download the source code with the following commands:

$mkdir -pv $your_own_place/WiTi
$cd  $your_own_place/WiTi
$git clone https://github.com/mqmaker/witi-openwrt.git openwrt

Customizing & Compiling

WiTi has no RAID card inside, so we need the md driver and the mdadm tool to support a software RAID array. One benefit of software RAID is that you will never have to worry about finding a replacement for a specific RAID card years later. It is quite simple to enable software RAID on the WiTi platform:

$cd openwrt
$make menuconfig

  • Add mdadm


  • Add md/dm-mod

We need to enable kmod-md-mod, kmod-dm and mdadm as shown above.
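
If you prefer to make these selections non-interactively, the same choices can be appended to the build's .config (a minimal sketch; the symbols follow OpenWrt's usual CONFIG_PACKAGE_<name> pattern, and kmod-md-raid1 is assumed to provide the RAID-1 personality):

$echo "CONFIG_PACKAGE_kmod-dm=y" >> .config
$echo "CONFIG_PACKAGE_kmod-md-mod=y" >> .config
$echo "CONFIG_PACKAGE_kmod-md-raid1=y" >> .config
$echo "CONFIG_PACKAGE_mdadm=y" >> .config
$make defconfig

make defconfig then expands the new symbols into a complete build configuration.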

After that, our customized OpenWrt image will support software RAID. Since the whole RAID array may be bigger than a single share needs, we can split it into separate logical volumes with the LVM2 tools. The following commands install LVM2, iostat/vmstat/top (via procps, dstat and sysstat), OpenSSH and Samba, all of which are useful for our NAS server:

$./scripts/feeds update -a
$./scripts/feeds install lvm2

$./scripts/feeds install procps procps-top dstat sysstat
$./scripts/feeds install openssh-server
$./scripts/feeds install luci-app-samba
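
These feed packages still have to be selected in the image configuration. As a shortcut, the selection can be scripted instead of ticking each entry in menuconfig (a sketch; the symbols simply mirror the package names installed above):

$echo "CONFIG_PACKAGE_lvm2=y" >> .config
$echo "CONFIG_PACKAGE_sysstat=y" >> .config
$echo "CONFIG_PACKAGE_openssh-server=y" >> .config
$echo "CONFIG_PACKAGE_luci-app-samba=y" >> .config
$make defconfig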

Either way, don't forget to enable those packages (in the ncurses GUI or via .config as above)! Now it's time to start the long compile:

 $make V=s -j4

Finally, we get our customized OpenWrt image!

$tree -L 1 bin/ramips/
bin/ramips/
├── md5sums
├── openwrt-ramips-mt7621-root.squashfs
├── openwrt-ramips-mt7621-uImage.bin
├── openwrt-ramips-mt7621-vmlinux.bin
├── openwrt-ramips-mt7621-vmlinux.elf
├── openwrt-ramips-mt7621-witi-squashfs-sysupgrade.bin
└── packages

Writing the Image to WiTi

We can write the image to the WiTi board via the TTL (USB-to-serial) cable. Here we update the firmware image via the web page instead:

  • Go to the System page

  • Choose the image to update

The page is quite straightforward, so it is easy to flash the new firmware. After the image is written, WiTi reboots automatically, and we can hack on the WiTi OS now!
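
If you would rather skip the browser, the sysupgrade image can also be flashed over SSH (a sketch, assuming the board is reachable at OpenWrt's default address 192.168.1.1 and a root password has already been set on the running firmware):

$scp bin/ramips/openwrt-ramips-mt7621-witi-squashfs-sysupgrade.bin root@192.168.1.1:/tmp/
$ssh root@192.168.1.1 sysupgrade -v /tmp/openwrt-ramips-mt7621-witi-squashfs-sysupgrade.bin

sysupgrade keeps the existing configuration by default; add -n if you want a clean start.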

 

We’ve taken a short video showing that the peak current is 3.57A@12V with two 3.5” HDDs when WiTi starts up.

Creating RAID Array

For SSH access to WiTi, we need to set a root password via the serial console (we can also set the root password from the web GUI):

#passwd root
#(change-your-root-passwd-here)

Now it's time to create a RAID-1 array across the two HDDs.
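
The mdadm command below assumes each disk already carries a single Linux partition (/dev/sda1 and /dev/sdb1). If you are not sure, check what the kernel sees and, if needed, create one partition per disk with fdisk or whichever partitioning tool your image includes:

#cat /proc/partitions
#fdisk -l /dev/sda /dev/sdb

With the partitions in place, the creation command and its output log follow.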

root@Witi:/# mdadm -Cv /dev/md0 -l1 -n2 /dev/sda1 /dev/sdb1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: /dev/sdb1 appears to contain an ext2fs file system
    size=972964864K  mtime=Sun Aug 16 16:04:16 2015
mdadm: size set to 52395904K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
[  223.224000] md: bind<sda1>
[  223.228000] md: bind<sdb1>
[  223.236000] md/raid1:md0: not clean -- starting background reconstruction
[  223.248000] md/raid1:md0: active with 2 out of 2 mirrors
[  223.260000] md0: detected capacity change from 0 to 53653405696
mdadm: array [  223.272000] md: resync of RAID array md0
/dev/md0 started[  223.284000] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[  223.296000] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
.
[  223.316000] md: using 128k window, over a total of 52395904k.
root@Witi:/# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[1] sda1[0]
      52395904 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  3.0% (1579264/52395904) finish=9.6min speed=87736K/sec
      
unused devices: <none>
root@Witi:/# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[1] sda1[0]
      52395904 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  3.8% (1993472/52395904) finish=9.6min speed=86672K/sec
      
unused devices: <none>
root@Witi:/# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[1] sda1[0]
      52395904 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  4.2% (2246912/52395904) finish=9.6min speed=86419K/sec
      
unused devices: <none>

We can see that the RAID array has been created and the resync IO speed is ~86MB/s. This is the performance without any IO tuning.
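
So that the array is assembled automatically after a reboot, it is common to record it in mdadm's configuration (a sketch; OpenWrt builds differ in where mdadm expects this, some read /etc/mdadm.conf while newer packages use a UCI file at /etc/config/mdadm):

#mdadm --detail --scan >> /etc/mdadm.conf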

Creating Logical Volume

Here we create a logical volume on the RAID array and make an ext4 filesystem on it.

root@Witi:/# lvm pvcreate /dev/md0
  Physical volume "/dev/md0" successfully created

root@Witi:/# lvm vgcreate myvg0 /dev/md0
  Volume group "myvg0" successfully created
root@Witi:/# lvm lvcreate -n lv0 -L30G myvg0
[  747.104000] bio: create slab <bio-1> at 1
  Logical volume "lv0" created
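
The lvs listing further down also shows a second 15G volume, lv1, which is not in the log captured above; presumably it was created the same way:

#lvm lvcreate -n lv1 -L15G myvg0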

Before making a filesystem on the volume, we run some dd tests:

#dd if=/dev/zero of=/dev/myvg0/lv0 bs=1M

The iostat output:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.17    0.00   47.87    1.00    0.00   50.96

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
mtdblock0         0.00         0.00         0.00          0          0
mtdblock1         0.00         0.00         0.00          0          0
mtdblock2         0.00         0.00         0.00          0          0
mtdblock3         0.00         0.00         0.00          0          0
mtdblock4         0.00         0.00         0.00          0          0
mtdblock5         0.00         0.00         0.00          0          0
mtdblock6         0.00         0.00         0.00          0          0
sda             135.67         0.00     69329.33          0     207988
sdb             134.00         0.00     68476.00          0     205428
md0           16384.00         0.00     65536.00          0     196608

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00   48.17    1.33    0.00   50.50

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
mtdblock0         0.00         0.00         0.00          0          0
mtdblock1         0.00         0.00         0.00          0          0
mtdblock2         0.00         0.00         0.00          0          0
mtdblock3         0.00         0.00         0.00          0          0
mtdblock4         0.00         0.00         0.00          0          0
mtdblock5         0.00         0.00         0.00          0          0
mtdblock6         0.00         0.00         0.00          0          0
sda             134.00         0.00     68608.00          0     205824
sdb             134.67         0.00     68949.33          0     206848
md0           17408.00         0.00     69632.00          0     208896

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00   48.25    1.16    0.00   50.58

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
mtdblock0         0.00         0.00         0.00          0          0
mtdblock1         0.00         0.00         0.00          0          0
mtdblock2         0.00         0.00         0.00          0          0
mtdblock3         0.00         0.00         0.00          0          0
mtdblock4         0.00         0.00         0.00          0          0
mtdblock5         0.00         0.00         0.00          0          0
mtdblock6         0.00         0.00         0.00          0          0
sda             136.33         0.00     69802.67          0     209408
sdb             132.33         0.00     67754.67          0     203264
md0           17066.67         0.00     68266.67          0     204800

And a read test; note in the output below that all reads are served from sda while sdb stays idle, since md RAID-1 reads a single sequential stream from one mirror, whereas writes (above) go to both disks:

#dd if=/dev/myvg0/lv0 of=/dev/null bs=1M

The iostat output:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.17    0.00   28.27    0.17    0.00   71.39

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
mtdblock0         0.00         0.00         0.00          0          0
mtdblock1         0.00         0.00         0.00          0          0
mtdblock2         0.00         0.00         0.00          0          0
mtdblock3         0.00         0.00         0.00          0          0
mtdblock4         0.00         0.00         0.00          0          0
mtdblock5         0.00         0.00         0.00          0          0
mtdblock6         0.00         0.00         0.00          0          0
sda             587.00     75136.00         0.00     225408          0
sdb               0.00         0.00         0.00          0          0
md0           18773.33     75093.33         0.00     225280          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00   28.39    0.08    0.00   71.52

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
mtdblock0         0.00         0.00         0.00          0          0
mtdblock1         0.00         0.00         0.00          0          0
mtdblock2         0.00         0.00         0.00          0          0
mtdblock3         0.00         0.00         0.00          0          0
mtdblock4         0.00         0.00         0.00          0          0
mtdblock5         0.00         0.00         0.00          0          0
mtdblock6         0.00         0.00         0.00          0          0
sda             591.33     75690.67         0.00     227072          0
sdb               0.00         0.00         0.00          0          0
md0           18922.67     75690.67         0.00     227072          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.25    0.00   28.75    0.08    0.00   70.92

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
mtdblock0         0.00         0.00         0.00          0          0
mtdblock1         0.00         0.00         0.00          0          0
mtdblock2         0.00         0.00         0.00          0          0
mtdblock3         0.00         0.00         0.00          0          0
mtdblock4         0.00         0.00         0.00          0          0
mtdblock5         0.00         0.00         0.00          0          0
mtdblock6         0.00         0.00         0.00          0          0
sda             590.00     75520.00         0.00     226560          0
sdb               0.00         0.00         0.00          0          0
md0           18880.00     75520.00         0.00     226560          0

Now we make an ext4 filesystem on the volume:

root@Witi:/# mkfs.ext4 /dev/myvg0/lv0
mke2fs 1.42.4 (12-June-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks

A quick check of the LVM state:

#lvm
lvm> lvs
  LV   VG    Attr       LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  lv0  myvg0 -wi-a----- 30.00g
  lv1  myvg0 -wi-a----- 15.00g
lvm> pvs
  PV         VG    Fmt  Attr PSize  PFree
  /dev/md0   myvg0 lvm2 a--  49.96g 4.96g
lvm> vgs
  VG    #PV #LV #SN Attr   VSize  VFree
  myvg0   1   2   0 wz--n- 49.96g 4.96g
lvm> quit
  Exiting.

Setting Up Samba Service

First we need to add the root user to Samba:

#smbpasswd -a root

and create a mount point for the volume and mount it:

#mkdir -p /mnt/lv0
#mount /dev/myvg0/lv0 /mnt/lv0
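
To bring the volume back automatically after a reboot, one option is OpenWrt's fstab configuration (a sketch, assuming the block-mount package is installed and that the LVM volume is activated before the mount runs; option names follow /etc/config/fstab):

#uci add fstab mount
#uci set fstab.@mount[-1].target='/mnt/lv0'
#uci set fstab.@mount[-1].device='/dev/myvg0/lv0'
#uci set fstab.@mount[-1].fstype='ext4'
#uci set fstab.@mount[-1].enabled='1'
#uci commit fstab
#/etc/init.d/fstab enable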

Now we add a shared folder via the web page (a command-line alternative is sketched after these steps):

  • Log in to the web page

  • Go to Network Shares

  • Add a new share

  • Don't forget to enable the root user
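
Equivalently, the share can be configured from the shell with UCI (a sketch; the option names follow the samba36/luci-app-samba packages of this era and may differ in newer releases):

#uci add samba sambashare
#uci set samba.@sambashare[-1].name='lv0'
#uci set samba.@sambashare[-1].path='/mnt/lv0'
#uci set samba.@sambashare[-1].users='root'
#uci set samba.@sambashare[-1].read_only='no'
#uci set samba.@sambashare[-1].guest_ok='no'
#uci commit samba
#/etc/init.d/samba restart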


Performance Test

  • Samba Write Test: roughly 16MB/s-20MB/s over Samba

  • Iperf test (wired Ethernet); the commands to reproduce it are sketched below
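
For reference, the wired iperf numbers can be reproduced with a plain TCP test (a sketch; 192.168.1.1 is assumed to be WiTi's LAN address, iperf is assumed to be installed on both ends, the server runs on WiTi and the client on the PC):

#iperf -s
$iperf -c 192.168.1.1 -t 30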


(to be continued)