
Installing RocketRaid 2760a drivers on Ubuntu 12.10

Posted November 8th, 2012 (Updated December 30th, 2012) in Linux

So I don’t forget, here’s a tutorial on installing the RocketRaid 2760a drivers and management utilities on Ubuntu 12.10. I’m using 64-bit but 32-bit should be the same – just substitute where appropriate.

Installing the Driver

The installation disc comes with an out-of-date version of the driver and kernel module. The way the installation works, we need to download and compile the kernel module, then download the driver installer, drop the module into place, and run the installer. Make sure you have the appropriate compiler tools installed – to be safe, just run:

sudo apt-get install ubuntu-dev-tools
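
If you want to be sure the compiler and matching kernel headers are actually in place before building (the module build needs both), here's a quick sanity check:

# confirm toolchain and headers for the running kernel are present
gcc --version
make --version
ls /usr/src/linux-headers-$(uname -r)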

Download and extract the driver and the kernel module source with kernel 3.x support from the HighPoint website:

mkdir ~/driver
cd ~/driver
# driver
wget http://www.highpoint-tech.com/BIOS_Driver/RR276x/linux/Ubuntu/rr276x-ubuntu-11.10-x86_64-v1.1.12.0502.tgz
tar -xvzf rr276x-ubuntu-11.10-x86_64-v1.1.12.0502.tgz
# kernel module
mkdir module
cd module
wget http://www.highpoint-tech.com/BIOS_Driver/RR276x/linux/RR276x-Linux-Src-v1.1-120424-1734.tar.gz
tar -xvzf RR276x-Linux-Src-v1.1-120424-1734.tar.gz

Build and compress the kernel module for our driver installer:

cd rr276x-linux-src-v1.1/product/rr276x/linux/
make
# Ignore the warning about not being able to find him_rr2760x.o – it doesn't seem to matter
gzip rr276x.ko
mv rr276x.ko.gz ~/driver/boot/rr276x$(uname -r)$(uname -m).ko.gz
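
Before running the installer, it's worth confirming the compressed module landed with the kernel version and architecture baked into its filename the way the installer expects:

# on 3.5.0-17-generic x86_64 this should list rr276x3.5.0-17-genericx86_64.ko.gz
ls -l ~/driver/boot/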

And finally, install the driver:

cd ~/driver
sudo bash preinst.sh

which should output:

This step succeeded! Now you can press ALT+F1 to switch back to the installation screen!

Then run the installer itself:

sudo bash install.sh

which should end with:

Update initrd file /boot/initrd.img-3.5.0-17-generic for 3.5.0-17-generic
Please reboot the system to use the new driver module.

sudo shutdown -r now

That should be everything! You can now test that the driver is working with:

cat /proc/scsi/rr276x/*
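
If you want a little more assurance than the /proc entry, you can also check that the module is loaded and see what the kernel logged when it initialised:

# confirm the rr276x module is loaded and check its kernel messages
lsmod | grep rr276x
dmesg | grep -i rr276x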


Installing the RAID Management Software

Annoyingly, the RAID management console is difficult to install, to say the least. The GUI deb packages error out during installation, HighPoint doesn't provide a deb for the command line version at all, and to top it off the version of the command line utility on the driver CD is newer than the version on their site! For this reason I've provided the newer command line RPMs here.

Let’s get this thing installed.

Web version:

mkdir ~/driver/utility
mkdir ~/driver/utility/console
mkdir ~/driver/utility/web
cd ~/driver/utility/web
echo "rr276x" | sudo tee -a /etc/hptcfg > /dev/null
wget http://www.highpoint-tech.com/BIOS_Driver/GUI/linux/WebGui/WebGUI-Linux-v2.1-120419.tgz
tar -xzvf WebGUI-Linux-v2.1-120419.tgz
sudo apt-get -y install alien
sudo alien -d hptsvr-https-2.1-12.0419.$(uname -m).rpm
# alien bumps the release number by one during conversion, hence 13.0419 below
sudo dpkg -i hptsvr-https_2.1-13.0419_amd64.deb
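
To double-check the conversion and install went through:

# verify the converted package actually installed and the binary is on the path
dpkg -l | grep hptsvr
which hptsvr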


Command Line version:

cd ~/driver/utility/console
wget http://www.flynsarmy.com/wp-content/uploads/2012/11/LinuxRaidUtilityConsole.tar.gz
tar -xzvf LinuxRaidUtilityConsole.tar.gz
sudo alien -d hptraidconf-3.5-1.$(uname -m).rpm
sudo alien -d hptsvr-3.13-7.$(uname -m).rpm
# again, alien bumps the release number, hence 3.5-2 below
sudo dpkg -i hptraidconf_3.5-2_amd64.deb
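
Once that's installed you should be able to log in from the terminal with the same default credentials as the web GUI (RAID/hpt – see below). The -u/-p flags here are an assumption on my part, so check hptraidconf's own help output if they don't work:

# log in with the default credentials (flags assumed – verify against the utility's help)
sudo hptraidconf -u RAID -p hpt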

Run the web server:

sudo hptsvr

You should now be able to connect to it from your browser by navigating to http://localhost:7402 with username RAID and password hpt.
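
If the page doesn't load, it's worth confirming the service is actually running and listening on that port (7402 assumed from above):

# check hptsvr is up and responding
sudo netstat -tlnp | grep hptsvr
wget -qO- http://localhost:7402 | head -n 5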


Further Reading

HOWTO: Get GUI hptsvr & hptraid 3.13 working on Ubuntu
HighPoint driver download page for RocketRaid 2760a

  • Bryn

    Did you ever try using the card with just the Linux native mvsas driver, instead of Highpoint’s driver?

    From my research it appears that the Marvell SAS controller chips on this card are just straight HBAs. The only other chip is a PCI-Express switch. There isn’t an XOR engine for RAID calculations or anything on the card itself, so the Highpoint driver is just doing software RAID5 anyhow.

    From my perspective a safer approach would be to use the mvsas driver which just treats the card as an HBA and presents the individual drives to the OS. Then use Linux native RAID instead – that way if the card ever fails or whatever you aren’t locked in to whatever format HighPoint has used.
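
    For example (a rough sketch only – the device names below are made up), assembling the raw disks the card exposes into a native RAID5 array would just be:

    # sketch: 4 hypothetical drives assembled with Linux MD RAID5
    sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
    # watch the initial sync
    cat /proc/mdstat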

    I have one of these cards en route – will be interested to see if my suspicions are correct.

    • Flynsarmy

      It’d be great if you wrote a tutorial or blog post with your findings. I’m by no means knowledgeable enough in either the software or hardware aspects to answer your question – this is actually my first attempt at a RAID machine. In fact I haven’t even got RAID working yet, as I was planning on doing the same as you and using Linux RAID so if the card dies I’m not completely screwed :)

      One other thing to note about the above tutorial is that for some reason the LED Flash feature doesn’t seem to work. It was one of the features I was most looking forward to using as it’d help me locate exactly which drive was which. Very frustrating.

      • Bryn

        Will do!! I was planning on documenting my build anyhow. I’ll post back once I get it done – still waiting for a few bits and pieces to arrive.

        On my side I’m going to be running ZFS on Linux (ZoL) rather than MD-RAID but I’ve done plenty of MD-RAID stuff in the past. I probably have enough disks around to at least do a couple of speed comparisons between the two – it would be a decent build validation stage either way.

  • Bryn

    Started my system build yesterday… I am running Ubuntu 12.04 server with the backported kernel from 12.10, so same kernel version as you.

    I didn’t even bother with the Highpoint drivers and utilities. I haven’t configured anything in the Highpoint BIOS itself either – just plug and go. All the drives appeared fine: by default Ubuntu talks to the actual Marvell SAS controllers on the Highpoint board using the ‘mvsas’ driver, so there’s no need for the Highpoint stuff at all.
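
    If you want to double-check the same thing on your box, a couple of quick checks with stock tools:

    # confirm the in-kernel mvsas driver bound to the card and the drives enumerated
    lsmod | grep mvsas
    dmesg | grep -i mvsas
    ls -l /dev/disk/by-id/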

    I’m shuffling data around between disks right now and I’m getting very good speeds – over 200MB/second moving from a 5 disk raidz2 array to a 6 disk raidz2 array. The “source” array is the bottleneck as it will only have the performance of 3 drives. My drives are all 2TB “green” disks so ~65MB/second read per disk is totally in line with expectations. Meanwhile I have a badblocks check running on a 2TB WD Red drive which is writing away at about 120MB/sec, no issues.

    Last night I had a copy going from a 4-disk raidz array to a 3-disk striped array (just moving some data out of the way) – I was getting around 150MB/second on that while I was running a scrub on the 5-disk raidz2 array (at about 160MB/sec) and badblocks scans on three 2TB WD Red drives. The badblocks scans dropped to around 80MB/sec each while all the rest of that was going on, which puts the total throughput on the card at:

    - 240 MB/sec for the badblocks scans
    - 160 MB/sec for the scrub
    - 150 MB/sec reading from the raidz array
    - 150 MB/sec writing to the striped array

    …so about 700MB/sec of data slinging around. CPU was pretty much at 100% (I have a 2.13GHz Core2 Duo from about 2008). A faster CPU would probably increase the throughput but I’m really not sure what the actual limitations of my mainboard are. It is an Asus workstation board also from about 2008.

    I have an 8×1.5TB WD Green array that was built using the Linux built in RAID (MD) which I will probably start copying data off of tomorrow. It is a RAID6 array.

    Conclusion? Absolutely no need to use the Highpoint drivers or utilities. Don’t even bother with them at all. There is no hardware acceleration for RAID in the 2760A anyhow, so all you are getting is a Highpoint-only toolset for creating arrays that can then only be used on other Highpoint cards. Performance won’t be any better (and in fact may well be worse) using the Highpoint stuff, and you lose a ton of flexibility.

    My preferred way of storing data with this card is ZFS on Linux:

    http://zfsonlinux.org/
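
    As a rough sketch (the pool name and device names below are just placeholders), a 6-disk raidz2 pool like the one I mentioned above is a one-liner:

    # sketch: 'tank' and the sd* names are placeholders; use /dev/disk/by-id paths for a real build
    sudo zpool create tank raidz2 sdb sdc sdd sde sdf sdg
    sudo zpool status tank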

    It has been working very very well for me. I’ve run ZFS on Linux for about a year now. While I did have some crashing issues earlier on, it has done very well dealing with some flaky hardware I was running previously. My old setup used a bunch of eSATA enclosures with SATA port multipliers. Terrible idea. A single drive acting up causes absolute havoc. ZFS dealt with it just fine though. I did have some cases where the array got knocked offline (after hardware issues) and the machine needed a reboot but I didn’t ever lose any data or suffer through any excruciatingly long rebuilds or anything.

    If you don’t want the added complexity of ZFS (it is under a different open source licence so it is not part of the kernel unless you install it), Linux MD + LVM are a totally fine way to go. In fact there are some things you can do with Linux MD (like adding individual disks to existing arrays) that you can’t do otherwise. I have run Linux MD arrays for about a decade and they’ve always done me well.
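
    For example, growing an existing array onto one extra disk (sketch only – names made up) is just:

    # add a fifth disk to a 4-disk MD array, then grow the array onto it
    sudo mdadm --add /dev/md0 /dev/sdf
    sudo mdadm --grow /dev/md0 --raid-devices=5
    # watch the reshape progress
    cat /proc/mdstat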

    • Flynsarmy

      Thanks for the great writeup, Bryn. Interesting stuff!