RESET Forums (homeservershow.com)

Highpoint RR2680SGL and then 2720SGL issues


biddyman


Well, I’ve been lurking on here for a while and have often found the answer to my questions. However, I’ve tried things that I have read on here and what Highpoint has suggested. I’m going to try to write everything that I have done, but I know I will miss some things since I have been going at this since the end of March.

 

This is one of those times when you say to yourself, “Self, don’t mess with something that isn’t broke.”

 

I have (minus its original case now) an HP DC7600 mid-tower with WHS 2011 installed, a Thecus 5200 as a file server, 3 x 3TB Hitachi HDs (HDS5C3030ALA630), and 5 x 1TB Seagate HDs (ST31000528AS). I thought I should consolidate since I really like WHS 2011. So, I bought:

 

Super Micro SC743T-500B case – it has 8 hot-swap bays – I modified it with some quieter fans

HighPoint RR2680SGL

HighPoint Cables

 

I put the MB in the case, installed the RAID card, installed the drivers and WebGUI, and updated the RAID card BIOS. The only peculiar issue is that on a cold boot I get a “PCI device failed to initialize” error. Ctrl+Alt+Del fixes it, so reboots work just fine. I didn’t think I could link this to a MB incompatibility issue after reading about people’s actual MB and RAID card incompatibility issues.

 

I first installed the 3TB drives and created a RAID 5. It worked fine after initialization, and I started copying files to it. I waited two days, then added the 1TB drives and created a second RAID 5. After another two days, I started copying data to the 1TB RAID. Random drives started failing and the RAIDs would be disabled. On reboot they would be fine, and sometimes they would rebuild. This went on for about a week.

 

I opted to remove the 1TB drives and just work with the 3TB drives. I saw on here about molex power connections coming loose from the backplane; my connections were good and the wires were tight in the connector. The 3TB drives continued with the same issue. I also saw on here about RAID card heat issues, so I opened the case and pointed a 4-inch fan at the card. I still had issues. I then thought I would eliminate the backplane: I took the drives out and connected them to power and straight to the cables. Same issue. At some point, I also read that the bracket could interfere with the RAID card being seated properly. I took the bracket off the card and reseated it, but that didn’t help.

 

I went to HighPoint for help. They wanted me to scan the drives with some software they gave me. I scanned them and they had no bad sectors. They were concerned about power to the drives and the power supply not being big enough. I have a USB IDE/SATA drive adapter at home and one at work. I used those adapters’ power supplies for two of the drives and the case’s power supply for one of the drives. One of the drives would still fail. I should add that it would fail when a task to copy files was kicked off. They then turned to NCQ being an issue and wanted me to disable it. Fail.

 

They then stated that they didn’t have the resources to troubleshoot drive incompatibility issues. They would refund my money or RMA my card for a 2720SGL. I took the RMA.

 

The 2720SGL journey…

 

I installed the 2720SGL, loaded the latest drivers, loaded the WebGUI, and updated the BIOS on the RAID card. I then installed the 3TB drives and built the array. All seemed to be working pretty well until I copied a small amount of data (20 gigs); it found a bad sector and was able to repair it. Mind you, the WebGUI and other software that reads S.M.A.R.T. never showed any bad sectors, from the first time this happened to the last time it happened.
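(As a side note, for anyone who wants to double-check S.M.A.R.T. outside the WebGUI: if a drive is temporarily hooked to a motherboard SATA port or a USB adapter rather than sitting behind the RAID card, smartmontools can dump the raw counters. A rough example, assuming smartctl is installed and the drive shows up as the first physical disk:

smartctl -A /dev/sda

Reallocated_Sector_Ct and Current_Pending_Sector are the attributes to watch. Drives behind the HighPoint card may not be visible to smartctl at all, so this is only a cross-check, not what the WebGUI sees.)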

 

After running a nightly backup task and getting a couple of bad sectors each night, I decided to go ahead and put in the 1TB drives. I started a nightly SyncToy task to copy between the two arrays. The event log showed bad sectors from both sizes of drives, but they were always followed by successful repairs.
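(For anyone setting up the same thing: a nightly SyncToy job can be a plain scheduled task calling SyncToyCmd.exe. A rough example, with the task name, start time, and install path only as placeholders:

schtasks /Create /TN "NightlySync" /TR "\"C:\Program Files\SyncToy 2.1\SyncToyCmd.exe\" -R" /SC DAILY /ST 02:00

The -R switch runs all configured folder pairs.)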

 

So, I decided to copy bigger chunks of data (800GB) using Robocopy in separate sessions to each array. This brought on total disk failures and disabled drives. Which drives failed was always random; I tried several times.
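(For reference, these were just plain Robocopy runs to each array in separate command windows. An invocation of that sort might look like the lines below, though the drive letters, paths, and log locations here are only placeholders, not my actual ones:

robocopy D:\Data X:\Data /E /R:2 /W:5 /LOG:C:\Logs\copy-x.log
robocopy D:\Data Y:\Data /E /R:2 /W:5 /LOG:C:\Logs\copy-y.log

/E copies subfolders, /R and /W keep it from retrying a failed file forever, and the log makes it easy to see exactly when a drive dropped out.)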

 

I then remembered that I had always been in the habit of having the drives spin down after 30 minutes. This isn’t the default, so I turned it off, hoping this was the oversight that would fix it all. I let the nightly task run and, what do you know… no bad sectors. After about three days, I tried using Robocopy to move 800GB from a USB drive to the 1TB drives. Random drive failures again.
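(If anyone wants to check the same setting, the Windows-side spin-down timeout lives in the active power plan. Assuming it was set there rather than in the HighPoint WebGUI, something like this clears or restores it:

powercfg /change disk-timeout-ac 0

0 means the disks never spin down; 30 would put the 30-minute timeout back.)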

 

Since I had always tried the 3TB drives first, I took them out in case they were the cause. The nightly task worked fine, but as soon as I tried to move a lot of data I would get failed drives.

 

I really don’t know what to think at this point. I’ve eliminated the backplane, so I think that isn’t the issue. I have two different brands of drives, so I would think the chances of both being incompatible would be slim. Could it still be the motherboard?


Definitely strange and not the typical experience. I have tried these in many different motherboards (but not Intel) with no issues. I assume you updated the system BIOS to the latest? What slot are you using for the RR card? I recently helped someone who had an issue with an older system; it turned out that slowing the bus speed on the Highpoint card solved the problem. If you have updated your BIOS and put the card in the top slot, then you might try moving the jumper on the Highpoint card. It seems rare, but it is worth a shot.


It sounds a little like a handshaking issue, where large loads are overwhelming the control mechanism and causing the RR to think the drive(s) are bad. I would definitely check out pcdoc's suggestion about the bus speed.


Quoting pcdoc’s reply above: “If you have updated your BIOS, put the card in the top slot, then you might try moving the jumper on the Highpoint card.”

 

Just to clarify, I now have the 2720. I'm not seeing a jumper for bus speed. I know there was one on the 2680.

 

I did try the system BIOS update with the 2680, and the system would not boot. It would cycle through powering on and off. I had to unplug it and remove the card.

 

So far with the 2720, I have tried the two latest system BIOS versions, and with both of them the system power cycles and doesn't boot up.

 

The card is in the top slot.

 

Sounds like it just may be an issue with the older MB.


I was only able to upgrade the system BIOS to two versions higher than what I had. I still get the “PCI device failed to initialize” error on cold boots, but as I mentioned before, it works after a Ctrl+Alt+Del.

 

Not sure on the bus speed since I don't have jumpers for it.


I have the same issue as you, biddyman: random drives start failing and the RAID array gets disabled. I have to pull the power plug and put it back in before the status returns to normal.


What kind of drives are you guys using? This issue smells of mixed drives, or some of the old-gen green drives.


Quoting the question above: “What kind of drives are you guys using?”

 

Biddyman did specify the drive models in his post: 3 x 3TB Hitachi HDs (HDS5C3030ALA630) and 5 x 1TB Seagate HDs (ST31000528AS).

 

I agree this stinks of drive compatibility issues.

