Because of Hurricane Matthew, our business shut down all servers for two days.

One of these servers is an ESXi host with an attached HP StorageWorks MSA60.

Once we logged in to the vSphere client, we noticed that none of our guest VMs are available (they're all listed as "inaccessible"). And when we look at the hardware status in vSphere, the array controller and all attached drives appear as "Normal", but the drives all show up as "unconfigured disk".

We rebooted the server and tried going into the RAID config utility to see what things look like from there, but we received the following message:

"An invalid drive movement was reported during POST. Modifications to the array configuration after an invalid drive movement can result in loss of old configuration information and contents of the original logical drives."

Of course, we are extremely confused by this because absolutely nothing was "moved"; nothing changed. We simply powered up the MSA and the server, and have been having this issue ever since.

I have two main questions/concerns:

Since we did nothing more than power the devices off and back on, what could've caused this to happen? We of course have the option to rebuild the array and start over, but I'm leery about the possibility of this happening again (especially since I have no idea what caused it).

Is there a snowball's chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore our VM backups?

I have two main questions/concerns:

  1. Since we did nothing more than power the devices off and back on, what could've caused this to happen? We of course have the option to rebuild the array and start over, but I'm leery about the possibility of this happening again (especially since I have no idea what caused it).

A variety of things. Do you schedule reboots on all of your equipment? If not, you really should, if only for this reason. The one host we have, XS decided the array wasn't ready in time and didn't mount the main storage volume on boot. Always good to know these things in advance, right?
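If you want to catch that kind of thing early, even a simple post-boot check that the expected volume actually mounted is worth having. Here's a minimal sketch, assuming a Linux-style host where the MSA-backed volume lands at a known mount point; the path is just a placeholder, and on ESXi you would check the datastore through its own tooling instead.

    #!/usr/bin/env python3
    # Post-boot sanity check: is the expected storage volume actually mounted?
    # The mount point below is a placeholder; adjust to wherever the MSA-backed
    # volume is supposed to land on your host.
    import os
    import sys

    EXPECTED_MOUNT = "/mnt/msa_volume"  # placeholder path (assumption)

    if os.path.ismount(EXPECTED_MOUNT):
        print(f"OK: {EXPECTED_MOUNT} is mounted")
        sys.exit(0)

    print(f"WARNING: {EXPECTED_MOUNT} is NOT mounted after boot")
    sys.exit(1)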

  2. Is there a snowball's chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore our VM backups?

Maybe, but I've never seen that particular error. We're talking very limited experience here. Depending on which RAID controller the MSA is connected to, you may be able to read the array information from the drives on Linux using the md utilities, but at that point it's faster just to restore from backups.
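For what it's worth, the quick check I had in mind looks roughly like the sketch below; it assumes the disks show up as plain /dev/sd* devices on a Linux box with mdadm installed, and the device names are placeholders. If the Smart Array metadata isn't md-compatible, mdadm will just report that it can't find a superblock, which at least tells you quickly whether this route is a dead end.

    #!/usr/bin/env python3
    # Ask mdadm whether it can find any RAID metadata on a set of drives.
    # Device names are placeholders; adjust to whatever the MSA disks
    # enumerate as when attached to a Linux box.
    import subprocess

    DRIVES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]  # placeholder device names

    for drive in DRIVES:
        print(f"--- {drive} ---")
        # 'mdadm --examine' prints any md superblock it finds on the device;
        # if there is none, it exits non-zero with an error message instead.
        result = subprocess.run(
            ["mdadm", "--examine", drive],
            capture_output=True,
            text=True,
        )
        print(result.stdout or result.stderr)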

A variety of things. Do you schedule reboots on all of your equipment? If not, you really should, if only for this reason. The one host we have, XS decided the array wasn't ready in time and didn't mount the main storage volume on boot. Always good to know these things in advance, right?

I actually rebooted this server multiple times about a month ago when I installed updates on it. The reboots went fine. I also completely powered that server down around the same time because I added more RAM to it. Again, after powering everything back on, the server and RAID array information was all intact.

Does your normal reboot routine of the server include a reboot of the MSA? Could it be that they were powered back on in the wrong order? MSAs are notoriously flaky, likely that's where the issue is.

I'd call HPE support. The MSA is a flaky unit but HPE support is very good.

We unfortunately don't have a "normal reboot routine" for any of our servers :-/.

I'm not really sure what the proper order is :-S. I would assume that the MSA would get powered on first, then the ESXi host. If this is correct, we have already tried doing that since we first discovered this issue today, and the problem persists :(.

We don't have a support contract on this server or the attached MSA, and they're probably out of warranty (ProLiant DL360 G8 and a StorageWorks MSA60), so I'm not sure how much we would have to spend to get HP to "help" us :-S.
