What's New in Firmware Release Version 6.0
Version 6.0 of the PS Series Firmware introduces a number of new features, described below.
GUI Enhancements
Version 6.0 introduces several new capabilities for the Group Manager graphical user interface (GUI).
Custom Session Banner
Administrators can specify a customized banner that is displayed before users log in to Group Manager.
Update Checking and Notification
Upon login to Group Manager, the system checks whether Dell has released new information about the array firmware or host software. If Dell has released new firmware or software since the last login, an upgrade icon (a green arrow over a blue semicircle) appears on the bottom toolbar of the Group Manager window. Clicking the icon opens a window that displays information about the available updates.
Session Idle Timeout
Administrators can specify the idle time, in minutes (with a minimum value of one minute), after which the group logs out idle GUI, SSH, Telnet, console, and FTP sessions.
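The same timeout can also be set from the Group Manager CLI. A minimal sketch follows; the grpparams parameter names shown here are assumptions about the likely form and should be checked against the CLI Reference for this release:

    # Sketch only: parameter names are assumed, not taken from this release's documentation
    grpparams session-idle-timeout enable
    grpparams session-idle-timeout-period 15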
Improved Replication Screens
The replication GUI has been updated to more clearly display information about delegated space, inbound replication, outbound replication, and status of in-progress replication operations. In addition, the replication partner configuration process now allows for simultaneous configuration of volume and NAS container replication on the same partner.
Volume Folders
The Volume Folders feature allows you to organize volumes into folders in the Group Manager GUI. You can use this feature to group volumes according to department, function, or other criteria. Volume Folders are an organizational tool only and do not affect the volumes that the folders contain.
Introduction of the PS-M4110
Support for the PS-M4110, a new storage blade array designed for use in an M1000e blade enclosure.
The PS-M4110 has the following features:
- Fourteen 2.5-inch drives: SAS, Nearline SAS, or a combination of SSD and SAS drives.
- One or two Type 13 control modules running PS Series Firmware Version 6.0 or higher.
- The PS-M4110 can operate only when installed in a Dell M1000e Blade Enclosure, through which the array receives power and connects to your network. The M1000e blade enclosure can support four PS-M4110 blades per enclosure.
- The PS-M4110 supports the same system configuration limits as the other PS4xxx array models.
IPsec Support
Version 6.0 of the PS Series Firmware introduces support for IPsec to secure communications between group member arrays, and between iSCSI initiators and the group. The IPsec implementation is designed to comply with the US Government USGv6 requirements. Use policies to configure your IPsec implementation to protect iSCSI traffic based on an initiator IP address, a range of IP addresses, initiators on a specific subnet, or a network protocol.
IPsec is supported only for PS Series array models PS6xxx and PS41x0, as well as the PS-M4110.
The IPsec feature is available only if all members of the group are arrays that support IPsec.
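As a rough CLI sketch, a policy that protects traffic from one initiator subnet with a pre-shared key might look like the following, assuming the feature is configured through an ipsec command family in the CLI. The subcommands, option names, and arguments shown here are assumptions; confirm the actual syntax in the CLI Reference:

    # Sketch only: command and option names are assumed
    # Define security parameters that use a pre-shared key
    ipsec security-params create initiator-psk pre-shared-key key "ExampleKey1"
    # Protect iSCSI traffic from initiators on the 10.10.5.0/24 subnet
    ipsec policy create protect-lab-subnet ip-addr 10.10.5.0 netmask 255.255.255.0 security-params initiator-psk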
Group Manager GUI Available in Multiple Languages
The Group Manager GUI is available in multiple languages.
The language kits will be available shortly after the English version is released.
New Verify Option in MTU
Support for Version 1.2.3 of the Manual Transfer Utility (MTU) is now included. MTU Version 1.2.3 introduces a new Verify option that you can select when you start MTU. If you select the Verify option, MTU verifies, as files are transferred, that the files are readable on the device to which they are transferred and that the data in the transferred files is identical to the data in the source volume.
RAID 5 and No-Spare Configurations Not Recommended
Beginning with this release, the Group Manager GUI no longer includes the option to configure a group member to use RAID 5 as its RAID policy. RAID 5 carries a higher risk of encountering an uncorrectable drive error during a rebuild, and therefore does not offer optimal data protection.
Consequently, Dell recommends against using RAID 5 for any business-critical data.
RAID 5 may still be required for certain applications, depending on performance and data availability requirements. To allow for these scenarios, you can still use the CLI to configure a RAID 5 policy.
Additionally, Dell recommends against using RAID configurations that do not use spare drives.
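For reference, configuring the RAID policy from the CLI takes roughly the following form. This is a sketch based on the existing member command set, with a placeholder member name; verify the exact syntax in the CLI Reference before using it:

    # Configure member array "member1" (a placeholder name) to use RAID 5,
    # which the GUI no longer offers as an option
    member select member1 raid-policy raid5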
Self-Encrypting Drives
A self-encrypting drive (SED) performs Advanced Encryption Standard (AES) encryption on all data stored within that drive. SED hardware handles this encryption in real-time with no impact on performance. To protect your company’s valuable data, an SED will immediately lock itself when it is removed from the array or is otherwise powered down. If the drive is lost or stolen, its contents are inaccessible without the encryption key.
Dell EqualLogic manages SEDs completely automatically; no configuration or setup is required. Drives removed from an SED-secured array cannot be unlocked unless a majority of the drives (not including the spares) are compromised at the same time.
Single Sign-On
Version 6.0 of the PS Series Firmware introduces support for single sign-on, which enables users
who have already logged into their PCs using Windows Active Directory (AD) credentials to log on to
the Group Manager GUI without having to re-specify their AD credentials. To use this feature, you must configure the group so that it is directed to the same AD domain that authenticated the users when they
logged on to their PCs.
Snapshot Space Borrowing
Version 6.0 of the PS Series Firmware introduces support for snapshot space borrowing, which enables temporarily increasing the available snapshot space for a volume by borrowing from the snapshot reserve of other volumes and from pool free space. Snapshot space borrowing helps prevent the oldest snapshots in your collection from being automatically deleted.
When space is needed for other functions (for example, when you create a new volume, or when existing snapshots increase in size because of increased input and output to the volume), the firmware automatically deletes the oldest snapshots from the volume that borrowed the space.
Snapshot space borrowing is intended as a temporary solution for snapshots; it is not meant to solve
long-term free space issues or eliminate the need to provision snapshot reserve appropriately.
Support for PS6500ES, PS6510ES, and PS-M4110 with HDD/SSD Drives
The PS6500ES, PS6510ES, and PS-M4110 offer tiered configurations in which solid-state disks (SSD) and serial-attached SCSI (SAS) drives reside in the same chassis. These products are optimized to provide SSD level performance and hard disk drive (HDD) level capacity for tiered applications.
Only the RAID-6 Accelerated policy is supported on these array models.
RAID-6 Accelerated provides RAID 6 dual-parity protection while optimizing the use of the solid-state drives for the best performance.
In these systems, one hard disk drive is configured as a spare that provides redundancy protection in the
event of either an SSD or HDD failure. In the event of an SSD failure, the RAID set is reconstructed using the HDD spare. During this time, array performance is temporarily degraded. When the failed SSD is replaced,
the data is copied back to the new SSD, the HDD returns to its status as a spare, and performance returns
to optimal levels.
Although the SSDs can be installed in any of the slots in the chassis and function correctly, it is recommended that you install them in slots 0 through 6 in PS6500 and PS6510, and slots 0 through 4 in PS-M4110 arrays. Refer to the hardware maintenance documentation for these appliances for information about how to identify the slot numbers.
Maintaining an on-site spare for SSD drives shortens the period of performance degradation caused by
using the HDD spare in the SSD RAID set.
Synchronous Replication
Version 6.0 introduces Synchronous Replication (sometimes referred to as SyncRep). Synchronous Replication maintains two identical copies of a volume across two different storage pools in the same PS Series group. When Synchronous Replication is enabled for a volume, each write must go to both pools
before it is acknowledged as complete so that two identical copies of the volume data exist simultaneously
in two pools. This method is in contrast to non-Synchronous Replication configurations, in which volume
data is located only in the pool to which the volume is assigned.
Synchronous Replication ensures that a viable, hardware-independent copy of the volume is available in the event of a member failure. If the pool that is currently being used to access the volume becomes unavailable, the volume goes offline until the pool either becomes available again or the administrator manually fails over
the volume to the other pool.
Synchronous Replication also facilitates minimally disruptive maintenance windows. If administrators need to take a group member offline, they can fail over the Synchronous Replication volumes residing in the pool to which the member belongs; the volumes are unavailable to hosts only for the time required to perform the switch and for the volumes to come back online.
Synchronous Replication requires a PS Series group with two or more member arrays that you have configured to contain two or more storage pools.
You cannot use Synchronous Replication at the same time as standard replication on the same volume, although you can use both kinds of replication on different volumes in the same group.
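A minimal CLI sketch of enabling Synchronous Replication on a volume follows. The volume select command exists in the Group Manager CLI, but the syncrep keyword and its argument are assumptions about the likely form; consult the CLI Reference for the actual syntax:

    # Sketch only: the syncrep keyword and pool argument are assumed
    # Keep volume "vol1" synchronously replicated between its current pool and "pool2"
    volume select vol1 syncrep enable pool2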
Volume Undelete
Version 6.0 of the PS Series Firmware introduces support for volume undelete, which enables
the restoration of volumes deleted by mistake. To facilitate the restoration, the firmware places deleted volumes in a volume recovery bin, where they are preserved for up to one week, or until free space is required.
If a volume contains no data, the firmware does not move it to the volume recovery bin when you delete it.
The firmware automatically purges volumes in the volume recovery bin after one week, or when space is needed. You can restore a volume (and rename it during the restore) or purge it manually. If there is insufficient space to restore a volume, the firmware might restore it as a thin-provisioned volume.
The volume recovery bin is enabled by default. If you do not want volume data preserved in the volume recovery bin when volumes are deleted, you can disable this feature (through the CLI only) by using the CLI command recovery-bin volume disable.
Some volume information is not restored when a volume is restored. This includes volume snapshots
and snapshot reserve, Synchronous Replication state, and RAID preferences.
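The CLI sketch below shows the documented disable command alongside assumed commands for listing and restoring deleted volumes; only recovery-bin volume disable comes from this section, and the other command names are hypothetical placeholders:

    # Documented in this release: disable the volume recovery bin
    recovery-bin volume disable
    # Hypothetical placeholders: list the recovery bin contents and restore a volume
    recovery-bin volume show
    recovery-bin volume select deleted-vol1 restore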
Volume Unmapping
Version 6.0 of the PS Series firmware introduces support for using SCSI unmap operations to recover
unused space previously allocated to volumes.
As a host writes data to a volume, the array allocates additional space for that data. Prior to support of the volume unmapping feature, that space remained allocated to the volume, even if the data was deleted from
the volume and hosts were no longer using it. This created a "watermark" effect; the array could not de-allocate the space, and therefore the space was unavailable for allocation to other volumes.
With volume unmapping, the array can reclaim this unused space. When a host connected to a volume
issues a SCSI unmap operation, the array deletes the data and de-allocates the space, making it
available for allocation by other volumes.
Volume unmapping requires that all group members be running Version 6.0 or later of the PS Series firmware. Space de-allocation occurs only if you are using initiators that are capable of sending SCSI unmap operations and only on a "best effort" basis. Although space de-allocation occurs on both thin provisioned volumes and regular volumes, the volume reserve size can potentially shrink as a result of unmap operations only on thin provisioned volumes.
If space is de-allocated, that space might not be immediately available as free space for other volumes
until the array deletes snapshots of those volumes.
Unmap operations are supported in the following operating systems:
- VMware ESX 5.0
- Red Hat Enterprise Linux 6.0
- Windows 8/Windows Server 2012
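Reclamation is initiated from the host side. As an illustration, the commands below cause the initiator to issue SCSI unmap operations for unused blocks; the mount point and drive letter are placeholders, and on Red Hat Enterprise Linux the fstrim utility requires a util-linux version that provides it (the discard mount option is an alternative):

    # Red Hat Enterprise Linux: discard unused blocks on a mounted file system
    fstrim /mnt/psvolume
    # Windows Server 2012 (PowerShell): retrim a volume backed by a PS Series LUN
    Optimize-Volume -DriveLetter E -ReTrim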
Windows 8/Windows Server 2012 Support
Version 6.0 supports the following features related to the Microsoft® Windows® 8 and Windows Server 2012 operating systems.
Thirty-two node clusters – Version 6.0 has been qualified with up to 32 Windows Server 2012 cluster nodes.
Offloaded Data Transfers (ODX) – Version 6.0 supports the Offloaded Data Transfers (ODX) feature. ODX promotes rapid and efficient data transfer using intelligent storage arrays such as the PS Series storage arrays. ODX is automatically enabled in Windows Server 2012.
ODX maximizes the capabilities of your arrays by streamlining the movement of data between volumes, including data on virtual hard disks (VHDs), SMB/CIFS shares, and physical disks. ODX offloads the resource load of Windows 8 data transfers that use native methods (such as RoboCopy, copy/paste, and drag-and-drop) from server CPU and memory, as well as from the network, to the storage array itself. By off-loading the resource load to a high speed SAN, ODX optimizes throughput and reduces drain on Windows server resources.
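ODX is negotiated automatically between the host and the array, so no array-side setup is required. To confirm whether ODX is enabled on a Windows Server 2012 host, Microsoft documents the FilterSupportedFeaturesMode registry value (0 means offload is enabled, 1 means it is disabled); one way to read it from PowerShell is shown below:

    # Check whether offloaded data transfer (ODX) is enabled on the Windows host
    Get-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "FilterSupportedFeaturesMode"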
For complete details, see the PS Series Storage Arrays Release Notes, Version 6.0.x.