Samsung SM863 480GB SATA Enterprise SSD Review

Review: Samsung SM863 Enterprise SSD

Reviewed by: J.Reynolds


Provided by: Samsung

Firmware version: 0x3

 


 

Introduction

Welcome to Myce’s review of the Samsung SM863 Enterprise
SSD.

The SM863 is available in capacities of 120, 240, 480, 960, and 1,920GB. The 480GB drive is the subject of this review.

Logically, the SM863 replaces the 845DC PRO in Samsung’s
product portfolio.  The SM863 introduces the use of 32 Layer, MLC V-NAND whereas
the 845DC PRO used 24 Layer, MLC V-NAND.

Will the SM863 improve upon the 845DC PRO, which we awarded
our ‘Outstanding’ rating? Please read on to find out.


Market Positioning and Specification

Market Positioning

This is how Samsung positions the SM863 –

 

The SM863 480GB targets write intensive applications and is rated for an endurance of 3,080 TBW (Terabytes Written) over 5 years (equivalent to 3.5 DWPD). The SM863 uses 32 Layer, 3D MLC V-NAND and Samsung’s Mercury Controller.
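The relationship between TBW and DWPD is simple arithmetic, sketched below using the figures from the specification above:

```python
# Convert a rated endurance in TBW to Drive Writes Per Day (DWPD).
# The figures used below are taken from Samsung's SM863 480GB rating:
# 3,080 TBW over a 5 year warranty on a 480GB drive.
def tbw_to_dwpd(tbw, capacity_gb, warranty_years):
    total_writes_gb = tbw * 1000          # TB -> GB (decimal units)
    days = warranty_years * 365
    return total_writes_gb / (capacity_gb * days)

dwpd = tbw_to_dwpd(tbw=3080, capacity_gb=480, warranty_years=5)
print(round(dwpd, 1))  # -> 3.5
```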

At the outset it appears that the SM863 does not match the random write performance or endurance of its predecessor, the 845DC PRO. However, it is important to note that the SM863 range has reduced the amount of NAND provisioned for use by the Controller. For example, the 845DC PRO equivalent to the SM863 480GB offers only 400GB of usable capacity, leaving 80GB more provisioning for the Controller. So it is possible that if only 400GB of the SM863 480GB’s NAND were made available for user data, the performance and endurance of the SM863 480GB would equal or exceed those of the 845DC PRO 400GB. We are unable to test for endurance, but we will look at performance in this review.
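Assuming, purely for illustration, that both drives carry the same raw NAND (the 512GB raw figure below is our assumption, not a published Samsung number), the over-provisioning difference can be sketched as:

```python
# Hypothetical over-provisioning comparison between the SM863 480GB and
# the 845DC PRO 400GB. The 512GB raw NAND figure is an assumption made
# for illustration, not a published Samsung specification.
def op_percent(raw_gb, usable_gb):
    # Over-provisioning is conventionally quoted relative to usable capacity.
    return (raw_gb - usable_gb) / usable_gb * 100

print(round(op_percent(512, 480), 1))  # SM863 480GB    -> 6.7 (%)
print(round(op_percent(512, 400), 1))  # 845DC PRO 400GB -> 28.0 (%)
```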

Specification

Here is Samsung's specification for the SM863 480GB –

 

Here is a picture of the SM863 that I tested -

 

You can see that the SM863 has the now-familiar Samsung black plastic case.

 


Now let's head to the next page, to look at Myce’s
Enterprise Testing Methodology.....

 

NEW PAGE

Testing Methodology

Please click
here
to view or download a detailed introduction to Myce’s Enterprise Class Solid
State Storage (‘SSS’) Testing Methodology as a PDF.

Put briefly:

All testing is performed on an OakGate Technology test unit

We perform two sets of Performance Tests:

1. A full set of the Storage Network Industry Association’s (‘SNIA’) tests with mandatory parameters, as specified in their Solid State Storage Performance Test Specification Enterprise V1.0.

2. A set of tests, known as the ‘Myce/OakGate Full Characterisation Test Set’, that provides readers with a fuller characterisation of the solution.

Comprehensive power consumption testing is performed using
Quarch hardware as documented here.

We also review other important factors such as Data
Reliability and Failover features.

A word about SNIA testing – before striking a partnership with OakGate Technology I spent some time researching how I might implement SNIA testing using freely available tools such as IOMeter and FIO. I concluded that whilst it was theoretically possible, it was impractical. The reason is that, without the automation offered by a test bench such as the OakGate unit, the only way to meet the SSS PTS requirements is to run the maximum number of test cycles and then manually look back at the results to determine when/if steady state has been achieved in the workload specific test cycle, and then harvest the data from the qualifying Measurement Window. This means that the test runs would always take the maximum elapsed time, and a great deal of human effort would be required to review, gather, and report upon the data. I empathise with, acknowledge, and respect the efforts of other reviewers who endeavour to meet the SNIA’s principles in their testing – I am privileged and thankful to be able to use a superb test bench which automates the whole process and allows me to meet the SNIA’s specification in full.

Before we move on, let’s remind ourselves of some basics –

When reviewing the performance of an SSS solution there are
three basic metrics that we look at:

1. IOPS – the number of Input/Output Operations per Second

2. Bandwidth – the number of bytes transferred per second (usually measured in Megabytes per second, ‘MB/s’)

3. Latency – the amount of time each IO request takes to complete (usually, in the context of SSS solutions, measured in Microseconds, which are millionths of a second)

It is true to say that IOPS and Bandwidth had both been growing rapidly before the advent of SSS solutions, but Latency can only be significantly decreased by eliminating mechanical devices, and thus Latency is the single most important aspect that SSS solutions deliver to enhance performance.

Latency in a technical environment is synonymous with delay.
In the context of an SSS solution it is the amount of time between an IO
request being made, and when the request is serviced.

Bandwidth, also commonly referred to as ‘Throughput’, is the
amount of data that can be transferred from a storage device to a host, in a
given amount of time.  In the context of SSS solutions it is typically measured
in Megabytes per second (MB/s). 
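IOPS and Bandwidth are linked by the transfer size: Bandwidth is simply IOPS multiplied by the block size. A minimal sketch (the 75,000 IOPS figure is illustrative, not a measured SM863 result):

```python
# Bandwidth follows directly from IOPS and the IO transfer size.
def bandwidth_mbs(iops, block_size_kib):
    # Convert KiB-sized transfers per second into decimal MB/s,
    # as drive specification sheets typically quote.
    return iops * block_size_kib * 1024 / 1_000_000

# Illustrative figure only: 75,000 IOPS at a 4KiB block size.
print(round(bandwidth_mbs(75_000, 4), 1))  # -> 307.2 (MB/s)
```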

A great enterprise SSS solution offers an effective balance of all three metrics. High IOPS and Bandwidth are simply not enough if Latency (the delay in an IO operation) is too high. As we will see in the test results presented below, as Latency increases IOPS will inevitably decrease.

Queue Depth is the average number of IO requests outstanding. If you are running an application and the Average Queue Depth is one or higher while CPU utilisation is low, then the application’s performance is most probably suffering from a ‘Storage Bottleneck’.
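The interplay of the three metrics follows Little’s Law: at a given Queue Depth, IOPS is bounded by Queue Depth divided by average Latency. A small illustration:

```python
# Little's Law applied to storage: IOPS = Queue Depth / Latency.
def iops_from_latency(queue_depth, latency_us):
    latency_s = latency_us / 1_000_000   # microseconds -> seconds
    return queue_depth / latency_s

# At Queue Depth 1, a 100 microsecond average latency caps the drive
# at 10,000 IOPS, no matter how fast the NAND behind it is.
print(round(iops_from_latency(1, 100)))  # -> 10000

# Doubling latency at the same queue depth halves achievable IOPS.
print(round(iops_from_latency(1, 200)))  # -> 5000
```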

Another important factor to remember is that SSS performance is influenced by previous workloads, not just the current workload, and especially by what has previously been written to the drive. As specified in the SNIA SSS PTS, the goal of all good Enterprise level testing is to provide consistent circumstances, so that results can be compared fairly across different SSS solutions – it is for this reason that all of our tests start with a purge of the drive, so that it starts in a ‘Fresh Out of the Box’ (FOB) state.

Most tests then have a pre-conditioning phase where the drive is put into a ‘Steady State’ before the test phase begins. Put briefly, a ‘Steady State’ is achieved when the performance of the drive no longer varies over time and settles into a consistent level of performance for the workload in hand. You can find a detailed explanation of ‘Steady State’ and how it is determined in the SNIA tests in our Enterprise Testing Methodology paper, which can be viewed or downloaded as a PDF by clicking here.
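As a rough sketch of how such a check works over a five-round measurement window (thresholds paraphrased from the SSS PTS – the specification itself is normative):

```python
# A sketch of an SNIA-style steady state check over a five-round
# measurement window: the workload is treated as steady when the spread
# of the values stays within 20% of the window average and the best-fit
# slope excursion stays within 10% of it. (Thresholds paraphrased from
# the SSS PTS; consult the specification for the normative definition.)
def is_steady_state(iops_window):
    n = len(iops_window)
    avg = sum(iops_window) / n
    # Data excursion: spread of the raw values around the average.
    if (max(iops_window) - min(iops_window)) > 0.20 * avg:
        return False
    # Slope excursion: least-squares slope across the window.
    xs = range(n)
    x_mean = sum(xs) / n
    slope = sum((x - x_mean) * (y - avg) for x, y in zip(xs, iops_window)) \
            / sum((x - x_mean) ** 2 for x in xs)
    return abs(slope * (n - 1)) <= 0.10 * avg

# A flat run qualifies; a run still climbing down from FOB does not.
print(is_steady_state([50_000, 50_400, 49_800, 50_100, 49_900]))  # True
print(is_steady_state([80_000, 70_000, 62_000, 55_000, 50_000]))  # False
```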

For interest, here are some
generally accepted assumptions that differentiate the use and therefore the
approach to testing Enterprise/Server and Consumer/Client SSS solutions:

Enterprise/Server SSS
assumptions:

1. The drive is always full

2. The drive is being accessed 100% of the time (i.e. the drive gets no idle time)

3. Failure is catastrophic for many users

4. The Enterprise market chooses SSS solutions based on their performance in steady state – and steady state, full, and worst case are not the same thing

Consumer/Client SSS
assumptions:

1. The drive typically has less than 50% of its user space occupied

2. The drive is accessed around 8 hours per day, 5 days per week, and typically data is written far less frequently

3. Failure is catastrophic for a single user

4. The consumer/client market generally chooses SSS solutions based on their performance in the FOB state

 

Now let's head to the next page, to look at the results
of our SNIA IOPS (Input/Output Operations per Second) Test.....
