Following the launch of the 2.5″ EVO last summer, Samsung has packed all the goodness of the EVO into the mSATA format, and in this review we are testing the Samsung 840 EVO mSATA 1TB in particular. Except for the 750GB model, the mSATA line offers the same capacities as the 2.5″ EVO line, ranging from 120GB up to 1TB.
Not only is Samsung still the only manufacturer using TLC, or 3-bit MLC, NAND, it is also the first to retail a 1TB mSATA SSD. Prior to the Samsung EVO mSATA, the two highest-capacity mSATA products were the Crucial M500 and the Mushkin Atlas, both at 480GB.
| Spec | Detail |
|---|---|
| Capacity | 120 GB / 250 GB / 500 GB / 1 TB |
| AMZ price (5/2014) | $99 / $169 / $290 / $563 |
| Cost per GB | $0.82 / $0.67 / $0.58 / $0.55 |
| NAND Flash | 1y nm Triple Level Cell (TLC) Toggle NAND 2.0 |
| Sequential Write | 520 MB/s |
| Sequential Read | 540 MB/s |
| Random Read | 98K IOPS |
| Random Write | 90K IOPS |
| TRIM Support | Yes (with supported OS) |
| RAID Support | Yes (with RAID controller) |
| Security | AES 256-bit encryption, TCG Opal 2.0 / IEEE 1667 compliant |
| Dimensions | 2.00″ × 1.18″ × 0.15″ |
Internals and Features
The PCB hosts two NAND chips on one side. The other side is crammed with two more NAND chips, the MEX controller and 1GB of LPDDR2-1066 cache. On the 1TB unit, each NAND package offers a capacity of 256GB.
Feature-wise, the mSATA drive is pretty much identical to the 2.5″ EVO series, already explained in my “Samsung EVO 250 and 750 GB” review. Here are the highlights.
At the core of the EVO's write performance is the TurboWrite technology. Although the NAND is TLC (3-bit MLC), a small portion of it is used as emulated SLC, which provides higher write I/O than native TLC. As long as writes stay under the buffer size, they proceed at SLC speed. Once the buffer is full, subsequent writes fall back to TLC performance.
| Samsung EVO Capacity | 120 GB | 250 GB | 500 GB | 750 GB | 1 TB |
|---|---|---|---|---|---|
| TurboWrite Buffer Size | 3 GB | 3 GB | 6 GB | 9 GB | 12 GB |
Outside of ripping Blu-rays or videos, heavy image editing, or large file copies, it is unlikely that a home computer usage pattern writes 3GB+ of data continuously. In most cases, users will be able to enjoy SLC speed at a TLC price.
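To illustrate the fallback behavior, here is a minimal sketch. The buffer sizes come from the table above, but both speed figures are assumptions for illustration (520 MB/s is the rated sequential write; the native TLC speed is hypothetical, not a measured value):

```python
# Sketch of TurboWrite fallback: writes within the SLC buffer run at full
# speed; anything past it drops to native TLC speed. Speeds are assumed.

TURBOWRITE_BUFFER_GB = {120: 3, 250: 3, 500: 6, 750: 9, 1000: 12}

SLC_SPEED_MBPS = 520   # rated sequential write (buffered, SLC-mode)
TLC_SPEED_MBPS = 400   # hypothetical native TLC write speed

def avg_write_speed(capacity_gb, transfer_gb):
    """Average MB/s for one continuous write of `transfer_gb` gigabytes."""
    buf = TURBOWRITE_BUFFER_GB[capacity_gb]
    fast = min(transfer_gb, buf)       # portion absorbed at SLC speed
    slow = max(transfer_gb - buf, 0)   # remainder written at TLC speed
    seconds = fast * 1024 / SLC_SPEED_MBPS + slow * 1024 / TLC_SPEED_MBPS
    return transfer_gb * 1024 / seconds

# A 2GB copy to the 1TB model fits entirely in the 12GB buffer:
print(round(avg_write_speed(1000, 2)))    # 520
# A 24GB copy spends half its data past the buffer:
print(round(avg_write_speed(1000, 24)))   # 452 with these assumed speeds
```

The takeaway matches the table: the larger the drive, the larger the buffer, and the longer a sustained write stays at SLC speed.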
The RAPID technology uses the host computer's DRAM as an extra level of cache. The cache size is dynamic: 25% of system RAM, up to 1GB, split 50/50 between read and write. The most noticeable aspect of the write cache is that it focuses on small random I/O, coalescing the data and writing it back in larger blocks. The read cache is persistent, meaning a copy of the data map is written to disk every so often.
In a laptop configuration with 4GB of RAM, I would think twice about enabling RAPID. If everyday memory usage stays around or under 2.5GB, I would go for it; around or above 3.5GB, I would recommend against it. I would not want to risk maxing out the RAM and pushing the system into swapping to disk, which would be counterproductive.
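The sizing rule described above can be sketched in a few lines; it shows why a 4GB laptop is a borderline case, since 4GB already hits the 1GB cap and surrenders a quarter of its RAM (this is a sketch of the documented rule, not Samsung's actual implementation):

```python
# RAPID cache sizing rule as documented: 25% of system RAM, capped at 1GB,
# split evenly between the read cache and the write cache.

def rapid_cache_mb(ram_gb):
    total = min(ram_gb * 1024 * 0.25, 1024)  # 25% of RAM, 1GB cap
    return total / 2, total / 2              # (read MB, write MB)

for ram in (2, 4, 8):
    read_mb, write_mb = rapid_cache_mb(ram)
    print(f"{ram}GB RAM -> read {read_mb:.0f}MB / write {write_mb:.0f}MB")
```

Any machine with 4GB or more gives up the full 1GB, which is a rounding error on a 16GB desktop but a meaningful slice on a 4GB laptop.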
Dynamic Thermal Guard
Heat is a concern in an enclosed environment such as a laptop or notebook, where airflow is at a premium. The Dynamic Thermal Guard throttles the controller clock if the temperature reaches a certain threshold.
In case the user does not feel like reinstalling the operating system from scratch, the Data Migration utility is pretty straightforward. It always detects the OS drive as the source, which is a good failsafe. It only works if at least one of the drives is a Samsung SSD, and as far as I know, it is Windows-only.
Magician 4.3, also Windows-only, is well designed, and all vital information is readily available. Firmware updates and custom over-provisioning can be set up with one click of the mouse.
With a couple of mouse clicks, the OS is optimized for the SSD; there is no need to dig through Windows registry keys and make changes by hand. When in doubt, “Maximum Reliability” is a good bet.
The RAPID mode is enabled via the Magician with one click of the mouse followed by a system restart.
Overall, the Magician has to be the best SSD utility. The only feature that could be improved is the “Performance Optimization”. It is basically a manual TRIM for Windows versions prior to Windows 7. It would have been nice to be able to schedule the task like the Intel Toolbox. I am really nitpicking here though since it is only useful if the Windows version does not support TRIM.
I went through most of the popular benchmark tools: AS SSD, CrystalDiskMark, ATTO, IOMeter, Anvil's Storage Utilities v1.1.0 and PCMark Vantage. I also used performance monitoring tools such as DiskMon and hIOmon, primarily to validate the tests. Instead of posting chart after chart, I believe what matters to a consumer is how the product fits their needs, not chasing uber-high numbers that are only attainable during benchmarking. I narrowed it down to Anvil's Storage Utilities and the licensed Pro version of PCMark Vantage.
Drive conditioning: The SSDs were prepped with Windows 7 (from an image), filled with about 120GB of data total and benchmarks were run from the tested unit acting as the OS drive.
Steady state: This state occurs over time, once the drive has gone through enough write cycles, or more specifically program/erase (P/E) cycles, that write performance becomes consistent and stable. It may take a few weeks for the SSD to reach it, depending on computing usage, but it can be accelerated using IOMeter.
In summary, steady state requires: Written Data ≥ User Capacity × 2.
Benchmark Workstation Main Components

| Component | Details |
|---|---|
| CPU | Intel Core i3-2120 @ 3.30GHz |
| Motherboard | ASUSTeK P8Z77-V LX |
| RAM | 8,192 MB (4,096 × 2) PNY Optima DDR3 PC3-10666 |
| GPU | NVIDIA GeForce GTS 450 |
| OS | Windows 7 Pro 64-bit |
| Storage Driver | iaStorA 220.127.116.111 |
| OS Hard Drive | The reviewed SSD unit |
What numbers are relevant in a real world usage?
Keep in mind that unlike synthetic benchmarks, which perform one specific operation at a time for a predetermined duration (sequential read, then sequential write, then random read, and so on), real world usage paints a different picture. All four access types can occur at any time, at different transfer rates and in different proportions. For instance, the storage subsystem of a streaming server would mostly see high sequential read I/O with large blocks and little to no writing. A database server without blob data types would probably see around 75% random reads, 20% random writes and 5% sequential I/O. I could either guesstimate the different ratios or come up with a method to define a more accurate I/O usage baseline.
While it is entertaining to run a bunch of benchmarking tools and expect huge numbers, the purpose of testing the units is to get a good look at how they perform under a realistic desktop usage pattern. That is why I picked the PCMark Vantage suite as my usage pattern. By capturing and analyzing I/O during the PCMark Vantage run, disk operations are broken down into read vs. write percentage, random vs. sequential, queue depth and average transfer size.
With that information, benchmarking makes more sense: the numbers do not all carry the same importance, so some results are more valuable than others.
In summary, the I/O pattern defines what I need from the device versus what the device can do overall.
The I/O baseline process was explained in the Intel 525 mSATA review.
From the numbers, I rated the I/O usage by activity as follows: Random Read > Random Write > Seq Read > Seq Write, with an average file size of 128K.
To cover queue depth, I used hIOmon during a full PCMark Vantage run. Its one-week trial version is enough time to build the baseline. Based on the chart below, it is obvious that a benchmark score at QD 16 (or more) does not carry the same weight as a score at QD 1.
Despite the drive being a quarter of the physical size of the 2.5″ EVO, Anvil's Storage Utilities read and write scores show very little difference between the two formats.
The PCMark Vantage scores confirm the ASU results: the performance difference between the 2.5″ and the mSATA format is minimal. With RAPID enabled, the numbers are off the charts, at least as benchmark scores go. In my daily usage, I could not tell whether RAPID was enabled or not. I do believe there should be a difference in performance, since a DRAM cache should improve speed; my explanation is that my home computing usage pattern simply does not stress the SSD enough to make the improvement noticeable.
Do not get me wrong, I am glad that RAPID is available as a feature. I just wish it were a bit more customizable, with options such as the RAM allocation size and additional cache modes: read-only, write-only and read/write.
There are always some doubts regarding the reliability of Samsung's 3-bit MLC compared to 2-bit MLC. The misconception is that TLC NAND based SSDs are unreliable because they store 3 bits per cell. The key word is reliability: TLC has lower endurance than MLC, but it is reliable. TLC was introduced at the Samsung SSD Global Summit in September 2012; if it were unreliable, we would have heard about it by now. TLC is even being validated for server/enterprise use with the Samsung PM853T. I am by no means telling anyone to buy TLC over MLC; I just want potential SSD buyers to have the full picture.
Speaking of buying, based on 5/2014 Amazon prices, the cost per GB is very competitive when compared with the Crucial M500 and the Mushkin Atlas.
| Model | Capacity | AMZ 5/2014 price | $/GB |
|---|---|---|---|
| Mushkin Atlas | 480 GB | $246 | $0.51 |
| Crucial M500 | 480 GB | $220 | $0.46 |
| Samsung EVO mSATA | 500 GB | $290 | $0.58 |
| Samsung EVO mSATA | 1 TB (1024 GB) | $563 | $0.55 |
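The cost-per-GB figures above are easy to verify from the listed prices and capacities (note the 1TB model is computed against 1024GB):

```python
# Recompute the $/GB column from the 5/2014 Amazon prices above.
drives = [
    ("Mushkin Atlas 480GB",     246, 480),
    ("Crucial M500 480GB",      220, 480),
    ("Samsung EVO mSATA 500GB", 290, 500),
    ("Samsung EVO mSATA 1TB",   563, 1024),
]

for name, price_usd, capacity_gb in drives:
    print(f"{name}: ${price_usd / capacity_gb:.2f}/GB")
```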
Last word: there are no real weaknesses to be found in the Samsung EVO mSATA.