Capitalizing on the success of the 840 series, Samsung has released its new TLC-based SSD line, the EVO. With this new line, Samsung broadens its offering to five drives ranging from 120GB up to 1TB.

The first generation of Samsung TLC SSDs stacked up well against MLC drives on read I/O but lagged behind in terms of write I/O. Looking at Samsung's data sheets for the 840 EVO, it appears that Samsung has closed the gap between MLC and TLC. It shows that performance is not solely based on NAND architecture; other components are almost equally important. Let's check the specifications, then I will dive into the review with the Amazon "SSD best sellers", the EVO 250GB and the EVO 750GB.


What’s new?

The 128Gb TLC NAND opens up access to larger capacities, up to 1TB.

SATA 3.1 support: Among the improvements in the SATA 3.1, Queued TRIM Command is the most relevant to SSDs.

New controller, MEX: The fifth-generation controller is still a triple-core ARM Cortex R4, but it is clocked at 400MHz, a 100MHz bump or a 33% clock increase.


TurboWrite: This is the technology at the core of the write I/O improvement. Samsung converted the space normally used for overprovisioning in the first-generation 840 into a write buffer area. That reserved area is used as emulated SLC, which results in faster write I/O. The table below shows the reserved amount depending on the drive size.


Here is how it works: as long as the buffer is not full, writes perform at SLC speed. After that, the drive drops back to TLC speed until the data is transferred to the TLC NAND. The buffer is flushed when the drive is idle. Based on my personal computer usage pattern, my host writes about 10GB/day. Given the 3GB SLC buffer, I expect to mostly see SLC write performance. SanDisk's Extreme II uses a similar technology, called nCache, although its reserved area is smaller, around 1GB.
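As a rough illustration, that burst-vs-sustained behavior can be modeled in a few lines. The speeds and the 3GB buffer size are the approximate figures from this review; the actual buffer management is Samsung's, so treat this as a sketch only.

```python
# Toy model of TurboWrite: writes land at SLC speed until the emulated-SLC
# buffer fills, then drop to TLC speed; the buffer empties during idle time.
# Speeds and the 3GB buffer size are approximations taken from this review.

SLC_SPEED_MBS = 490   # EVO 250GB burst write speed (approx.)
TLC_SPEED_MBS = 240   # sustained TLC write speed, 840-class (approx.)
BUFFER_GB = 3         # TurboWrite buffer on the 250GB EVO

def write_time_seconds(total_gb, buffer_gb=BUFFER_GB):
    """Seconds to write total_gb starting with an empty TurboWrite buffer."""
    fast_gb = min(total_gb, buffer_gb)          # absorbed at SLC speed
    slow_gb = total_gb - fast_gb                # spills over at TLC speed
    return fast_gb * 1024 / SLC_SPEED_MBS + slow_gb * 1024 / TLC_SPEED_MBS

print(round(write_time_seconds(2), 1))    # 2GB burst fits in the buffer
print(round(write_time_seconds(10), 1))   # 10GB burst: 3GB fast, 7GB slow
```

With a 10GB/day workload spread across the day, single bursts rarely exceed 3GB, which is why I expect to see SLC speed most of the time.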

RAPID is the fancy acronym for Real Time Accelerated Processing of I/O Data. It is the result of a collaboration between Samsung and NVELO, a subsidiary of Samsung since December 2012. To my knowledge, only the EVO line offers host DRAM caching for SSDs. The RAPID technology caches both reads and writes. The cache size is dynamic: 25% of the host RAM capacity, up to 1GB, split 50/50 between reads and writes.


The read cache is persistent across reboots, which basically means it writes a copy of the data map to disk every so often. It keeps track of hot data based on access frequency, at the data-block level. That makes sense, since you don't want to fill the cache with one large file. A good example would be caching a PST file, which is usually large: RAPID would only cache the most accessed data blocks, say the last couple of days' worth, instead of loading the whole file into the cache. I see it as an improved version of Windows SuperFetch, because it manages not only applications but also user data. Based on the RAPID description, the caching algorithm used is probably LFU (Least Frequently Used). In this method, each cached block is assigned a counter; repeated accesses to the block increment the counter, and the blocks with the highest counters stay cached.
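To make the LFU idea concrete, here is a minimal block-level LFU cache sketch. This is my reading of Samsung's description, not RAPID's actual implementation; the class and method names are made up for illustration.

```python
from collections import Counter

# Minimal LFU read cache: each cached block carries an access counter and,
# when the cache is full, the least-frequently-used block is evicted.
# This mirrors the policy described above, not RAPID's real code.

class LFUCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = {}         # block_id -> cached data
        self.counts = Counter()  # block_id -> access count

    def get(self, block_id, read_from_disk):
        if block_id in self.blocks:
            self.counts[block_id] += 1       # hit: served from RAM
            return self.blocks[block_id]
        data = read_from_disk(block_id)      # miss: fall through to the SSD
        if len(self.blocks) >= self.capacity:
            victim = min(self.counts, key=self.counts.get)  # coldest block
            del self.blocks[victim], self.counts[victim]
        self.blocks[block_id] = data
        self.counts[block_id] = 1
        return data

cache = LFUCache(capacity_blocks=2)
disk = lambda b: f"data-{b}"
cache.get(1, disk); cache.get(1, disk)  # block 1 accessed twice
cache.get(2, disk)                      # block 2 accessed once
cache.get(3, disk)                      # cache full: evicts block 2, not 1
```

Note how the large-file case falls out naturally: only the blocks of the PST file that keep getting re-read accumulate counts and stay resident.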

The write cache mostly focuses on small random I/O, coalescing the data and writing it back as larger blocks. As such, write I/O performance should see an increase with RAPID enabled. Asked about the potential risk of losing data, due to power loss for instance, Samsung stated that it is as safe as the write-caching policy in Windows, as RAPID complies with the flush buffer command. Windows does warn about potential data loss with its write-caching policy.


The concept behind write caching is to improve performance. When an application writes to a file, the OS buffers the data in a reserved RAM area and delays the write-back to the disk. Data is committed at regular intervals to minimize data corruption in case of power loss or a system crash.
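The buffering-and-delayed-write-back idea can be sketched as follows. The names and the 64KB threshold are illustrative; RAPID's actual collation logic is not public.

```python
# Sketch of write coalescing: small writes accumulate in a RAM buffer and are
# sent to the device as one larger write, either when enough has accumulated
# or when an explicit flush (the command RAPID honors) is requested.

class WriteCoalescer:
    def __init__(self, device_write, flush_threshold=64 * 1024):
        self.device_write = device_write   # callable taking one bytes blob
        self.threshold = flush_threshold
        self.pending = []
        self.buffered = 0

    def write(self, data):
        self.pending.append(data)
        self.buffered += len(data)
        if self.buffered >= self.threshold:
            self.flush()

    def flush(self):
        # Everything buffered goes out as one large write: this is the
        # "collating small random I/O into larger blocks" step.
        if self.pending:
            self.device_write(b"".join(self.pending))
            self.pending.clear()
            self.buffered = 0

device_ops = []                       # stand-in for the physical SSD
wc = WriteCoalescer(device_ops.append)
for _ in range(16):
    wc.write(b"\x00" * 4096)          # sixteen 4KB random writes...
# ...reach the device as a single 64KB operation
```

The power-loss trade-off is visible here too: anything still sitting in `pending` when the power drops is lost, which is exactly what the Windows write-caching warning is about.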


RAPID is a filter driver; it requires Magician 4.2.1 or higher and has some limitations. The feature is not enabled by default, and 840 PRO support is expected in the fall of 2013.

– Only supported on Windows 7 or 8.
– RAPID can only be enabled on one EVO physical drive; partitions are not supported.
– No RAID support.
– May not be compatible with nVidia controllers.

Software Package

In case the user does not feel like reinstalling the operating system from scratch, Data Migration is a pretty straightforward cloning tool. The utility always detects the OS drive as the source, which is a good failsafe. It only works if at least one of the SSDs is a Samsung. As far as I know, it is Windows only.


Magician 4.0, also Windows only, is well designed, and all vital information is readily available. Firmware updates and custom overprovisioning can be set up with one click of the mouse.


With a couple of mouse clicks, the OS is optimized for the SSD. There is no need to navigate through Windows registry keys and make changes. If there is still some hesitation about which configuration to pick, go for the safest, “Maximum Reliability”.


The Magician has to be the best SSD utility out there. The only feature that could be improved is “Performance Optimization”. It is basically a manual TRIM for Windows versions prior to Windows 7. It would have been nice to be able to schedule the task, like the Intel Toolbox does. I am really nitpicking here, though, since it is only useful if the Windows version does not support TRIM.

Before we start crunching the benchmark numbers, let’s review the testing protocol and which performance metrics to look for.

Testing protocol

I went through most of the popular benchmark tools: AS SSD, CrystalDiskMark, ATTO, IOmeter, Anvil’s Storage Utilities RC6 and PCMark Vantage. I also used performance-monitoring tools such as DiskMon and hIOmon, primarily to validate the tests. Instead of posting chart after chart, I believe what matters to a consumer is how the product fits their needs, not chasing uber-high numbers that are only attainable during benchmarking. I narrowed it down to Anvil’s Storage Utilities and the licensed Pro version of PCMark Vantage.

Drive conditioning: The SSDs were prepped with Windows 7 (from an image), filled to about 50% of the storage capacity and benchmarks were run from the tested unit acting as the OS drive.

Steady state: This state occurs over time, once the drive has gone through enough write cycles, or to be more specific program/erase (P/E) cycles, that write performance is consistent, or stable. It may take a few weeks before the SSD reaches it, depending on computing usage, but it can be accelerated using IOmeter.

In summary, Steady State is: Written Data = User capacity x 2, at least.
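As a quick how-to, that rule translates directly into a preconditioning time estimate. The sustained write speed is an assumption; I use the ~240MB/s post-buffer TLC figure from this review.

```python
# Estimate how long a tool like IOmeter needs to run to push a drive to
# steady state, using the rule above: written data >= 2x the user capacity.
# 240MB/s is the assumed sustained (post-TurboWrite-buffer) write speed.

def precondition_minutes(capacity_gb, sustained_mbs=240):
    data_gb = capacity_gb * 2
    return data_gb * 1024 / sustained_mbs / 60

print(round(precondition_minutes(250)))  # 250GB EVO: a bit over half an hour
```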


What numbers are relevant in a real world usage?

Keep in mind that unlike synthetic benchmarks, which perform only one specific operation at a time for a predetermined duration, sequential read, then sequential write, then random read, and so on, real-world usage paints a different picture. All four access types can occur at any time, with different transfer rates and different I/O-access percentages. For instance, a storage subsystem on a streaming server would mostly see high sequential read I/O, large-block reads, with little to no writes. Looking at a database server without blob data types, we would probably see 75% random reads, 20% random writes, and 5% sequential reads and writes. I could either guesstimate the different ratios or find a method to define a more accurate I/O usage baseline.

I/O Baseline

While it is entertaining to run a bunch of benchmarking tools and expect huge numbers, the purpose of testing the units is to get a good look at how they perform under a realistic desktop usage pattern. That is why I picked the PCMark Vantage suite as my usage pattern. By capturing and analyzing I/O during the PCMark Vantage run, disk operations are broken down into percentage read vs. write, random vs. sequential, queue depth, and average transfer size.

With that information, benchmarking makes more sense, since not all numbers carry the same importance; some results are more valuable than others.


In summary, the I/O pattern defines what I need from the device vs. what the device can do overall.

The I/O baseline process was explained in the Intel 525 mSATA review.


From the numbers, I rated the I/O usage by activity as follows: Random Read > Random Write > Seq Read > Seq Write, with an average transfer size of 128K.

To cover queue depth, I used hIOmon during a full PCMark Vantage run. There is a one-week trial version, which is enough time to build the baseline. Based on the chart below, it is obvious that a benchmark score at QD 16 (or more) does not carry the same weight as a score at QD 1.



Samsung EVO 250GB RAPID disabled


Samsung EVO 250GB RAPID enabled


READ 4K -QD1 – QD4 – QD16 (Higher is better)


READ 32K – 128K – SEQ 4MB (Higher is better)

Compared to the 840, the EVO displays a 30%+ increase in performance, mostly due to the new and faster MEX controller. The read performance matches the Samsung PRO version.

With RAPID enabled, and after the third run, the numbers clearly show that the data is read from host RAM. Remember that SATA III throughput tops out at about 550MB/s.


WRITE 4K QD1 – QD4 – QD16 – SEQ 4MB (Higher is better)

Write I/O is the area where TurboWrite improved things greatly compared to the previous-generation TLC 840. The 840 would max out at 240MB/s, while the EVO clocks in around 490MB/s.


PCMark Vantage – HDD – Productivity – Gaming (Higher is better)

PCMark Vantage showed little difference between the Samsung SSDs. With RAPID enabled, and after a few runs, the PCMark Vantage HDD benchmark displays a larger increase in performance, since it is heavily I/O bound.

The final chart below summarizes cost vs. benchmark scores, life expectancy in years, storage capacity and warranty. In other words, this is the “Bang For The Buck” chart. Amazon prices as of October 2013.




TurboWrite brings 840 PRO performance to the EVO line. For everyday usage, outside of file copies, I do not expect to see more than 3GB of write I/O at one time. So as long as write I/O stays under the 3GB threshold of the 250GB EVO, I will be seeing PRO performance without the PRO price tag.

RAPID is an interesting approach, tapping into unused host resources to improve disk-system performance. There is talk about letting the user customize the cache size, which is a good thing. I would like to see the following features added as well:
– Display the cache-hit percentage.
– Provide read-only caching as an option. A read-only cache is safer than a read/write cache: since the data is already stored on disk, there is no risk of data loss. Also, in a desktop environment, the I/O mix is about 75% reads and 25% writes. I believe a persistent read-only cache with 2GB+ of RAM would improve OS responsiveness.

Although large-capacity SSDs, 500GB+, are getting “affordable”, from a price-per-GB standpoint they are still a luxury item. The SSD is the answer to the disk-system bottleneck; at this time, SSDs are about performance, not storage capacity. Assuming there is a need for a total of 1TB of disk space, I believe the ideal desktop disk setup, cost- and performance-wise, would be two 250GB SSDs, one for the OS and one for applications, plus a 500GB platter disk for storage/backup, rather than one 1TB SSD. While the cost saving is obvious, from a performance standpoint, two SSDs would also spread the read/write load and reduce I/O contention. It does not make sense to pay a premium for storage at $0.62/GB (1TB SSD) when you can get $0.048/GB (3TB for $145). For those who can afford large-capacity SSDs, the EVO 750GB and 1TB offer an alternative to the Crucial M500 960GB.

Overall, I find it quite impressive how Samsung is dominating the mainstream SSD market, and with TLC NAND no less.

EVO vs. PRO, which one is “better”?

From a NAND standpoint, MLC is “better” than TLC at sequential writes and P/E cycles. Keep in mind, though, that in a user environment, sequential and/or high-queue-depth writes account for less than 1% of host I/O. Regarding durability, based on 1K P/E cycles per cell, TLC life expectancy would be around 20 years at 10GB/day of host writes. Samsung, however, claims this generation of TLC NAND is rated for 2K+ cycles per cell.
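The 20-year figure can be reconstructed with a back-of-the-envelope calculation. Write amplification is not stated in the review; the factor of 3 below is my assumption, chosen because it roughly reproduces the quoted number.

```python
# NAND endurance estimate: total writable data is capacity x P/E cycles;
# host writes are inflated by write amplification (WA) before hitting NAND.
# WA=3 is an assumption on my part, not a figure from Samsung.

def endurance_years(capacity_gb, pe_cycles, host_gb_per_day, write_amp=3.0):
    total_writable_gb = capacity_gb * pe_cycles
    nand_gb_per_day = host_gb_per_day * write_amp
    return total_writable_gb / nand_gb_per_day / 365

# 250GB TLC at 1K P/E cycles and 10GB/day of host writes:
print(round(endurance_years(250, 1000, 10)))  # ~23 years, near the 20 quoted
```

At Samsung's claimed 2K+ cycles, the same arithmetic simply doubles the result, which is why TLC endurance is a non-issue for this usage pattern.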

From a cost/performance perspective, with host writes around 15GB/day, the EVO is “better”. The TurboWrite technology and the new controller bring the EVO up to PRO performance for less.

The PRO has a “better” warranty, 5 years vs. 3 years.

The PRO is “better” if there is a need for large write operations, rendering or video editing for instance. The PRO’s performance remains consistent, while the EVO’s drops once the written data exceeds the SLC buffer size.

Taking into consideration my usage pattern and cost per performance, I would have no hesitation putting an EVO in my main computer. What would you do? Share your thoughts in the comments section below!

