Larger SSDs Have More Data Channels and DRAM
An SSD consists of NAND memory chips arranged in clusters, connected to an SSD controller. The controller is an intelligent device that decides where to physically store data on the SSD.
With more clusters of memory chips, each connected to the controller by a dedicated data bus, the controller can read and write data in parallel. The more clusters you have, the more independent paths there are to move data back and forth. This has a cumulative effect on performance, since each independent memory module can send and receive data without affecting the others.
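To make that concrete, here's a minimal, purely illustrative sketch of how ideal-case throughput might scale with the number of channels. The per-channel bandwidth figure is invented for illustration, not taken from any real drive.

```python
# Illustrative only: a toy model of how aggregate SSD throughput scales
# with the number of independent NAND channels. The per-channel bandwidth
# figure is a made-up placeholder, not a spec for any real drive.

PER_CHANNEL_MBPS = 400  # hypothetical bandwidth of one NAND channel

def aggregate_throughput(channels: int) -> int:
    """Ideal-case throughput when every channel transfers data in parallel."""
    return channels * PER_CHANNEL_MBPS

for channels in (2, 4, 8):
    print(f"{channels} channels -> ~{aggregate_throughput(channels)} MB/s (ideal)")
```

In practice the scaling isn't perfectly linear, since the controller, the host interface, and the workload all impose their own limits, but more channels means more headroom.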
Smaller Drives Get Fuller Faster
SSDs are fastest when they are new and relatively empty. That’s because an SSD has to erase an entire block of memory cells before writing to it. If all the memory cells are empty, the drive simply writes to the empty space. However, if the block is partially filled, the drive first has to copy the existing data into a cache, erase the block, and then write both the old and the new data back to it.
This adds overhead to the drive’s operations and slows things down. That’s why SSDs erase blocks that have been marked for deletion in the background and do “housekeeping” to consolidate data, minimizing the number of partially filled memory blocks.
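The following toy model (not real firmware logic; the block size and operation counts are placeholders) tallies the extra work involved when a write lands in a partially filled block versus an empty one.

```python
# A toy model of why writing into a partially filled block costs more:
# the drive must read the surviving data, erase the whole block, then
# write everything back along with the new data. Numbers are illustrative.

PAGES_PER_BLOCK = 64  # hypothetical pages per erase block

def write_cost(new_pages: int, live_pages_in_block: int) -> dict:
    """Count the page operations needed to add `new_pages` to a block
    that already holds `live_pages_in_block` pages of valid data."""
    if live_pages_in_block == 0:
        # Empty block: just program the new pages.
        return {"reads": 0, "erases": 0, "writes": new_pages}
    # Partially filled block: read-modify-erase-write cycle.
    return {
        "reads": live_pages_in_block,               # copy existing data to cache
        "erases": 1,                                # erase the whole block
        "writes": live_pages_in_block + new_pages,  # write old + new data
    }

print(write_cost(new_pages=8, live_pages_in_block=0))   # fresh, empty block
print(write_cost(new_pages=8, live_pages_in_block=40))  # mostly full block
```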
The fuller your SSD is, the fewer empty blocks there are to write to, and a larger SSD holding the same amount of data is less likely to be as full as a smaller one. This is another reason smaller drives may degrade in performance more quickly.
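A quick back-of-the-envelope calculation shows the effect: the same 200GB of data leaves a 256GB drive mostly full but a 1TB drive mostly empty.

```python
# Rough arithmetic: the same amount of stored data leaves far more free
# space (and therefore more ready-to-write blocks) on a larger drive.

data_stored_gb = 200
for capacity_gb in (256, 1000):
    free_pct = 100 * (capacity_gb - data_stored_gb) / capacity_gb
    print(f"{capacity_gb} GB drive: ~{free_pct:.0f}% free")
```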
There’s More to SSD Performance
While having more memory modules in parallel does increase performance, this is only one aspect of SSD performance. The type of memory affects the fundamental speed at which memory blocks can be erased and written, so if the two drives you’re comparing also differ in the type of memory modules they use, that difference will show up in performance as well.
The SSD controller is absolutely crucial to performance as well. The intelligence of the controller when it comes to predicting which data to cache or how to shuffle data around to ensure the drive is always performing well has significant real-world effects. In other words, an SSD’s brain matters just as much as its brawn.
Speaking of caches, larger drives may have proportionally larger allocations of cache memory. The larger a drive’s cache is, the better it performs when sustaining large transfers or responding quickly to frequent data requests. The same is true for mechanical hard disk drives (HDDs): of two drives that are otherwise identical, the one with more cache memory will perform better.
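As a rough illustration, here's a simplified model (all figures invented, and real drives manage their caches in more sophisticated ways) of how a larger write cache can lift the average speed of a big sequential transfer: writes land at cache speed until the cache fills, then drop to the slower native write speed.

```python
# A simplified sketch with invented numbers: writes complete at cache
# speed until the cache fills, then fall back to the slower native
# NAND write speed for the remainder of the transfer.

CACHE_SPEED_MBPS = 3000   # hypothetical cached write speed
NAND_SPEED_MBPS = 800     # hypothetical post-cache write speed

def average_write_speed(transfer_gb: float, cache_gb: float) -> float:
    """Average speed for one large sequential write, in MB/s."""
    transfer_mb = transfer_gb * 1000
    cached_mb = min(transfer_mb, cache_gb * 1000)
    uncached_mb = transfer_mb - cached_mb
    seconds = cached_mb / CACHE_SPEED_MBPS + uncached_mb / NAND_SPEED_MBPS
    return transfer_mb / seconds

for cache_gb in (10, 40):
    print(f"{cache_gb} GB cache: ~{average_write_speed(100, cache_gb):.0f} MB/s "
          "average over a 100 GB write")
```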
Why Not Make Small SSDs More Parallel?
Inevitably, one has to wonder why smaller drives aren’t given the same number of memory modules as larger drives. Simply make the modules smaller, right? This would work in theory, but the economic realities of memory production make this a bad idea.
There’s a cost floor below which a memory module can’t be made, no matter how low its capacity is, because certain manufacturing costs are fixed regardless of capacity. You’ll see something similar with traditional mechanical drives: a 120GB and a 250GB hard drive may cost virtually the same to make, which means no one is going to bother making the smaller-capacity drive.
The memory modules used in smaller SSDs represent the best available balance between per-module cost and capacity. In other words, a small drive built from many low-capacity modules would cost nearly as much as a higher-capacity drive with the same number of larger modules, so there’s little reason to build one.
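Here's a back-of-the-envelope cost model (all figures invented) that captures the idea: with a fixed per-module cost, shrinking the modules barely lowers the price of the drive.

```python
# A back-of-the-envelope cost model showing why very small NAND modules
# don't save much money: a fixed per-module cost exists regardless of
# capacity. All figures are invented for illustration.

FIXED_COST_PER_MODULE = 8.0   # hypothetical packaging/testing cost per module
COST_PER_GB = 0.05            # hypothetical cost of the NAND itself

def drive_cost(modules: int, gb_per_module: int) -> float:
    """Total cost of a drive built from `modules` identical NAND modules."""
    return modules * (FIXED_COST_PER_MODULE + gb_per_module * COST_PER_GB)

# Eight small modules vs. eight large modules: the smaller drive is not
# proportionally cheaper, because the fixed costs are identical.
print(drive_cost(modules=8, gb_per_module=32))    # ~256 GB drive -> 76.8
print(drive_cost(modules=8, gb_per_module=128))   # ~1 TB drive  -> 115.2
```

Under these made-up numbers, quadrupling the capacity costs only about 50% more, which is why manufacturers don't bother building small drives with lots of tiny modules.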