Partitioning has no negative impact on SSD life. If some vendor's design does make partitioning hurt the drive, the problem is that vendor's design. For example: take a 200 GB mechanical hard disk and a 200 GB SSD, divide each into two 100 GB partitions, C and D, and then write constantly to C while never touching D. On the mechanical disk, 100 GB of physical space is simply wasted; nothing is ever written there. On the SSD this has no such effect, because the firmware can dynamically map the 0-100 GB logical addresses (LBAs, logical block addresses) onto the entire 200+ GB of physical space (200+ GB rather than 200 GB, because an SSD's real capacity is larger than its nominal capacity; the extra space gives the firmware room to move data around during background housekeeping). So unlike the mechanical disk, where only a fixed 100 GB of physical media gets written and would wear out first, the SSD spreads the wear across everything. Even more counterintuitively, writing only 100 GB while leaving the other 100 GB completely untouched will actually extend the SSD's life
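The dynamic mapping described above can be sketched as a toy flash translation layer (FTL). Everything here is illustrative, not any vendor's actual design: real FTLs also handle garbage collection, pages vs. erase blocks, and wear-leveling heuristics. The point is only that repeated writes to one logical address land on rotating physical blocks.

```python
# Toy FTL sketch: 4 logical blocks exposed to the host, 8 physical
# blocks on the flash (the extra 4 stand in for over-provisioning).
# All class and variable names are made up for illustration.

class ToyFTL:
    def __init__(self, physical_blocks=8):
        self.mapping = {}                          # LBA -> physical block
        self.free = list(range(physical_blocks))   # free physical blocks
        self.erase_counts = [0] * physical_blocks  # wear per block

    def write(self, lba, data):
        # Out-of-place write: every write goes to a fresh physical
        # block, so hammering one LBA still spreads wear around.
        old = self.mapping.get(lba)
        new = self.free.pop(0)
        self.mapping[lba] = new
        if old is not None:
            self.erase_counts[old] += 1    # old copy is erased/recycled
            self.free.append(old)

ftl = ToyFTL()
for _ in range(100):
    ftl.write(0, b"hot")       # hammer a single "C partition" LBA
print(ftl.erase_counts)        # wear is spread over all 8 physical blocks
```

A mechanical disk has no such indirection: LBA 0 is always the same physical sector, which is why the two cases in the example above behave so differently.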
(Life here means the total amount of data that can be written before the SSD dies.) Storage has the notion of hot and cold data. In the example above, the 100 GB of logical addresses being written nonstop are hot LBAs, and the untouched 100 GB are cold LBAs. Every SSD vendor has some algorithm in its firmware, more or less effective, for telling hot data from cold data and treating them differently, and a workload with a clear hot/cold split is exactly what such algorithms exploit: the write amplification factor (WAF) comes out smaller than if every LBA were written with equal probability, so the drive lasts longer. Of course, if an SSD has no ability to distinguish hot from cold data while the workload has an obvious hot/cold split, life can suffer, but even then it has nothing to do with your partitions: the first 512-byte logical block of your D partition has LBA 100G/512 + 1 with the partition, and 100G/512 + 1 without it; nothing changes. In the end this is all moot anyway. Ordinary users should not overestimate how much data they produce: you will almost certainly never write enough to exhaust the program/erase (PE) cycles of the NAND flash and age the SSD to death. There was a very interesting report at last year's Flash Memory Summit. An enterprise storage server vendor tracked SSD usage in one of its products and found that 97% of its enterprise SSD users write less than 0.2 PE per day. Ordinary users write even less. A firmware bug, a defective flash chip that slipped through inspection, or a circuit failure caused by excessive humidity is far more likely to kill your SSD than writing it to death is. The conclusion: partition however you like and don't worry about lifespan.
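The LBA claim above is just arithmetic. Counting sectors from zero, the D partition begins right at the 100 G boundary (the text's "+1" is the same block counted from one), and that number does not depend on whether a partition table exists:

```python
# The first 512-byte logical block of the D partition sits at the
# 100 GiB boundary whether or not the disk is partitioned.

GiB = 1024 ** 3
SECTOR = 512

lba_of_d_start = 100 * GiB // SECTOR
print(lba_of_d_start)   # 209715200
```

Since the firmware only ever sees LBAs and their access pattern, the partition table is invisible to the wear-leveling logic.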
The more general conclusion: use the drive however you want, pay attention to speed, and pay much less attention to lifespan.

Update: shingled magnetic recording (SMR) is now widely used in mechanical hard disks, so the mechanical-disk half of the example only applies to some current drives and to older ones.

Update: I got the numbers wrong above. Checking the original report, "All-Flash Arrays Require Scalable Cost-Efficient Software-Defined Architectures" by Shachar Fienblit, 97% of enterprise users write less than 0.15x the SSD's total capacity per day, and that figure is host data before write amplification; after amplification it comes to roughly one full SSD write per day. But even the cheapest enterprise SSDs are generally designed for about one full drive write per day after amplification, so the load is still well within spec. I couldn't find equivalent data for consumer SSDs, but it should be similar.
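A back-of-the-envelope lifetime estimate makes the same point. The 0.15 drive writes per day comes from the report cited above; the PE-cycle rating and the WAF below are assumptions I picked for illustration, not figures from the report:

```python
# Rough SSD lifetime estimate. Only drive_writes_per_day comes from the
# cited report; pe_cycles and waf are assumed example values.

pe_cycles = 3000              # assumed PE rating, e.g. consumer TLC NAND
drive_writes_per_day = 0.15   # host writes per day, per the report
waf = 3.0                     # assumed write amplification factor

effective_dwpd = drive_writes_per_day * waf    # writes reaching the flash
lifetime_years = pe_cycles / effective_dwpd / 365
print(round(lifetime_years, 1))   # ~18.3 years
```

Even with pessimistic write amplification, the flash outlives any realistic service life of the machine, which is why firmware bugs and hardware defects dominate real-world SSD failures.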