The study

Our thesis was implemented as a small application that could be downloaded through the Google and Microsoft gadget libraries. It was a free application that checks the fragmentation level of the user's disks. More than 300,000 users participated in this stage, and 100,000 of them responded with the value they obtained from the calculation to allow further analysis.

The factor that measures the fragmentation level (we called it the Lace level) is a statistical figure that relates solely to the disorder of the data on the device.
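
To make the idea concrete, here is a minimal Python sketch of one possible disorder statistic computed over file extents. The Lace level formula itself is not disclosed in this article, so the function below (disorder_level) and its extent-based definition are illustrative assumptions only, not the patented calculation.

def disorder_level(files):
    # files: a list of files, each given as its (start_block, length) extents
    # in logical order. Returns a value between 0 (fully ordered) and 1.
    total_pairs = 0
    broken_pairs = 0
    for extents in files:
        for (prev_start, prev_len), (cur_start, _) in zip(extents, extents[1:]):
            total_pairs += 1
            # A pair is "broken" when the next extent does not continue
            # immediately after the previous one on the device.
            if cur_start != prev_start + prev_len:
                broken_pairs += 1
    return broken_pairs / total_pairs if total_pairs else 0.0

# Example: one contiguous file and one badly scattered file.
print(disorder_level([
    [(0, 8), (8, 8), (16, 8)],       # contiguous: contributes no broken pairs
    [(100, 4), (900, 4), (250, 4)],  # every step is a jump: all pairs broken
]))  # prints 0.5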

The results

Analyzing the results, we discovered that the disk size does not play a major role in the effects of disk fragmentation. We found slowdown symptoms also on disks that were not full. Therefore, the upper limit of the calculation was not set to the disk size, but to the highest allocated point on the disk. The following graph shows the fragmentation-level distribution among the disks that were checked:

The fact that the distribution found is Gaussian (bell-shaped) encouraged us to continue exploring the phenomenon more deeply.

Analyzing the results

Only 2% of the checked disks were found to be in a critical state, but another 10% were very close to it.
Measuring the success of defragmentation according to the initial fragmentation level showed that waiting with maintenance until the disk enters the RED zone makes the maintenance significantly less effective.
We tested this on a mirrored server (in which every update was written to both disks): one disk ran automatic defragmentation at a fixed fragmentation level, while the other was maintained with the current method (activated when the user sees slowdown symptoms). The difference was easy to see: maintaining storage devices only after symptoms appear gave poor results.
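
As a rough illustration of the two maintenance strategies compared on the mirrored server, the short Python sketch below contrasts a fixed-level trigger with a symptom-driven one. The threshold values and function names are assumptions made for this sketch; they are not the levels used in the study.

FIXED_LEVEL = 0.35   # assumed fixed trigger level for the automatic disk
RED_ZONE = 0.80      # assumed level at which users notice slowdown symptoms

def proactive_maintenance(lace_level, defragment):
    # Disk A: defragment automatically whenever the fixed level is crossed.
    if lace_level >= FIXED_LEVEL:
        defragment()

def reactive_maintenance(lace_level, defragment):
    # Disk B: defragment only after slowdown symptoms appear (the RED zone);
    # the study found this approach to be significantly less effective.
    if lace_level >= RED_ZONE:
        defragment()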

The patent

We applied for patents on both the point after which maintenance is futile and the method for achieving the best results. One application was granted, and another is still pending. Maintenance software developers: please contact us in order to implement our method.

What is it good for?

Grouping physical disks into clusters and referring to them as a single giant virtual disk has created a new problem. The internal disorder of the virtual disk increases, but maintaining this virtual disk with traditional defragmentation is out of the question: it is too big.
Measuring the disorder level of the virtual disk is our exclusive patent. It first gives you an idea of whether you have a problem at all.
Using our defragmentation method without stopping the service is also covered by the patent. The concept is simple: we find the most fragmented file whose move to a new place will not increase the overall fragmentation of the disk. (Just remember that blindly moving the most fragmented file would leave behind many more empty gaps and increase the overall fragmentation.)
Repeating the process eventually completes the work, and there is no minimum free space required to start it.
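
For readers who want the flavor of that loop, here is a hedged Python sketch under a simplified block-map model: the disk is a list whose entries name the file occupying each block (None for free), overall fragmentation is counted as the total number of contiguous runs (file pieces plus free gaps), and all helper names are assumptions made for illustration. The patented method itself is not reproduced here.

from itertools import groupby

def runs(disk):
    # Contiguous runs on the disk as (owner, start, length); owner None = free.
    out, pos = [], 0
    for owner, grp in groupby(disk):
        length = len(list(grp))
        out.append((owner, pos, length))
        pos += length
    return out

def fragments(disk, name):
    # Number of separate pieces the file is split into.
    return sum(1 for owner, _, _ in runs(disk) if owner == name)

def overall_fragmentation(disk):
    # Total number of runs, counting both file pieces and free gaps.
    return len(runs(disk))

def relocate(disk, name, start):
    # Return a copy of the disk with the file rewritten contiguously at start.
    size = disk.count(name)
    new = [None if block == name else block for block in disk]
    new[start:start + size] = [name] * size
    return new

def defragment(disk):
    # Repeatedly move the most fragmented file that can be relocated without
    # increasing the overall fragmentation, until no such move remains.
    changed = True
    while changed:
        changed = False
        before = overall_fragmentation(disk)
        names = sorted({b for b in disk if b is not None},
                       key=lambda n: fragments(disk, n), reverse=True)
        for name in names:
            if fragments(disk, name) <= 1:
                break  # every remaining file is already contiguous
            size = disk.count(name)
            for owner, start, length in runs(disk):
                if owner is None and length >= size:
                    candidate = relocate(disk, name, start)
                    if overall_fragmentation(candidate) <= before:
                        disk, changed = candidate, True
                        break
            if changed:
                break
    return disk

The acceptance check captures the caveat above: moving a scattered file leaves gaps behind where its pieces used to be, so a relocation is only kept when the layout as a whole does not become more fragmented.
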
Did we come too early with a solution, before others understood the problem?

Publications

SYSTOR 2010, International Storage Research Conference, hosted by IBM Labs

CET Consumer Electronics Times