Scalability and performance

Most dedup solutions only work on a limited amount of data -- a handful of terabytes -- because they require their dedup tables to be resident in memory.

ZFS places no restrictions on your ability to dedup. You can dedup a petabyte if you're so inclined. The performance of ZFS dedup will follow the obvious trajectory: it will be fastest when the DDTs (dedup tables) fit in memory, a little slower when they spill over into the L2ARC, and much slower when they have to be read from disk. The topic of dedup performance could easily fill many blog entries -- and it will over time -- but the point I want to emphasize here is that there are no limits in ZFS dedup. ZFS dedup scales to any capacity on any platform, even a laptop; it just goes faster as you give it more hardware.
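For reference, dedup is a per-dataset property, and the DDT can be inspected with zdb. A minimal sketch (the pool name "tank" and dataset "tank/data" are placeholders):

```shell
# Enable dedup on a dataset -- new writes to it will be deduplicated
zfs set dedup=on tank/data

# Print a summary of the pool's dedup table; -DD adds a histogram
# of entries, useful for gauging how much RAM the DDT occupies
zdb -D tank
zdb -DD tank
```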

http://hub.opensolaris.org/bin/view/Com ... +zfs/dedup

B. Additional memory considerations from Roch's excellent blog:

20 TB of unique data stored in 128K records or more than 1TB of unique data in 8K records would require about 32 GB of physical memory. If you need to store more unique data than what these ratios provide, strongly consider allocating some large read optimized SSD to hold the deduplication table (DDT). The DDT lookups are small random I/Os that are well handled by current generation SSDs.
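The ratios above imply roughly 200 bytes of memory per DDT entry (one entry per unique block): 20 TB / 128K gives about 168 million entries, and 32 GB divided by that is about 200 bytes. A small sketch of that arithmetic, where the 200-byte entry size is an assumption inferred from the quoted ratios, not an official figure:

```python
# Rough DDT memory estimator based on the ratios quoted above.
# ~200 bytes per in-core DDT entry is an assumption inferred from
# "20 TB in 128K records -> ~32 GB"; the real size varies by release.

DDT_ENTRY_BYTES = 200  # assumed in-core size of one dedup-table entry

def ddt_memory_gib(unique_data_bytes, recordsize_bytes,
                   entry_bytes=DDT_ENTRY_BYTES):
    """Estimate RAM needed to keep the whole DDT in memory."""
    entries = unique_data_bytes // recordsize_bytes  # one entry per unique block
    return entries * entry_bytes / 2**30

# 20 TB of unique data in 128K records -> about 31 GiB, matching the quote
print(ddt_memory_gib(20 * 2**40, 128 * 1024))

# 1 TB of unique data in 8K records -> about 25 GiB, also in the same range
print(ddt_memory_gib(1 * 2**40, 8 * 1024))
```

If the estimate exceeds available RAM, that is the point at which Roch's advice to put the DDT on a read-optimized SSD (as L2ARC) applies.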
