
Ceph Storage 1.3 Storage Strategies
Creating storage strategies for Ceph clusters.
Customer Content Services

Legal Notice

Copyright © 2015 Red Hat, Inc.

The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version. Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.

Java ® is a registered trademark of Oracle and/or its affiliates.

XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js ® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack ® Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.

Abstract

This document provides instructions for creating storage strategies, including creating CRUSH hierarchies, estimating the number of placement groups, determining which type of storage pool to create, and managing pools.

Table of Contents

PREFACE
CHAPTER 1. WHAT ARE STORAGE STRATEGIES?
CHAPTER 2. CONFIGURING STORAGE STRATEGIES
PART I. CRUSH ADMINISTRATION
CHAPTER 3. INTRODUCTION TO CRUSH
    3.1. DYNAMIC DATA PLACEMENT
    3.2. FAILURE DOMAINS
    3.3. PERFORMANCE DOMAINS
CHAPTER 4. CRUSH HIERARCHIES
    4.1. CRUSH LOCATION
    4.2. ADD A BUCKET
    4.3. MOVE A BUCKET
    4.4. REMOVE A BUCKET
    4.5. BUCKET ALGORITHMS (ADVANCED)
CHAPTER 5. CEPH OSDS IN CRUSH
    5.1. ADDING AN OSD TO CRUSH
    5.2. MOVING AN OSD WITHIN A CRUSH HIERARCHY
    5.3. REMOVE AN OSD FROM A CRUSH HIERARCHY
CHAPTER 6. CRUSH WEIGHTS
CHAPTER 7. PRIMARY AFFINITY
CHAPTER 8. CRUSH RULES
    8.1. LIST RULES
    8.2. DUMP A RULE
    8.3. ADD A SIMPLE RULE
    8.4. ADD AN ERASURE CODE RULE
    8.5. REMOVE A RULE
CHAPTER 9. CRUSH TUNABLES
    9.1. THE EVOLUTION OF CRUSH TUNABLES
    9.2. TUNING CRUSH
    9.3. TUNING CRUSH, THE HARD WAY
    9.4. LEGACY VALUES
CHAPTER 10. EDITING A CRUSH MAP
    10.1. GET A CRUSH MAP
    10.2. DECOMPILE A CRUSH MAP
    10.3. COMPILE A CRUSH MAP
    10.4. SET A CRUSH MAP
    10.5. CRUSH STORAGE STRATEGY EXAMPLES
PART II. PLACEMENT GROUPS (PGS)
CHAPTER 11. ABOUT PLACEMENT GROUPS
CHAPTER 12. PLACEMENT GROUP TRADEOFFS
    12.1. DATA DURABILITY
    12.2. DATA DISTRIBUTION
    12.3. RESOURCE USAGE
CHAPTER 13. PG COUNT
    13.1. CONFIGURING DEFAULT PG COUNTS
    13.2. PG COUNT FOR SMALL CLUSTERS
    13.3. CALCULATING PG COUNT
    13.4. MAXIMUM PG COUNT
CHAPTER 14. PG COMMAND LINE REFERENCE
    14.1. SET THE NUMBER OF PGS
    14.2. GET THE NUMBER OF PGS
    14.3. GET A CLUSTER'S PG STATISTICS
    14.4. GET STATISTICS FOR STUCK PGS
    14.5. GET A PG MAP
    14.6. GET A PG'S STATISTICS
    14.7. SCRUB A PLACEMENT GROUP
    14.8. REVERT LOST
PART III. POOLS
CHAPTER 15. POOLS AND STORAGE STRATEGIES
CHAPTER 16. LIST POOLS
CHAPTER 17. CREATE A POOL
CHAPTER 18. SET POOL QUOTAS
CHAPTER 19. DELETE A POOL
CHAPTER 20. RENAME A POOL
CHAPTER 21. SHOW POOL STATISTICS
CHAPTER 22. MAKE A SNAPSHOT OF A POOL
CHAPTER 23. REMOVE A SNAPSHOT OF A POOL
CHAPTER 24. SET POOL VALUES
CHAPTER 25. GET POOL VALUES
CHAPTER 26. SET THE NUMBER OF OBJECT REPLICAS
CHAPTER 27. GET THE NUMBER OF OBJECT REPLICAS
PART IV. ERASURE CODE POOLS
CHAPTER 28. CREATING A SAMPLE ERASURE CODED POOL
CHAPTER 29. ERASURE CODE PROFILES
CHAPTER 30. ERASURE-CODED POOLS AND CACHE TIERING
CHAPTER 31. ERASURE CODE PROFILES
    31.1. OSD ERASURE-CODE-PROFILE SET
    31.2. OSD ERASURE-CODE-PROFILE RM
    31.3. OSD ERASURE-CODE-PROFILE GET
    31.4. OSD ERASURE-CODE-PROFILE LS
CHAPTER 32. ERASURE CODE PLUGINS (ADVANCED)
    32.1. JERASURE ERASURE CODE PLUGIN
    32.2. LOCALLY REPAIRABLE ERASURE CODE (LRC) PLUGIN
    32.3. CONTROLLING CRUSH PLACEMENT
CHAPTER 33. ISA ERASURE CODE PLUGIN
CHAPTER 34. PYRAMID ERASURE CODE
PART V. CACHE TIERING
CHAPTER 35. SETTING UP POOLS
    35.1. SETTING UP A BACKING STORAGE POOL
    35.2. SETTING UP A CACHE POOL
    35.3. CREATING A CACHE TIER
    35.4. CONFIGURING A CACHE TIER
    35.5. TARGET SIZE AND TYPE
    35.6. CACHE SIZING
    35.7. CACHE AGE
CHAPTER 36. REMOVING A CACHE TIER
    36.1. REMOVING A READ-ONLY CACHE
    36.2. REMOVING A WRITEBACK CACHE

PREFACE

From the perspective of a Ceph client, interacting with the Ceph storage cluster is remarkably simple:

1. Connect to the Cluster
2. Create a Pool I/O Context

This remarkably simple interface is how a Ceph client selects one of the storage strategies you define. Storage strategies are invisible to the Ceph client in all but storage capacity and performance.

CHAPTER 1. WHAT ARE STORAGE STRATEGIES?

A storage strategy is a method of storing data that serves a particular use case. For example, if you need to store volumes and images
for a cloud platform like OpenStack, you might choose to store data on reasonably performant SAS drives with SSD-based journals. By contrast, if you need to store object data for an S3- or Swift-compliant gateway, you might choose to use something more economical, like SATA drives. Ceph can accommodate both scenarios in the same Ceph cluster, but you need a means of providing the SAS/SSD storage strategy to the cloud platform (e.g., Glance and Cinder in OpenStack), and a means of providing SATA storage for your object store.

Storage strategies include the storage media (hard drives, SSDs, etc.), the CRUSH maps that set up performance and failure domains for the storage media, the number of placement groups, and the pool interface. Ceph supports multiple storage strategies. Use cases, cost/benefit performance tradeoffs and data durability are the primary considerations that drive storage strategies.

1. Use Cases: Ceph provides massive storage capacity, and it supports numerous use cases. For example, the Ceph Block Device is a leading storage backend for cloud platforms like OpenStack, providing limitless storage for volumes and images with high performance features like copy-on-write cloning. By contrast, the Ceph Object Gateway is a leading storage backend for cloud platforms that provides RESTful S3-compliant and Swift-compliant object storage for objects like audio, bitmap, and other data.

2. Cost/Benefit of Performance: Faster is better. Bigger is better. High durability is better. However, there is a price for each superlative quality, and a corresponding cost/benefit trade off. Consider the following use cases from a performance perspective: SSDs can provide very fast storage for relatively small amounts of data, cache tiers, and journaling. Storing a database or object index may benefit from a pool of very fast SSDs, but prove too expensive for other data. SAS drives with SSD journaling provide fast performance at an economical price for volumes and images. SATA drives without SSD journaling provide cheap storage with lower overall performance. When you create a CRUSH hierarchy of OSDs, you need to consider the use case and an acceptable cost/performance trade off.

3. Durability: In large scale clusters, hardware failure is an expectation, not an exception. However, data loss and service interruption remain unacceptable. For this reason, data durability is very important. Ceph addresses data durability with multiple deep copies of an object or with erasure coding and multiple coding chunks. Multiple copies or multiple coding chunks present an additional cost/benefit tradeoff: it's cheaper to store fewer copies or coding chunks, but it may lead to the inability to service write requests in a degraded state. Generally, one object with two additional copies (i.e., size = 3) or two coding chunks may allow a cluster to service writes in a degraded state while the cluster recovers. The CRUSH algorithm aids this process by ensuring that Ceph stores additional copies or coding chunks in different locations within the cluster. This ensures that the failure of a single storage device or node doesn't lead to a loss of all of the copies or coding chunks necessary to preclude data loss.

You can capture use cases, cost/benefit performance tradeoffs and data durability in a storage strategy and present it to a Ceph client as a storage pool.

Important

Ceph's object copies or coding chunks make RAID obsolete. Do not use RAID, because Ceph already handles data durability, a degraded RAID has a negative impact on performance, and recovering data using RAID is substantially slower than using deep copies or erasure coding chunks.
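To make the durability tradeoff above concrete, the raw-storage cost of the two schemes can be compared with a small sketch. The size = 3 value comes from the text above; the erasure-code parameters k = 8, m = 2 are an illustrative assumption, not a recommendation from this guide:

```python
# Back-of-the-envelope comparison of replication vs. erasure coding.
# The pool size and k/m values below are illustrative choices only.

def replicated_raw_per_byte(size):
    """Raw storage consumed per byte of user data with `size` total copies."""
    return float(size)

def erasure_raw_per_byte(k, m):
    """Raw storage per byte of user data with k data chunks and m coding chunks."""
    return (k + m) / k

def failures_tolerated(scheme, size=None, m=None):
    """How many simultaneous OSD losses still leave the data recoverable."""
    return (size - 1) if scheme == "replicated" else m

# A replicated pool with size = 3 (one object, two additional copies):
# 3.0x raw storage, and the data survives the loss of 2 copies.
print(replicated_raw_per_byte(3), failures_tolerated("replicated", size=3))

# An erasure-coded pool with k = 8, m = 2 (two coding chunks):
# only 1.25x raw storage, and it also survives the loss of 2 chunks,
# at the cost of extra computation and wider placement.
print(erasure_raw_per_byte(8, 2), failures_tolerated("erasure", m=2))
```

Both configurations tolerate two simultaneous failures; the difference the text describes is the cost side of the tradeoff, which the ratios above make explicit.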

CHAPTER 2. CONFIGURING STORAGE STRATEGIES

Configuring storage strategies is about assigning Ceph OSDs to a CRUSH hierarchy, defining the number of placement groups for a pool, and creating a pool. The general steps are:

1. Define a Storage Strategy: Storage strategies require you to analyze your use case, cost/benefit performance tradeoffs and data durability. Then, you create OSDs suitable for that use case. For example, you can create SSD-backed OSDs for a high performance pool or cache tier; SAS drive/SSD journal-backed OSDs for high-performance block device volumes and images; or, SATA-backed OSDs for low cost storage. Ideally, each OSD for a use case should have the same hardware configuration so that you have a consistent performance profile.

2. Define a CRUSH Hierarchy: Ceph rules select a node (usually the root) in a CRUSH hierarchy, and identify the appropriate OSDs for storing placement groups and the objects they contain. You must create a CRUSH hierarchy and a CRUSH rule for your storage strategy (cache tiers require one for the hot tier and one for the cold or backing tier). CRUSH hierarchies get assigned directly to a pool by the CRUSH ruleset setting; however, cache tiers are assigned to the backing tier, and the backing tier ruleset gets assigned to the pool.

3. Calculate Placement Groups: Ceph shards a pool into placement groups. You need to set an appropriate number of placement groups for your pool, and remain within a healthy maximum number of placement groups in the event that you assign multiple pools to the same CRUSH ruleset.

4. Create a Pool: Finally, you must create a pool and determine whether it uses replicated or erasure-coded storage and whether it supports a cache tier. You must set the number of placement groups for the pool, the ruleset for the pool and the durability (size or K+M coding chunks).

Remember, the pool is the Ceph client's interface to the storage cluster, but the storage strategy is completely transparent to the Ceph client (except for capacity and performance).
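Assuming the Ceph command-line interface of this release, the general steps above might be sketched as follows for a hypothetical replicated pool. The bucket, rule, and pool names, the placement group count, and the ruleset id are all illustrative, not values prescribed by this guide:

```
# 2. Define a CRUSH hierarchy and a rule that selects OSDs from it.
ceph osd crush add-bucket fast-root root
ceph osd crush move node1 root=fast-root
ceph osd crush rule create-simple fast-rule fast-root host

# 3./4. Create the pool with a calculated placement group count, then
#       assign the ruleset (id shown by 'ceph osd crush rule dump')
#       and the durability.
ceph osd pool create fast-pool 512 512 replicated
ceph osd pool set fast-pool crush_ruleset 1
ceph osd pool set fast-pool size 3
```

Each of these commands is covered in detail in the CRUSH, placement group, and pool chapters that follow.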

PART I. CRUSH ADMINISTRATION

"Any sufficiently advanced technology is indistinguishable from magic."
- Arthur C. Clarke

CHAPTER 3. INTRODUCTION TO CRUSH

The CRUSH (Controlled Replication Under Scalable Hashing) algorithm determines how to store and retrieve data by computing data storage locations. The CRUSH map for your storage cluster describes your device locations within CRUSH hierarchies and a ruleset for each hierarchy that determines how Ceph will store data.

The CRUSH map contains at least one hierarchy of nodes and leaves. The nodes of a hierarchy, called "buckets" in Ceph, are any aggregation of storage locations (e.g., rows, racks, chassis, hosts, etc.) as defined by their type. Each leaf of the hierarchy consists essentially of one of the storage devices in the list of storage devices (note: storage devices or OSDs are added to the CRUSH map when you add an OSD to the cluster). A leaf is always contained in one node or "bucket." A CRUSH map also has a list of rules that tell CRUSH how it should store and retrieve data.

The CRUSH algorithm distributes data objects among storage devices according to a per-device weight value, approximating a uniform probability distribution. CRUSH distributes objects and their replicas (or coding chunks) according to the hierarchical cluster map you define. Your CRUSH map represents the available storage devices and the logical buckets that contain them.
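For illustration, a minimal decompiled CRUSH map showing one such hierarchy (a host bucket under a root bucket) and one rule might look like the following; the host name, ids, and weights are hypothetical:

```
# a leaf-holding bucket of type "host"
host node1 {
    id -2
    alg straw
    hash 0  # rjenkins1
    item osd.0 weight 1.000
    item osd.1 weight 1.000
}

# the root bucket aggregating the hosts
root default {
    id -1
    alg straw
    hash 0  # rjenkins1
    item node1 weight 2.000
}

# a rule telling CRUSH how to place replicas in this hierarchy
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
```

Getting, decompiling, editing, and setting the CRUSH map are covered in Chapter 10.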
