Published on March 11, 2014
Wicked Easy Ceph Block Storage & OpenStack Deployment with Crowbar
Michael Holzerland: firstname.lastname@example.org
Paul Brook: Paul_brook@dell.com, Twitter @paulbrookatdell
Agenda:
• Introduction: Inktank & Dell
• Dell Crowbar: automation, scale
• Best practices for a Ceph cluster
• Best practices for networking
• Whitepapers
• Crowbar demo
Dell is a certified reseller of Inktank Services, Support and Training.
• Need to access and buy Inktank services & support?
• Inktank 1-year subscription packages
  – Inktank Pre-Production subscription
  – Gold (24x7) subscription
• Inktank Professional Services
  – Ceph Pro Services Starter Pack
  – Additional days of services as options
• Ceph Training from Inktank
  – Inktank Ceph100 Fundamentals Training
  – Inktank Ceph110 Operations and Tuning Training
  – Inktank Ceph120 Ceph and OpenStack Training
Dell OpenStack Cloud Solution
(Diagram: the solution combines hardware, software – including the "Crowbar" CloudOps software – and operations, backed by services & consulting and a reference architecture.)
Components Involved http://docs.openstack.org/trunk/openstack-compute/admin/content/conceptual-architecture.html
Data Center Solutions Crowbar
(Diagram: Dell "Crowbar" ops management spans the whole stack – APIs, user access & ecosystem partners; cloud infrastructure; core components & operating systems; physical resources – together with complementary products.)
(Diagram: the OpenStack services deployed as Crowbar barclamps – Nova (controller, scheduler, compute nodes), Quantum, Cinder with block devices (SAN/NAS/DAS), Swift (proxy and store nodes, minimum 3), Glance, Keystone, the Dashboard UI, the database and RabbitMQ, each exposed through its API. Barclamps: automated and simple installation.)
Crowbar landing page • http://crowbar.github.io/
Object Storage Daemons (OSD)
• Allocate sufficient CPU cycles and memory per OSD
  – 2 GB of memory and 1 GHz of AMD or Xeon CPU cycles per OSD
  – Hyper-Threading can be used on Xeon Sandy Bridge and up
• Use SSDs as dedicated journal devices to improve random latency
  – Some workloads benefit from separate journal devices on SSDs
  – Rule of thumb: 6 OSDs per SSD journal
• No RAID controller
  – Just JBOD
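A minimal sizing sketch in Python that applies the rules of thumb above (2 GB RAM and 1 GHz per OSD, roughly one SSD journal per 6 OSDs). The function and the idea of scripting this are illustrative only, not part of the original slides; the 12-OSD example happens to match the 12-drive storage nodes described later in the deck.

```python
def osd_node_sizing(num_osds: int) -> dict:
    """Estimate resources for a node running `num_osds` OSD daemons."""
    ram_gb = num_osds * 2              # ~2 GB of RAM per OSD
    cpu_ghz = num_osds * 1.0           # ~1 GHz of Xeon/AMD CPU per OSD
    journal_ssds = -(-num_osds // 6)   # ceiling division: ~1 SSD journal per 6 OSDs
    return {"ram_gb": ram_gb, "cpu_ghz": cpu_ghz, "journal_ssds": journal_ssds}

print(osd_node_sizing(12))  # {'ram_gb': 24, 'cpu_ghz': 12.0, 'journal_ssds': 2}
```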
Ceph Cluster Monitors
• Best practice is to deploy the monitor role on dedicated hardware
  – Not resource intensive, but critical
  – Using separate hardware ensures no contention for resources
• Make sure monitor processes are never starved for resources
  – If running a monitor process on shared hardware, fence off resources
• Deploy an odd number of monitors (3 or 5)
  – An odd number of monitors is needed for quorum voting
  – Clusters of fewer than 200 nodes work well with 3 monitors
  – Larger clusters may benefit from 5
  – The main reason to go to 7 is redundancy across fault zones
• Add redundancy to monitor nodes as appropriate
  – Make sure the monitor nodes are distributed across fault zones
  – Consider refactoring fault zones if you need more than 7 monitors
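To make the "odd number of monitors" point concrete, here is a small Python sketch of the quorum arithmetic (not Ceph code): quorum requires a strict majority, so a fourth monitor adds no failure tolerance over three.

```python
def mon_failure_tolerance(num_mons: int) -> int:
    quorum = num_mons // 2 + 1   # a strict majority of monitors must agree
    return num_mons - quorum     # monitors that can fail while keeping quorum

for n in (3, 4, 5, 7):
    print(f"{n} monitors -> survives {mon_failure_tolerance(n)} failure(s)")
# 3 -> 1, 4 -> 1, 5 -> 2, 7 -> 3
```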
Potential Dell Server Hardware Choices
• Rackable storage node
  – Dell PowerEdge R720XD or R515
  – Intel Xeon E5-2603 v2 or AMD C32 platform
  – 32 GB RAM
  – 2x 400 GB SSDs (OS and optionally journals)
  – 12x 4 TB SATA drives
  – 2x 10GbE, 1x 1GbE, IPMI
• Bladed storage node
  – Dell PowerEdge C8000XD (disk sled) and PowerEdge C8220 (CPU sled)
  – 2x Xeon E5-2603 v2 CPUs, 32 GB RAM
  – 2x 400 GB SSDs (OS and optionally journals)
  – 12x 4 TB NL-SAS drives
  – 2x 10GbE, 1x 1GbE, IPMI
• Monitor node
  – Dell PowerEdge R415
  – 2x 1 TB SATA
  – 1x 10GbE
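A back-of-the-envelope capacity estimate for the rackable storage node above, sketched in Python. The 3x replication factor and the 3-node example are assumptions for illustration; they are not stated on the slide.

```python
DRIVES_PER_NODE = 12    # 12x 4 TB drives per storage node (from the slide)
DRIVE_TB = 4
REPLICATION = 3         # assumption: 3x replication, a common Ceph setting

def usable_capacity_tb(num_nodes: int) -> float:
    raw_tb = num_nodes * DRIVES_PER_NODE * DRIVE_TB
    return raw_tb / REPLICATION

print(usable_capacity_tb(3))   # 3 nodes: 144 TB raw -> 48.0 TB usable
```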
Configure Networking within the Rack
• Each pod (e.g., row of racks) contains two spine switches
• Each leaf switch is redundantly uplinked to each spine switch
• Spine switches are redundantly linked to each other with 2x 40GbE
• Each spine switch has three uplinks to other pods with 3x 40GbE
(Diagram: nodes in each rack connect over 10GbE to a high-speed top-of-rack leaf switch; leaf switches uplink over 40GbE to two high-speed end-of-row spine switches, which connect on to other rows/pods.)
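The aggregate bandwidth implied by those link counts, worked out in a short Python snippet (simple arithmetic, assuming every link runs at line rate):

```python
SPINES_PER_POD = 2      # two spine switches per pod (from the slide)
SPINE_INTERLINKS = 2    # spines cross-linked with 2x 40GbE
UPLINKS_PER_SPINE = 3   # each spine has 3x 40GbE uplinks to other pods
LINK_GBPS = 40

spine_interconnect_gbps = SPINE_INTERLINKS * LINK_GBPS            # 80 Gb/s between spines
inter_pod_gbps = SPINES_PER_POD * UPLINKS_PER_SPINE * LINK_GBPS   # 240 Gb/s to other pods
print(spine_interconnect_gbps, inter_pod_gbps)
```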
Networking Overview
• Plan for low latency and high bandwidth
• Use 10GbE switches within the rack
• Use 40GbE uplinks between racks
• One option: Dell Force10 S4810 switches with port aggregation, and Force10 S6000 at the aggregation level with 40GbE
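A rough leaf-switch oversubscription check for this design, sketched in Python. The nodes-per-rack figure and the number of uplinks per leaf are assumptions for illustration, not figures from the slides; adjust them to the actual rack layout.

```python
NODES_PER_RACK = 16    # assumption: typical rack fill
LINKS_PER_NODE = 2     # 2x 10GbE per node (matches the hardware slide)
NODE_LINK_GBPS = 10
UPLINKS_PER_LEAF = 2   # one 40GbE uplink to each of the two spines
UPLINK_GBPS = 40

downlink = NODES_PER_RACK * LINKS_PER_NODE * NODE_LINK_GBPS   # 320 Gb/s toward the nodes
uplink = UPLINKS_PER_LEAF * UPLINK_GBPS                       # 80 Gb/s toward the spines
print(f"oversubscription {downlink / uplink:.0f}:1")          # 4:1
```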
Questions? - Demo -