If your network includes an older server with a large number of hard drives, you can put it to use with storage appliance software such as Openfiler, FreeNAS, and similar products. Over time we will cover several of these storage platforms, but we will start with Openfiler.
The installation procedure is shown below, step by step:
Tutorial: installing the Openfiler storage appliance
Step one
Step two
Step three
Step four
Step five
The default username is openfiler and the password is the word password. Next, the network cards need IP addresses, so at the command prompt shown in step four enter the following:
ifconfig eth0 x.x.x.x netmask x.x.x.x up
where netmask is the subnet mask.
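As a concrete sketch, assuming the appliance should sit at 192.168.1.50 on a /24 network (both addresses below are placeholders for values from your own environment), the commands would look roughly like this:

```sh
# Placeholder addresses; substitute values from your own network
ifconfig eth0 192.168.1.50 netmask 255.255.255.0 up
route add default gw 192.168.1.1
# The Openfiler web GUI should then answer at https://192.168.1.50:446
```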
Openfiler's configuration will be covered in the next article. Stay with us.
The concept of object-level storage was introduced in 1996 by Carnegie Mellon University's Parallel Data Laboratory. Another idea put forward at the same time was that data could be read and written flexibly, as objects.
One of the key ideas behind this class of storage is separating the management layer and the data's descriptive metadata from the data itself, which makes the data far easier to manage. Object storage does not interact with the operating system (OS) directly; the OS talks to the storage through an API. That API hides LUN mapping and all of the networked-storage topologies from us, effectively becoming our single interface to the various network storage topologies and giving us better control over these resources. It also significantly shrinks the attack surface of the storage network, because the only layer that uses the HTTP/HTTPS protocols is the management API layer.
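As an illustration of that API-driven access model, the sketch below reads and writes an object over HTTPS using the widely adopted S3-style interface via the AWS CLI; the endpoint, bucket, and object key are placeholders for illustration, not details from this article:

```sh
# Upload a file as an object (flat namespace: the "path" is simply part of the key)
aws s3api put-object --endpoint-url https://storage.example.com \
    --bucket demo-bucket --key backups/2020/db-dump.sql --body ./db-dump.sql

# Retrieve the same object later
aws s3api get-object --endpoint-url https://storage.example.com \
    --bucket demo-bucket --key backups/2020/db-dump.sql ./restored-db-dump.sql
```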
Another advantage of this kind of storage is that the namespace is completely flat; there is no hierarchical, tree-like structure, which lets these systems scale out much further. This property is known as scalability.
Another property, known as durability, means that every piece of data is copied three times and the copies are spread across different systems. More precisely, each object is broken into smaller pieces that are distributed among multiple systems; systems built this way are called distributed systems. Replication therefore happens automatically in this kind of storage.
Because many operating systems, by constantly making extra changes to the files on a storage system (locking down access levels and so on), can end up damaging the hard disks, the API-based management layer can prevent that kind of damage and save the organization money.
One point to keep in mind: object storage is not a good fit for relational databases.
What is ZFS?
The ZFS file system is a modern file system that represents a fundamental rethinking of how storage file systems work, offering features and benefits found in no other file system available today. ZFS is powerful, scalable, and very simple to administer. This article on the ZFS file system and its unique capabilities tries to present all of its advantages at a glance.
The ZFS storage pool concept
ZFS uses the concept of a storage pool to manage physical storage. Originally, file systems were built on top of a single physical device; support for multiple devices was later added to provide data redundancy. The concept of a volume manager was then introduced to control several devices at once, removing the need to manage each device separately and presenting multiple devices as a single unit through one management interface.
ZFS eliminates volume management altogether. Instead of forcing you to create virtualized volumes, ZFS aggregates devices into a storage pool. The storage pool describes the physical characteristics of the storage (device layout, data redundancy, and so on) and is handed to the ZFS file systems as a single unit. File systems are no longer constrained to individual devices: every file system in the pool shares the pool's disk space. You no longer need to predetermine the size of a file system, because file systems grow automatically within the disk space allocated to the storage pool. When new disks are added, all file systems within the pool can immediately use the additional space without any extra work. In many ways the storage pool behaves much like memory in an operating system: when a RAM module is added to a computer, the system handles the addition automatically and the extra memory is simply available, with no manual steps.
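A minimal sketch of that pooling model using the standard ZFS command set (pool and device names are examples only): a mirrored pool is created once, and every file system carved from it shares the pool's free space automatically.

```sh
# Create a mirrored pool from two disks, then two file systems inside it
zpool create tank mirror /dev/sdb /dev/sdc
zfs create tank/home
zfs create tank/projects

# No sizes were specified: both file systems draw on the same pool space
zpool list tank
zfs list -r tank

# Growing the pool later makes the new space available to every file system at once
zpool add tank mirror /dev/sdd /dev/sde
```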
Transactional file system behavior
ZFS is a transactional file system, which means the state of the file system is always consistent on disk. Traditional file systems overwrite data in place, so if the machine loses power between, say, the moment a data block is allocated and the moment it is linked into a directory, the file system is left in an inconsistent state. Historically this problem was addressed with the fsck command, which reviews and verifies the file system state and tries to repair any inconsistencies along the way. This problem of inconsistent file systems has caused enormous pain, and fsck never guaranteed that it could fix every possible problem.
With a transactional file system, data is managed using copy-on-write semantics. Data is never overwritten in place, and any sequence of operations is either committed in full or ignored in full. As a result, the ZFS file system can never be corrupted by a power loss or a system crash. Although the most recently written data might be lost, the file system itself always stays consistent, so the data on it remains intact.
Data integrity verification
With ZFS, all data and metadata are verified using a checksum algorithm that the user can select. Traditional file systems that offer checksum verification do so on a per-block basis, a consequence of the volume-management layer and of traditional file system design. That traditional design means certain failures, such as writing a complete block to the wrong location, can produce data that is incorrect yet shows no checksum errors. ZFS checksums are stored in a way that allows these failures to be detected and recovered from gracefully. All checksumming and data recovery happen in the ZFS file system layer and are completely transparent to applications.
Beyond that, ZFS provides self-healing data. ZFS supports storage pools with various levels of data redundancy. When a bad data block is detected, ZFS fetches the correct data from another redundant copy, repairs the bad block, and replaces it with the correct data.
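To exercise that verification and self-healing in practice, ZFS provides a scrub operation that walks every block in the pool, checks it against its checksum, and repairs from a redundant copy where one exists (the pool name below is an example):

```sh
# Verify every block in the pool against its checksum
zpool scrub tank

# Report read/write/checksum errors found and repaired during the scrub
zpool status -v tank
```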
Unparalleled scalability
Scalability is a core element of the ZFS design. The file system itself is 128-bit, allowing for 256 quadrillion zettabytes of storage. All metadata is allocated dynamically, so there is no need to pre-allocate structures or size the file system up front, and all algorithms were written with scalability in mind. Directories can hold an essentially unlimited number of entries, and there is no limit on the number of file systems or on the number of files a file system can contain.
ZFS Snapshot
A snapshot is a read-only copy of a file system or volume. Snapshots can be created quickly and easily. Initially, a snapshot consumes no additional disk space within the pool. As the data in the active dataset changes, the snapshot consumes disk space by continuing to reference the old data; as a result, the snapshot prevents that data from being freed back to the pool.
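A short sketch of the snapshot workflow with the standard zfs command (the dataset and snapshot names are examples):

```sh
# Take an instantaneous, read-only snapshot of a file system
zfs snapshot tank/home@before-upgrade

# List snapshots and the space they hold onto as the live data diverges
zfs list -t snapshot

# Roll the file system back to the snapshot if the change goes wrong
zfs rollback tank/home@before-upgrade
```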
Simplified administration
Most importantly, ZFS offers a greatly simplified administration model. Through a hierarchical file system layout, property inheritance, and automatic management of NFS shares, ZFS makes it easy to create and manage file systems without juggling multiple commands or editing configuration files. You can enable or disable compression and adjust settings with ease, and a snapshot can be taken at any moment.
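The sketch below shows the property-based administration described above; a setting is applied once and inherited down the file system hierarchy (the pool, dataset, and property values are examples):

```sh
# Enable compression for the whole pool; child file systems inherit it
zfs set compression=lz4 tank

# Share one file system over NFS with a single property, no exports file to edit
zfs set sharenfs=on tank/projects

# Review effective values and where each was inherited from
zfs get -r compression,sharenfs tank
```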
Thank you for reading this article on the ZFS file system and its unique capabilities; if you found it useful, please share it with your friends.
EMC VNX storage arrays entered the market in 2011, offering solid performance and a good level of data protection. One of the most important VNX features is the powerful software that protects the array against data loss. This software guards the system both locally and remotely against unforeseen problems, improves overall efficiency, and can fully determine and manage where data is placed across high-speed, high-capacity, and SSD drives. On the other side, the first HP 3PAR array was built in 2002, and the company is regarded as the pioneer and leader in thin provisioning: the technology was first developed in 2002 and delivered to customers in 2003. In 2007 the company opened a research and development site in Northern Ireland, which led to the introduction of the Virtual Domains feature that same year. Virtual Domains improves the security of stored data for companies that lease out storage capacity, and today its most important use is in cloud environments.
As noted in the body of the article, 3PAR uses a dedicated ASIC for these operations, while on VNX arrays they are handled in software, which places extra load on the main processor.
RAID optimization
On 3PAR, each hard disk is divided into pieces called chunklets and data is written across those chunklets, so if a failure occurs the rebuild completes much faster. EMC, by contrast, still relies on pools and RAID groups.
System performance
On the EMC VNX 5700, published tests have so far reported no more than 75,000 IOPS, whereas the 3PAR 7000-series has reported 300,000 read/write operations per second.
Number of controllers
EMC still uses two controllers in its arrays, whereas the 3PAR 7000 series can use four. In addition, thanks to the Persistent Cache feature, 3PAR never has to fall back to write-through mode; the EMC array has no equivalent capability.
Management
Allocating space and managing an EMC array is quite difficult, whereas a 3PAR array can be configured simply and in very little time.
End-To-End Solution
EMC does not offer a complete solution covering compute and networking, while HP sells servers and network equipment alongside its storage arrays, which helps the data center stay integrated.
To better understand the differences between these two storage platforms, you can put the following questions to EMC and examine the answers carefully.
Why do snapshots and remote copies on an EMC array require space to be reserved up front?
Why must a dedicated spare drive be set aside on EMC, while on 3PAR the spare capacity is distributed across the drives?
If a controller in a VNX fails, what happens to the data and to cache performance?
What happens if an enclosure on a VNX array fails?
Finally, it is worth noting that, as described above, this array has many strengths; it also had some weaknesses, nearly all of which have since been addressed. For example, one of its early shortcomings was the lack of support for SAS hard drives, a limitation that has since been resolved.
HP introduces the world's most advanced all-flash storage array
HP has introduced a new series of storage arrays in its 3PAR StoreServ family.
Today HP made a series of announcements around its 3PAR StoreServ Storage family. These announcements include innovations that aid in transitioning IT from hybrid to all-flash data centers. HP announced a new 3PAR StoreServ Storage 8000 family, which it states is the industry’s most affordable all-flash array (AFA). It also announced a new 20800 AFA starter kit and software updates for its 3PAR StoreServ Storage line.
Flash storage is continuing to grow at an impressive rate. HP is making one of the biggest splashes in the flash market. Based on 2014 revenues, HP is the fastest growing flash vendor and is number two in market share for AFA. HP recently announced their high-density 3PAR StoreServ Storage 20000 family, which brought flash price as low as $1.50/GB. HP plans on continuing this growth with its new quad-controller HP 3PAR StoreServ 8000 Storage family and its new eight-controller HP 3PAR StoreServ 20800 AFA Starter Kit.
The new HP 3PAR StoreServ 8000 Storage family is being billed as the industry’s most affordable and automated AFA. The 8000 is a quad-node design that starts around $19,000 for the all-flash version and can reach 3.2 million IOPS. The 8000 can deliver up to 5.5PB in a single floor tile and also comes in a converged model that supports spinning discs. The 8000 shares the same hardware acceleration as the eight-node enterprise-flash 20000 family. It also features the HP 3PAR Gen5 Thin Express ASIC and twice the bandwidth of competing platforms, over 20GB/s of read bandwidth.
The new HP 3PAR StoreServ 20800 AFA Starter Kit starts at $99,000 putting this highly scalable AFA within the reach of even more enterprises and service providers. HP is also adding the 20450 to its 3PAR StoreServ Storage 20000 family. The 20450 can scale up to 6PB with 1.8 million IOPS.
Built-in storage federation capabilities allow 8000 and 20000 models to be pooled together for up to 60 PB of aggregate usable capacity in a four-system federation with non-disruptive workload mobility across systems with just a single mouse click. Both the 8000 and 20000 flash arrays are now certified for SAP HANA Tailored Data Center Integration (TDI).
Along with the new arrays, HP announced enhancements to the software for its entire HP 3PAR StoreServ Storage family, including SAN infrastructure. HP’s 3PAR Priority Optimization software has been updated to allow users to specify latency goals as low as 0.5ms. HP also enhanced data protection with StoreOnce Recovery Manager Central for VMware (RMC-V). RMC-V is said to deliver 17 times faster VM protection by taking application-consistent snapshots on the HP 3PAR StoreServ array, then automatically copying changed blocks directly to any HP StoreOnce appliance. RMC-V supports VMware vSphere 6.0 with VMware Virtual Volumes and more granular recovery of individual VMs and files to simplify data recovery.
HP also released HP SmartSAN for HP 3PAR StoreServ for customers that deploy flash over Fibre Channel. It orchestrates SAN fabric zoning autonomically, drastically reducing the steps required to provision a SAN. HP has also made enhancements to reduce iSCSI latency and added support for iSCSI VLAN tagging for service providers that leverage Ethernet networking.
Availability and pricing
All-flash HP 3PAR StoreServ 8000 Storage systems are available now starting from $19,500.
The 20800 All-Flash Starter Kit with 2 controllers, 8 x 480GB cMLC SSD drives, and 3 years of Proactive Care 24×7 support, will be available to order in September 2015 starting at $99,995.
The 20450 All-Flash Arrays are available now starting from $85,167.
HP 3PAR Priority Optimization is available worldwide as part of the HP 3PAR Data Optimization Suite starting at $1,210.
HP Smart SAN 1.0 is available now, licensed on a per-system basis, starting at $200 for HP 3PAR StoreServ 7000 and 8000 models.
RMC-V 1.2 will be available in October 2015 and is licensed per-array, starting at US $2,500.
In August of 2017, we posted our review of the NetApp A200 all flash array. We really enjoyed the performance and feature set; ultimately it earned one of only five Editor’s Choice Awards we gave out in 2017. It was with much excitement then that we obtained the next system from NetApp for review. The A300 was launched in the fall of 2016, and firmly targets the midrange storage customer. This isn’t entirely different than the A200’s target; the A300 just adds more performance and scalability oomph over its smaller cousin. The A300 of course runs the latest version of ONTAP and supports SSDs up to 30TB and is just as easy as the A200 to set up.
Architecturally the units are a little different. While the A200 chassis combines drives and controllers in one 2U package, the A300 has a dedicated set of controllers in a 3U chassis, and the drives are added as shelves (12Gb/s SAS). The A300 requires just 12 SSDs to start but scales to over 140PB raw (560PB effective) in a NAS config and 70PB raw (280PB effective) as SAN. NetApp supports 10GbE and 40GbE as well as Fibre Channel up to 32Gb, plus NVMe/FC with the 32Gb FC adapter.
Our unit under review is configured with one DS224C shelf loaded with 24 960GB SSDs. Primary connectivity is eight 32Gb FC ports, through 2 dual-port cards in each controller. The A300 was running ONTAP version 9.4 at the time of the review.
NetApp AFF A300 Specifications
Per HA Pair (active-active controller)
Form Factor
3U
Memory
256GB
NVRAM
16GB
Storage
Maximum SSD
384
Maximum Raw Capacity
11.7PB
Effective Capacity
46.9PB (base10)
SSDs Supported
30.2TB, 15.3TB, 7.6TB, 3.8TB, and 960GB; 3.8TB and 800GB self-encrypting
Host OS supported
Windows 2000, Windows Server 2003, Windows Server 2008, Windows Server 2012, Windows Server 2016, Linux, Oracle Solaris, AIX, HP-UX, Mac OS, VMware ESX
Ports
8x UTA2 (16Gb FC, 10GbE/FCoE), 4x 10GbE, 4x 10GbE Base-T, 8x 12Gb SAS, 4x slots for additional ports. Storage networking supported: NVMe/FC, FC, FCoE, iSCSI, NFS, pNFS, CIFS/SMB
OS version
ONTAP 9.1 RC2 or later
Max number of LUNs
4,096
Number of supported SAN hosts
512
Design and Build
The NetApp AFF A300 looks more or less like a slightly taller version of the A200. The bezel is silver and mainly designed for ventilation. NetApp branding is on the left side. Also on the left are the status LED lights. Across the front, we see the drive bays for inserting 2.5″ drives.
The rear of the device has redundant hot-swappable PSUs on either end, with hot-swappable fans as well. On the right side, next to the PSU, are four PCIe slots that allow for connections such as 40GbE and 32Gb FC, our model is loaded with four 32Gb FC cards. On the left, it is easy to see both controllers (one on top of the other). Here is where the SAS ports, as well as networking, and management ports are located.
Performance
For performance we will be comparing the A300 to the A200. Again, this isn't necessarily about which one performs better (the more powerful array, the A300, will win out); it is meant to show potential users what to expect given their performance and storage needs. In comparing the two NetApp models, we have full data reduction capabilities enabled, showing real-world performance. As we noted in our previous A200 review, NetApp's data reduction services have had a minimal impact on performance.
The configuration of our NetApp AFF A300 included eight 32Gb FC ports as well as one 24-bay disk shelf. The 24 960GB SSDs deployed in our A300 were split into two RAID-DP aggregates, with each SSD partitioned in half. While the drive count is the same as the previously reviewed A200, the A200 was completely topped out on CPU utilization. The A300 and the higher models in the NetApp portfolio are each geared for deployments requiring progressively more I/O and bandwidth.
The environment used to test the NetApp AFF A300 in our synthetic benchmarks consists of eight Dell EMC R740xd PowerEdge servers, each with a dual-port 16Gb FC HBA and a dual-switch FC fabric running on Brocade G620 switches.
Application Workload Analysis
The application workload benchmarks for the NetApp AFF A300 consist of the MySQL OLTP performance via SysBench and Microsoft SQL Server OLTP performance with a simulated TPC-C workload.
Testing was performed over FC using four 16Gb links, with two connections per controller.
SQL Server Performance
Each SQL Server VM is configured with two vDisks: 100GB volume for boot and a 500GB volume for the database and log files. From a system resource perspective, we configured each VM with 16 vCPUs, 64GB of DRAM and leveraged the LSI Logic SAS SCSI controller. While our Sysbench workloads tested previously saturated the platform in both storage I/O and capacity, the SQL test is looking for latency performance.
This test uses SQL Server 2014 running on Windows Server 2012 R2 guest VMs, and is stressed by Quest’s Benchmark Factory for Databases. While our traditional usage of this benchmark has been to test large 3,000-scale databases on local or shared storage, in this iteration we focus on spreading out four 1,500-scale databases evenly across the A300 (two VMs per controller).
SQL Server Testing Configuration (per VM)
Windows Server 2012 R2
Storage Footprint: 600GB allocated, 500GB used
SQL Server 2014
Database Size: 1,500 scale
Virtual Client Load: 15,000
RAM Buffer: 48GB
Test Length: 3 hours
2.5 hours preconditioning
30 minutes sample period
SQL Server OLTP Benchmark Factory LoadGen Equipment
Looking at transactional performance, the NetApp A300 had an aggregate score of 12,628.7 TPS, with individual VMs ranging from 3,155.751 TPS to 3,158.52 TPS. That is fairly similar to the A200, which had an aggregate score of 12,583.8 TPS, as both are running to a set limit. A better understanding of performance, and of the performance improvement, comes from latency.
For average latency, the A300 had an aggregate score of 8ms, much faster than the A200’s 25ms. Individual VMs ranged from 6ms to 10ms.
Sysbench Performance
Each Sysbench VM is configured with three vDisks, one for boot (~92GB), one with the pre-built database (~447GB), and the third for the database under test (270GB). From a system resource perspective, we configured each VM with 16 vCPUs, 60GB of DRAM and leveraged the LSI Logic SAS SCSI controller. Load gen systems are Dell R730 servers; we range from four to eight in this review, scaling servers per 4VM group.
Dell PowerEdge R730 Virtualized MySQL 4-5 node Cluster
8-10 Intel E5-2690 v3 CPUs for 249GHz in cluster (Two per node, 2.6GHz, 12-cores, 30MB Cache)
1-1.25TB RAM (256GB per node, 16GB x 16 DDR4, 128GB per CPU)
For Sysbench, we tested several sets of VMs including 8, 16, and 32, and we ran Sysbench with both the data reduction “On” and in the “Raw” form. For transactional performance, the NetApp A300 was able to hit 13,347 TPS for 8VM, 18,125 TPS for 16VM, and 22,313 TPS for 32VM, marking a 5,041 TPS and a 9,727 TPS improvement over the A200.
Sysbench average latency saw the A300 hit 19.18ms, 28.27ms, and 46.04ms for 8VM, 16VM, and 32VM, again a dramatic improvement over the A200.
For our worst-case scenario latency the A300 was able to hit just 42.97ms for 8VM, 68.82ms for 16VM, and 109.66ms for 32VM, a marked improvement over the A200’s 8VM and 16VM scores.
VDBench Workload Analysis
When it comes to benchmarking storage arrays, application testing is best, and synthetic testing comes in second place. While not a perfect representation of actual workloads, synthetic tests do help to baseline storage devices with a repeatability factor that makes it easy to do apples-to-apples comparison between competing solutions. These workloads offer a range of different testing profiles ranging from “four corners” tests, common database transfer size tests, as well as trace captures from different VDI environments. All of these tests leverage the common vdBench workload generator, with a scripting engine to automate and capture results over a large compute testing cluster. This allows us to repeat the same workloads across a wide range of storage devices, including flash arrays and individual storage devices. On the array side, we use our cluster of Dell PowerEdge R740xd servers:
Profiles:
4K Random Read: 100% Read, 128 threads, 0-120% iorate
4K Random Write: 100% Write, 64 threads, 0-120% iorate
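For reference, a vdBench run of the 4K random read corner above is driven by a small parameter file along these lines; the device path, duration, and output directory are placeholders rather than the exact profile used in this review:

```sh
# Hypothetical parameter file for the 4K random read profile
# (128 threads; the 0-120% iorate sweep is run as separate iterations)
cat > 4k_rand_read.vdb <<'EOF'
sd=sd1,lun=/dev/sdb,openflags=o_direct,threads=128
wd=wd1,sd=sd1,xfersize=4k,rdpct=100,seekpct=100
rd=run1,wd=wd1,iorate=max,elapsed=600,interval=5
EOF

# Launch vdBench against the parameter file and collect results
./vdbench -f 4k_rand_read.vdb -o output_4k_rand_read
```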
Starting with peak random 4K read performance, the A300 had a much stronger showing going to 450K IOPS before popping over 1ms and peaking at 635,342 IOPS with a latency of 6.4ms. Compared to the A200’s sub-millisecond latency up to about 195K IOPS and a peak score of about 249K IOPS with a latency of 14ms.
For peak 4K random write performance, the A300 made it to roughly 140K IOPS at sub-millisecond latency and went on to peak at 208,820 IOPS with a latency 9.72ms. This was a marked improvement over the A200 that had sub-millisecond latency performance until about 45K IOPS and a peak of roughly 85K IOPS at 19.6ms.
Switching over to sequential workloads, we look at peak 64K read performance. Here the A300 made it to roughly 80K IOPS or 5GB/s before breaking sub-millisecond latency. The A300 peaked at about 84,766 IOPS or 5.71GB/s with 3.64ms latency before dropping off a bit, compared to the A200's peak of 60K IOPS or 3.75GB/s with a latency of 8.5ms.
With 64K sequential write we saw another huge jump in performance between the two models. The A300 had sub-millisecond latency until about 31K IOPS or 1.91GB/s, versus the A200 at 6K or about 500MB/s. For peak performance we saw the A300 hit 48,883 IOPS or 3.1GB/s at a latency of 4.8ms versus the A200’s 19.7K IOPS or 1.22GB/s at a latency of 12.85ms.
Next up is our SQL workload benchmarks. The A300 made it over 430K IOPS before breaking 1ms in latency. At its peak, the A300 was able to hit 488,488 IOPS with a latency of 2.1ms, compared to the A200’s 179K IOPS and 5.7ms latency.
For SQL 90-10 the A300 made it around 330K IOPS with sub-millisecond latency and peaked at 416,370 IOPS with a latency of 2.46ms. This is over four times the performance of the A200 (90K IOPS) with less than half the latency (6.5ms).
The SQL 80-20 saw the A300 again make it to roughly 250K IOPS at less than 1ms before peaking at 360,642 IOPS with 2.82ms latency. This put it over 150K IOPS higher performance and half the latency of the A200.
Moving on to our Oracle workloads, we see the A300 hit about 240K IOPS with sub-millisecond latency and the array peaked at 340,391 IOPS with a latency of 3.6ms. Again this is leaps and bounds over the A200 model that peaked at 125K IOPS with a latency of 10.2ms.
With the Oracle 90-10 it was more of the same: the A300 had sub-millisecond latency until over 375K IOPS and peaked at 417,869 IOPS with a latency of 1.53ms. For perspective, the A200 broke 1ms at about 100K IOPS and peaked at 155K IOPS with a latency of 4.2ms.
For Oracle 80-20, we saw sub-millisecond latency until roughly 285K IOPS and a peak performance of 362,499 IOPS and a latency of 1.62ms. Again this showed over twice the performance and less than half the latency of the A200.
Next we switched over to our VDI Clone Test, Full and Linked. For VDI Full Clone Boot the A300 stayed under 1ms until about 225K IOPS and peaked at 300,128 IOPS with a latency of 3.46ms. This was a tremendous performance leap over the A200’s peak of 122K IOPS and latency of 8.6ms.
With the VDI Full Clone Initial Login, the A300 made it to 75K IOPS before going over 1ms and went on to peak at 123,984 IOPS with a latency of 7.26ms. The sub-millisecond latency performance of the A300 was better than the peak performance of the A200, 48K IOPS with a latency of 18.6ms.
VDI FC Monday Login showed another huge bump in performance with the A300 making it to roughly 80K IOPS under 1ms and peaking at 131,628 IOPS or 2.2GB/s with a latency of 3.89ms. This is compared to the A200’s peak performance of 49K IOPS with 10.4ms for latency.
Switching over to the VDI Linked Clone (LC) boot test, the A300 had sub-millisecond latency past 175K IOPS and peaked at 215,621 IOPS with a latency of 2.28ms. For comparison, the A200 peaked at 95K IOPS with a latency of 5.13ms.
In a large difference of performance, the VDI LC Initial Login had the A300 peak at 95,296 IOPS with a latency of 2.68ms versus the A200’s peak of 37K IOPS at 6.95ms.
Finally we look at VDI LC Monday Login, where the A300 had sub-millisecond latency up until 60K IOPS and peaked at 94,722 IOPS or 2.3GB/s with a latency of 5.4ms. The A200 had sub-millisecond latency until 17K IOPS and peaked at about 37K IOPS and 13.3ms latency.
Conclusion
NetApp released the impressive A200 all-flash array last year, and it earned one of our Editor's Choice awards. The release of the more powerful NetApp AFF A300 doesn't represent a replacement for the A200; it is a more powerful AFA for users that need additional capacity and performance. The A300 is a 3U form factor for the dual active-active controller setup, plus disk shelves. The A300 can pack quite a bit more capacity than its smaller cousin: 140PB raw (560PB effective) in NAS and 70PB raw (280PB effective) as SAN. The A300 supports networking up to 40GbE and 32Gb FC.
For application analysis we ran SQL Server and Sysbench on both the A200 and A300 with data reduction (DR) on. For transactional performance on SQL we saw the A300 hit an aggregate score of 12,628.7 TPS, an increase from the A200's 12,583.8 TPS. With SQL Server average latency we saw a bigger improvement, with the A300 posting an aggregate latency of 8ms compared to the A200's 25ms. With Sysbench we tested sets of 8, 16, and 32 VMs, with the A300 seeing TPS of 13,347, 18,125, and 22,313; average latency of 19.18ms, 28.27ms, and 46.04ms; and worst-case scenario latency of 42.97ms, 68.82ms, and 109.66ms, respectively.
For synthetic performance we tested the A300 with VDBench, positioned against the A200 as a reference point. To note once again, the comparison of the A300 to the A200 is less about which one is better (the A300 is more powerful and will beat the A200 in all tests), and more about what users can expect and how to choose for their given needs. The A300 put up some impressive numbers; highlights include random 4K peak performance of 635K IOPS read and nearly 209K IOPS write. For 64K sequential, the array hit 5.71GB/s read and 3.1GB/s write. For our SQL benchmarks the A300 was able to get close to 490K IOPS, with 416K IOPS for SQL 90-10 and 361K IOPS for SQL 80-20. The Oracle results came in around 340K IOPS, with 418K IOPS for Oracle 90-10 and 362K IOPS for Oracle 80-20.
In our reviews, we rarely compare units against each other, but in this case, the A200 to A300 comparison is appropriate if for nothing else to confirm what NetApp claims about the performance jump between the two systems. Where the A200 (and subsequently the A220) is great for smaller operations or perhaps even some ROBO scenarios, the A300 takes a big jump forward in terms of overall performance capabilities and is suitable for larger organizations with a lot of mixed workloads or perhaps for someone like a regional managed services provider. In the end, the A300 is quite similar to the A200, it’s just more in terms of scalability, IO port flexibility and overall performance. The NetApp A300 continues on where the A200 leaves off, making it another favorite in our lab and ultimately another great execution for NetApp’s ONTAP storage portfolio.
Seagate announced it has made the first formatted and fully functioning 16TB hard disk drive in a standard 3.5-inch form factor, which makes them the producer of the highest capacity hard disk drives yet. The drive utilizes Seagate’s heat-assisted magnetic recording (HAMR) process, and Seagate expects to make drives with even higher capacities soon. They expect to have a 20TB drive in 2020.
We last talked extensively about HAMR drives in 2015, so it's probably worth reviewing the technology. A major limitation on HDD capacity is the minimum size of the magnetic fields that can be created and used to write data. HAMR gets around this limit by temporarily, very temporarily (we're talking durations under a single nanosecond here), heating the area to be written to make it more receptive to magnetic effects. The difficulties in both generating enough heat (above 400 °C) and focusing it precisely enough are obvious. Seagate's solution is to use a small laser diode attached to each recording head to heat the target location.
The new drives will still be delivered in the same form factor as current HDDs. Once they're released, they could potentially be swapped in for your current drives and provide a significant boost in storage capacity. Every time it has discussed the new drives, Seagate has been quick to say that HAMR HDDs are at least as reliable as current technologies. The company's HAMR technology set a reliability record, demonstrating that a single head can transfer 3.2PB over a five-year period, twenty times the amount required by the industry's nearline specification.
I wish I could say the new HAMR drives will be available next year. Unfortunately, the entire current run of over 40,000 drives is being used to run the tests customers commonly use when integrating hard drives into enterprise applications. So far the results we’ve heard about are very good, but there have been several schedule slips over the several year development cycle. Last year, 20TB drives were expected in 2019, but now Seagate is predicting 2020, so I wouldn’t be surprised if we don’t see commercial drives before then. This shouldn’t be taken to reflect poorly on Seagate. That they’ve progressed as far as they have as fast as they have is truly astounding.
The Dell EMC VxRail family of appliances is hyper-converged infrastructure (HCI) underpinned by VMware vSAN. VxRail has long been the lead product when VMware talks about vSAN, as the concept of deploying and managing the VxRail appliance is appealing to many. Of course, Dell EMC and others sell vSAN Ready Nodes for those who want a little more control over server configuration. VxRail isn't new to the lab; we wrote about features like streamlined deployment and the rigorous compatibility testing that Dell EMC brings to the table in early 2017. Much has changed since then; primarily, Dell EMC has migrated off whitebox servers to PowerEdge servers. This is not an insignificant change, primarily because PowerEdge servers bring additional management and reliability features to the table that VxRail appliances can benefit from, further strengthening the Dell EMC/VMware value proposition when discussing the benefits of the VxRail appliance versus Ready Nodes or roll-your-own options.
In this review we’re looking at a typical four-node configuration of Dell EMC VxRail P570F appliances. Dell EMC offers quite a few configurations of VxRail, and the nomenclature can get a little cumbersome. The P Series units are typically more performance-oriented and are based on single node 2U PowerEdge R740xd servers. There are dozens of configuration options available including single or dual processor systems, SATA, SAS and NVMe (for cache) drive support, RAM configurations up to 3TB per node and networking up to 25GbE. The VxRail P570 is a hybrid configuration whereas the P570F is the all-flash variant we deployed for this review.
Our version of the P570F appliances under review includes dual Intel 6132 CPUs (14-core, 2.6GHz), 384GB RAM, six 3.84TB read-intensive SSDs for capacity and two 800GB write-intensive SSDs for cache. Each node has two disk groups, each with one cache drive backed by three of the capacity SSDs. Connectivity between nodes is handled via Intel X710 10GbE cards.
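VxRail Manager builds those disk groups automatically, but for context, on a hand-built vSAN host an equivalent all-flash layout would be claimed roughly as follows (the device identifiers are placeholders, not taken from this system):

```sh
# Tag a flash device so vSAN will accept it as capacity tier in an all-flash setup
esxcli vsan storage tag add -d naa.5000000000000001 -t capacityFlash

# Create a disk group: one cache SSD (-s) backed by capacity SSDs (-d)
esxcli vsan storage add -s naa.5000000000000000 \
    -d naa.5000000000000001 -d naa.5000000000000002 -d naa.5000000000000003
```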
Dell EMC VxRail P570F Specifications
Form Factor
2U
CPU
Intel Xeon Scalable Processors
CPU Sockets
Single or dual
CPU cores
8-56
CPU frequency
1.7GHz-3.6GHz
RAM
64GB-3,072GB
Cache SSD
400GB-1.6TB SAS 800GB-1.6TB NVMe
Flash Storage
1.92TB-76.8TB SAS or SATA
Drive Bays
24 x 2.5”
Max disk groups
4
Max nodes per cluster
64
Min nodes per cluster
3
Ports
2x 25GbE SFP28, or 4x 10GbE RJ45, or 4x 10GbE SFP+, or 4x 1GbE RJ45; 1x 1GbE iDRAC9
Optional ports
Up to 16×10 GbE RJ45 or Up to 16×10 GbE SFP+ or Up to 8×25 GbE SFP28
Power
Dual Redundant PSU
1100W 100V-240V AC; 1100W -48V DC; 1600W 200V-240V AC
Cooling fans
4 or 6
Temperature
Operating
10°C to 30°C (50°F to 86°F)
Non-operating
-40°C to +65°C (-40°F to +149°F)
Relative Humidity
10% to 80% (non-condensing)
Physical dimensions
86.8mm/3.42in (H) x 434mm/17.09in (W) x 678.8mm/26.72in (D); 28.1kg/61.95lb
Build and Design
The Dell EMC VxRail P570F is a 2U HCI node based off the PowerEdge R740xd that comes with one of the company’s stylish new honeycomb bezels (ours is the prior generation) as seen in this mock-up.
Beneath the bezel are the 24 2.5” drives. On the left are the health, ID, and status LED lights. On the right are the power button, VGA port, iDRAC micro-USB port, and two USB 2.0 ports.
Swinging around to the rear, one can easily see that there is plenty of room for expansion through cards. The bottom right has dual PSUs. In the bottom center there are four 10G SFP+ ports, and going to the left, two USB 3.0 ports, a VGA port, Serial port and an 1G RJ45 iDRAC port.
The top easily pops off for access to the CPUs and RAM, or to add on more network connectivity or storage in the rear of the device.
Management
A big selling point for VxRail is ease of HCI deployment and here the Dell EMC VxRail P570F didn’t disappoint. The VxRail quickly prepared the infrastructure, eliminating anything that it didn’t need.
Once the infrastructure is correctly set up, the VxRail will begin clustering ESXi hosts and automating storage configuration.
After the unit is deployed, admins will be brought to the main screen of VxRail Manager. Users are automatically brought to the dashboard that shows information such as overall system health, support, VxRail community, and event history. Along the left side are several tabs including: Dashboard, Support, Events, Health, and Config.
The Support tab is as it sounds. It checks the last “heartbeat” of the appliance to see if there is an issue. It allows for several support options including chatting, opening a service request, viewing the last configuration sent, downloading software such as upgrades, and letting users see what is happening in the VxRail community. There is a Knowledge Base search bar as well to find a particular issue.
The Events tab is another fairly intuitive tab as it lists the event by either ID, severity, component, or time. Clicking on an event allows users to drill down into it better for details that may resolve issues or prevent them in the future.
The Health tab lets admins see the health summary of a cluster as a whole or allows them to drill down into each appliance.
The Config tab lets users see the system software on their cluster and allows for two types of upgrades: local and Internet.
As the name suggests, Local Upgrade allows users to upgrade locally; in this case, from the PC that is being used to monitor VxRail Manager.
The VxRail upgrade package includes updates for nearly all components in the server, leveraging the benefits from vertical integration with the Dell ecosystem. While some vendors will focus on the software stack, VxRail is able to update everything inside the server down to the power supply firmware, if necessary. These are the same files provided through Dell’s LifeCycle Controller, which has access to all server components. In a world where NIC firmware might be vulnerable to an attack, how many IT administrators are upgrading it as patches come out regularly? VxRail handles this in an automated fashion, making it as easy as a few clicks.
As the cluster goes through the update and upgrade process, users can view the breakdown of what is taking place. The same process repeats for each server in the cluster as needed.
When the upgrade is complete, users will have a list of everything upgraded within VxRail Manager.
While the cluster is being upgraded, you can see some of the activity up at the vCenter level. Most of this action is the individual hosts going into maintenance mode.
In total, the VxRail Manager is a great value add when it comes to hardening vSAN from a compatibility standpoint and ensuring management and maintenance are as easy as possible. The only negative is this hardening comes with a little bit of a cost, as VxRail is slower to adopt new versions of vSphere. This system runs 6.5, while 6.7 has been out for some time. Dell EMC is well aware of this, however, and continues to integrate with VMware where they can to accelerate the adoption of updates.
Each SQL Server VM is configured with two vDisks, one 100GB for boot and one 500GB for the database and log files. From a system resource perspective, we configured each VM with 16 vCPUs, 64GB of DRAM and leveraged the LSI Logic SAS SCSI controller. These tests are designed to monitor how a latency-sensitive application performs on the cluster with a moderate, but not overwhelming, compute and storage load.
SQL Server Testing Configuration (per VM)
Windows Server 2012 R2
Storage Footprint: 600GB allocated, 500GB used
SQL Server 2014
Database Size: 1,500 scale
Virtual Client Load: 15,000
RAM Buffer: 48GB
Test Length: 3 hours
2.5 hours preconditioning
30 minutes sample period
For our transactional SQL Server benchmark, the Dell EMC VxRail P570F was able to hit an aggregate score of 12,585 TPS with individual VMs running from 3,145.1 TPS to 3,148.5 TPS.
A more telling sign of SQL Server performance is latency. With SQL Server average latency, the P570F was able to hit an aggregate score of 24.4ms with individual VMs running from 21ms to 26ms.
Sysbench Performance
Each Sysbench VM is configured with three vDisks: one for boot (~92GB), one with the pre-built database (~447GB), and the third for the database under test (400GB). From a system resource perspective, we configured each VM with 16 vCPUs, 64GB of DRAM and leveraged the LSI Logic SAS SCSI controller.
Sysbench Testing Configuration (per VM)
CentOS 6.3 64-bit
Storage Footprint: 1TB, 800GB used
Percona XtraDB 5.5.30-rel30.1
Database Tables: 100
Database Size: 10,000,000
Database Threads: 32
RAM Buffer: 24GB
Test Length: 12 hours
6 hours preconditioning 32 threads
1 hour 32 threads
1 hour 16 threads
1 hour 8 threads
1 hour 4 threads
1 hour 2 threads
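For context, a single Sysbench OLTP VM in a configuration like the one above would be driven by an invocation roughly along these lines (sysbench 0.5 syntax; the script path, host, and credentials are placeholders rather than our exact test harness):

```sh
# Run the OLTP workload against the pre-built 100-table, 10M-row database for one hour at 32 threads
sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
    --oltp-tables-count=100 --oltp-table-size=10000000 \
    --mysql-host=10.0.0.10 --mysql-db=sbtest --mysql-user=sbtest --mysql-password=secret \
    --num-threads=32 --max-time=3600 --max-requests=0 run
```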
With Sysbench OLTP, we look at the 8VM configuration. The VxRail had an aggregate score of 8,645.9 TPS, with individual VMs ranging from 925.48 TPS to 1,243.1 TPS.
For Sysbench average latency, the VxRail had an aggregate score of 29.9ms with individual VMs ranging from 25.7ms to 34.6ms.
In our worst-case scenario (99th percentile) latency, the VxRail had an aggregate score of 55.1ms with individual VMs ranging from 47ms to 64.4ms.
VDBench Workload Analysis
When it comes to benchmarking storage arrays, application testing is best and synthetic testing comes in second place. While not a perfect representation of actual workloads, synthetic tests do help to baseline storage devices with a repeatability factor that makes it easy to do apples-to-apples comparison between competing solutions. These workloads offer a range of different testing profiles ranging from “four corners” tests, common database transfer size tests, as well as trace captures from different VDI environments. All of these tests leverage the common vdBench workload generator, with a scripting engine to automate and capture results over a large compute testing cluster. This allows us to repeat the same workloads across a wide range of storage devices.
Profiles:
4K Random Read: 100% Read, 128 threads, 0-120% iorate
4K Random Write: 100% Write, 64 threads, 0-120% iorate
With 4K random read, the VxRail had sub-millisecond latency performance over 350K IOPS and went on to peak at 422,052 IOPS with a latency of 5.38ms.
For 4K random write, the VxRail broke 1ms early, around 17K IOPS, and rode the 1ms line until over 60K IOPS peaking at 79,801 IOPS with a latency of 5.64ms.
Next we look at sequential workloads with 64K. For read, the VxRail had sub-millisecond latency up to about 67K IOPS or 4.1GB/s and peaked at about 80K IOPS or 4.9GB/s with a latency of roughly 4.5ms.
For 64K sequential write, the VxRail ran under 1ms until about 10K IOPS or 600MB/s and went on to peak at about 25K IOPS or 1.53GB/s with a latency of 4.9ms before dropping off some.
Next up is our SQL workloads with the Dell EMC VxRail P570F having sub-millisecond latency performance up until about 285K IOPS going on to peak at 344,619 IOPS with a latency of 2.1ms.
For SQL 90-10, the VxRail ran up to just over 215K IOPS at less than 1ms latency before going on to peak at 306,851 IOPS with a latency of 2.4ms.
SQL 80-20 saw the VxRail with sub-millisecond latency until about 209K IOPS and a peak performance of 240,468 IOPS with a latency of 2.9ms.
Following our SQL workloads are our Oracle workloads. Here the VxRail had sub-millisecond latency until about 200K IOPS, then quickly peaked at roughly 218K IOPS with 1.1ms latency before dropping off significantly in performance.
Oracle 90-10 saw the VxRail with performance under 1ms until about 250K IOPS and a peak of 302,381 IOPS with a latency of 1.7ms.
With Oracle 80-20, the VxRail had sub-millisecond performance until over 226K IOPS going on to peak at about 258K IOPS with a latency of 1.76ms.
Next, we switched over to our VDI clone test, Full and Linked. For VDI Full Clone Boot, the VxRail ran with latency under 1ms until it reached about 228K IOPS and peaked at 277,332 IOPS with a latency of 3.1ms.
VDI FC Initial Login saw the VxRail shoot up over 1ms early on, going on to peak at about 78K IOPS with 4.5ms latency before dropping off some.
The VxRail had sub-millisecond latency until about 20K IOPS in the VDI FC Monday Login and went on to peak at about 93K IOPS with a latency of 3.1ms before a slight drop off.
Switching over to VDI Linked Clone (LC) Boot we see the VxRail made it until 159K IOPS before breaking 1ms and going on to peak at 195,062 IOPS with a latency of 2.2ms.
For VDI LC Initial Login, the VxRail made it to roughly 20K IOPS with sub-millisecond latency and peaked at about 56K IOPS with 3ms before dropping off.
Finally, VDI LC Monday Login saw the VxRail ride the 1ms line a bit and peaked around 60K IOPS with 3.9ms latency before a drop off in performance and a jump in latency.
Conclusion
The Dell EMC VxRail P570F is an all-flash HCI appliance that is geared toward performance. These new versions of VxRail bring easy-to-deploy HCI that are now built off a Dell EMC PowerEdge backbone. PowerEdge servers offer a slew of benefits for customers who are looking to use HCI, and the VxRail platform makes it easier than ever for customers to stay updated at the OS or even host level. As with most Dell EMC offerings, there is a massive variety of configurations, giving it the flexibility to hit just about any need. Being aimed at performance, the Dell EMC VxRail P570F can be outfitted with up to 3TB of memory per node, supports NVMe storage, and networking up to 25GbE.
During our Application Workload analysis, the P570F was able to hit an aggregate score of 12,585 TPS in SQL Server with an aggregate average latency of only 24.3ms. For Sysbench, the VxRail had aggregate scores of 8,645.9 TPS, average latency of 29.9ms, and worst-case scenario latency of 55.1ms.
For our VDBench performance, the Dell EMC VxRail P570F leveraged SAS storage, so while it started off every benchmark with sub-millisecond latency, response times did increase above 1ms under intense workloads. This isn't unexpected given the SAS3 flash media; however, it did record some fairly strong numbers. Highlights include 422K IOPS for random 4K read, 4.1GB/s for 64K sequential read, 1.53GB/s for sequential 64K write, 345K IOPS for SQL, 307K IOPS for SQL 90-10, 241K IOPS for SQL 80-20, 302K IOPS for Oracle 90-10, 258K IOPS for Oracle 80-20, VDI FC Boot of 277K IOPS, and VDI LC Boot of 195K IOPS. From a latency standpoint, the HCI appliance ran from a peak latency of 1.1ms to 5.64ms. While not sub-millisecond, it is still a strong showing for HCI.
Overall, the performance we saw from this appliance was very good, considering the target audience of this configuration. VxRail can certainly go faster, but the point here is to highlight a mainstream flash configuration. Again, we were impressed with the benefits of going the appliance route when it comes to vSAN, and VxRail Manager does the heavy lifting. The HCL is thoroughly vetted by Dell EMC at levels that go beyond what happens with vSAN Ready Nodes. Furthermore, the system itself updates everything down to device firmware, something that VxRail buyers see great value in. The worry of having to manage the nodes themselves goes away, making VxRail easy to own and manage.