James Candelaria, CTO of WHIPTAIL, demonstrates the latest addition to the product family, INVICTA. This new scale-out flash storage cluster is presented in three steps: its architecture, its interface, and some real-world benchmarks. First, you will see INVICTA's storage nodes, with capacities ranging from 3 TB to 12 TB and allowing up to 72 TB in a single management domain. Then you will navigate the user interface, which depicts the multiple fabric attach points as well as the storage nodes, where the user can perform various media operations. Finally, the performance monitor is demonstrated, which lets you view individual bricks or view performance as a whole. Watch and learn how to use INVICTA's user-friendly interface on your own.
Hi, I'm James Candelaria, CTO of WHIPTAIL, and today I'll be demonstrating our newest product offering, scheduled to launch in Q2: project INVICTA. INVICTA is a revolutionary new scale-out flash storage cluster that builds upon WHIPTAIL's legacy of delivering the benefits of flash technology while mitigating its deficiencies. First let's describe the architecture, then briefly demonstrate the administrative interface, and finally I'll close with some real-world benchmarks. Here we go.

Step one in this case is logging onto the interface. As you can see, our interface depicts multiple fabric attach points, also known as silicon storage routers (SSRs), along with multiple nodes of storage beneath them, up to six in total, ranging from capacity points of 3 terabytes to 12 terabytes, allowing us to get up to 72 terabytes in a single management domain. Clicking on a shelf of storage brings up an exploded view of the media elements that make up the shelf, complete with right-click context menus that allow us to perform various media operations, ranging from gathering information to failing a troubled media element.

In the right-hand corner of the screen you'll also notice a media wear indicator. One of the cornerstones of WHIPTAIL's technology over the years has been endurance management, and this tradition continues with INVICTA. As with XLR8r, we guarantee a media life of over seven years, and this graph simply renders the remaining media life visually.

Moving on, let's talk a little bit about network settings. Each individual SSR has its own network settings. This page visually depicts the interface bonds we have, in this case both a 1-gig and a 10-gig bond, as well as the Fibre Channel interfaces we have attached to the fabric. Again, with right-click context menus, we're able to modify the properties of existing elements or create new ones. If we want to create a VLAN interface on bond one, the 10-gig bond, I simply right-click, hit Create VLAN, and fill in this modal window. In the case of Fibre Channel, we can disable or enable elements as required by our fabric administrator. Bonding additional network interfaces to a network bond is as simple as right-clicking the bond and hitting Configure Devices; we're then presented with our current network slaves as well as the unbonded interfaces. We use a drag-and-drop motif here to allocate or de-allocate network interfaces as required. In this case I'm going to de-bond interface two from bond one by dragging it from our current slaves to our unbonded interfaces section, and when the GUI applet comes back, the bond will be broken. (A sketch of the equivalent Linux-level operations follows at the end of this section.)

At the heart of any block storage array is LUN creation and management. In the longstanding tradition of WHIPTAIL, we've provided a very simple but powerful way of managing LUNs. Creating a LUN is as simple as clicking the Create New LUN button and filling in the appropriate details: a name (we'll call this one demo video), a size (in this case a 100-gigabyte LUN), and a volume group. Since volume group five has the most available space, we'll go ahead and create it there (see the selection sketch below). Hit Create. Notice that we're not prompted to create a new RAID set, a new aggregate, or a new volume group; we simply choose a preexisting volume group, choose a size, and away we go. Once the LUN is created, we can assign it to an initiator group.
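The transcript doesn't reveal what the GUI drives under the hood, but assuming a Linux-based platform (an assumption on my part), the VLAN-creation and de-bonding actions above correspond to standard iproute2 commands and the kernel bonding driver's sysfs interface. A minimal Python sketch of those equivalents; the names bond1, eth2, and VLAN ID 100 are illustrative only:

```python
import subprocess

def run(cmd: list) -> None:
    """Run a command and raise if it fails."""
    subprocess.run(cmd, check=True)

def create_vlan(bond: str, vlan_id: int) -> None:
    """Create and bring up a tagged VLAN interface on top of a bond."""
    iface = f"{bond}.{vlan_id}"
    run(["ip", "link", "add", "link", bond, "name", iface,
         "type", "vlan", "id", str(vlan_id)])
    run(["ip", "link", "set", iface, "up"])

def debond_slave(bond: str, slave: str) -> None:
    """Detach a slave NIC via the bonding driver's sysfs interface
    (writing '-ethX' removes a slave, '+ethX' adds one)."""
    with open(f"/sys/class/net/{bond}/bonding/slaves", "w") as f:
        f.write(f"-{slave}")

# Mirrors the demo: break interface two out of bond one, then tag a
# VLAN on the bond. bond1/eth2/100 are hypothetical names.
debond_slave("bond1", "eth2")
create_vlan("bond1", 100)
```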
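The LUN-creation flow hides RAID sets and aggregates entirely and only asks for a volume group; in the demo, volume group five is chosen because it has the most free space. A minimal sketch of that selection rule, with hypothetical group names and capacities (INVICTA's actual placement logic isn't shown in the demo):

```python
# Hypothetical free-space table, in GiB; only the selection rule matters.
volume_groups = {"vg1": 250, "vg2": 80, "vg3": 512, "vg4": 120, "vg5": 900}

def pick_volume_group(groups: dict, lun_size_gib: int) -> str:
    """Pick the volume group with the most free space that fits the LUN."""
    candidates = {vg: free for vg, free in groups.items()
                  if free >= lun_size_gib}
    if not candidates:
        raise ValueError("no volume group can hold a LUN this large")
    return max(candidates, key=candidates.get)

print(pick_volume_group(volume_groups, 100))  # -> 'vg5', as in the demo
```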
Clicking here on Initiators, you'll see that I have some predefined initiators, in this case a couple of Fibre Channel hosts. I'll go ahead and modify the existing Fibre Channel group one and hit Map LUNs, and I'm presented with a list of LUNs already mapped to this initiator group, as well as a list of LUNs that can be assigned to it. We can either use the drag-and-drop motif here to assign LUNs or use the right-click context menus. In this case I have a LUN labeled demo video; let's go ahead and right-click it and click Map LUN, and you'll see that the interface automatically recommends a map ID. This map ID is the lowest non-allocated map ID for the initiator group we're working with (a sketch of that rule follows below). I can either accept the default or override it. I'm going to accept the default here and hit Create, and the LUN is now mapped. The initiator simply needs to do a rescan to ensure it has visibility. Should I decide that I no longer want this LUN mapped, it's a simple matter of dragging and dropping it to the appropriate place, or right-clicking and clicking Unmap LUN. In this case, I'll use drag and drop and wait for the GUI applet to return, and the LUN disappears from my mapped list and reappears in my available LUNs category.

Now that we've shown you how to create and assign LUNs, let's talk about how we can monitor the performance of INVICTA as a whole as well as individual LUNs. First, let's move back to the home screen, where we can look at performance at the cluster and brick level, and then we'll drill down to a particular LUN. As you can see, this is my main dashboard. Below the representation of the individual shelves, we have a live performance graph. This graph updates every 15 seconds and can be filtered to individual bricks or show performance as a whole. If we hover the mouse over an individual brick, you'll notice the performance graph changes context. Should I want to stabilize that view, I can simply click on an individual tab, and that tab will show just the brick in question. If I want an aggregated view, all I need to do is click All, and we see the aggregated performance metric of the entire cluster as a whole.

Now, in the background I've just initiated a performance benchmark. You'll notice that the graph is now changing pretty dramatically, upwards. If we hover over any individual data point, it will give us the number of megabytes per second being pushed, along with the number of IO operations per second. Since I'm currently in the aggregated view, you can see that I'm doing about 70,000 IO operations per second through this SSR. This is split fairly evenly between my individual storage bricks: bricks one through five all have slightly different curves, but the load is reasonably distributed across all our elements. Let's take a further look by drilling down into our individual LUNs and seeing if the performance differs there. To accomplish this, we simply go into LUN management, highlight the LUNs we wish to interrogate, then right-click them and click Check Performance.
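The "lowest non-allocated map ID" rule the interface uses to recommend a map ID is simple enough to state exactly. A minimal sketch (the product's actual implementation isn't shown in the demo):

```python
def next_map_id(used_ids: set) -> int:
    """Return the lowest non-negative map ID not already allocated
    within this initiator group."""
    candidate = 0
    while candidate in used_ids:
        candidate += 1
    return candidate

print(next_map_id(set()))      # -> 0 for an empty group
print(next_map_id({0, 1, 3}))  # -> 2: gaps are reused before appending
```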
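The dashboard's All tab simply sums the per-brick counters into one cluster-wide view, refreshed every 15 seconds. A sketch of that aggregation, with hypothetical per-brick samples loosely matching the roughly 70,000 aggregate IOPS quoted in the demo:

```python
# One hypothetical 15-second sample per storage brick: (IOPS, MB/s).
samples = {
    "brick1": (14_200, 55.5),
    "brick2": (13_900, 54.3),
    "brick3": (14_500, 56.6),
    "brick4": (13_700, 53.5),
    "brick5": (14_100, 55.1),
}

def aggregate(per_brick: dict) -> tuple:
    """Sum per-brick counters into the cluster-wide 'All' view."""
    total_iops = sum(iops for iops, _ in per_brick.values())
    total_mbps = sum(mbps for _, mbps in per_brick.values())
    return total_iops, total_mbps

iops, mbps = aggregate(samples)
print(f"All: {iops:,} IOPS, {mbps:.1f} MB/s")  # ~70,400 IOPS aggregated
```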
This brings up an aggregated view as well as views of the individual LUNs, again complete with tooltips that show how many megabytes per second and how many IOPS are going through each LUN. As before, these graphs are live and will continue to update as time progresses. This pretty much completes our tour of the INVICTA UI. I hope you'll stay with me as we progress into some performance benchmarking. I'll see you in a moment. Thanks.