15 Comments

  • IanCutress - Thursday, January 17, 2013 - link

    Is the speed limitation going to be the controller or the PCIe bus in this?
    Does an FPGA type controller handle random data requests better than a normal SSD controller?
    Would the 3.2 TB model be a dual-sided PCB due to the single FPGA and NAND sizes? I don't see anything for additional PCBs a la OCZ RevoDrive style.

    I spend too much time in the consumer world :) I have a friend who works as a VE and he recently got to test a few 80 core systems (8P x 10 core Intel w/HT). Totally envious.
    Reply
  • Kristian Vättö - Thursday, January 17, 2013 - link

    Fusion-io didn't release any performance specs, so honestly I don't know for sure. We should at least be very close to the 4GB/s ceiling that PCIe 2.0 x8 provides.
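    For reference, that ~4GB/s ceiling falls straight out of the PCIe 2.0 link parameters. A quick sketch (this ignores TLP/protocol overhead, so real payload throughput lands a bit lower):

```python
# PCIe 2.0: 5 GT/s per lane with 8b/10b line coding, so only 8 of every
# 10 bits on the wire are payload. (Protocol overhead is ignored here.)
GT_PER_S = 5.0      # gigatransfers per second, per lane (PCIe 2.0)
ENCODING = 8 / 10   # 8b/10b line-code efficiency
LANES = 8           # x8 link

per_lane_GBps = GT_PER_S * ENCODING / 8  # bits -> bytes
total_GBps = per_lane_GBps * LANES
print(per_lane_GBps)  # 0.5 GB/s per lane
print(total_GBps)     # 4.0 GB/s for the x8 link
```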

    Custom silicon is usually better because it's designed solely for a specific purpose, whereas an FPGA is more of an all-purpose chip (it's obviously programmed to behave like a self-fabbed chip, though).

    The 3.2TB (at least) is a dual-PCB design but still a single-controller. David Flynn, the CEO of FIO, showed the card live here: http://new.livestream.com/ocp/winter2013/videos/95... (at around 18 minutes).
    Reply
  • wolrah - Thursday, January 17, 2013 - link

    "Custom silicon is usually better because it's designed solely for a specific purpose, whereas an FPGA is more of an all-purpose chip (it's obviously programmed to behave like a self-fabbed chip, though)."

    Whether custom silicon is better depends on why the FPGA is being used. In some devices it's used for its defining feature: field programmability means a firmware update can bring new features "in hardware" by reconfiguring the FPGA.

    Others use it simply because the complexity and cost of producing custom silicon for a low-volume device can outweigh the cost of just including an FPGA compatible with the ones used during development.

    If we assume the custom silicon is perfect and won't need to be updated, it'll generally be cheaper in sufficient volume and will certainly clock faster, but it's nice to be able to fix bugs or add features in the field.
    Reply
  • liquan45688 - Thursday, March 21, 2013 - link

    I don't see any DRAM on the product. Is it possible to reach such high data throughput without any cache? Reply
  • JPForums - Thursday, January 17, 2013 - link

    Is the speed limitation going to be the controller or the PCIe bus in this?

    Hard to say. It really depends on the specific FPGA and the controller design implemented therein. Assuming Fusion-io's engineers are capable of both selecting and fully exploiting an appropriate FPGA, I'd lean towards the PCIe bus. However, the "budget" nature of this card may dictate that the FPGA used is less capable. That said, I still think they'd be close given their track record.

    Does an FPGA type controller handle random data requests better than a normal SSD controller?

    Yes and no. A normal SSD controller is an ASIC and therefore more specifically purposed. Given the same architecture, it will generally have lower latency, die area, and power consumption than an FPGA. An FPGA is more general purpose, which gives it more flexibility. Most current SSD controllers use 8 or 10 channels in an attempt to extract more speed from the NAND. However, once all channels are populated, adding more flash chips increases capacity but not performance. To increase performance from a controller perspective, you must either create a new controller design with more channels or use multiple controllers in parallel (RAID).

    This is where the FPGA's flexibility allows it to surpass standard controllers. With an FPGA, the same chip can be reprogrammed to use as many channels as make sense for a given capacity; the limitation then becomes how many pins are available. An FPGA can also be reconfigured later in the design cycle to relieve bottlenecks, whereas it is far rarer to respin an ASIC for that purpose; those bottlenecks would normally be addressed in the next generation.
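    The channel-scaling argument above can be sketched numerically. A toy model (per-channel bandwidth and the host-bus ceiling are made-up illustrative figures, not Fusion-io specs):

```python
# Toy model: aggregate controller throughput vs. number of flash channels.
# Channels add bandwidth linearly until the host interface becomes the cap.
def controller_throughput(channels, per_channel_mbps=200, bus_limit_mbps=4000):
    """Sequential throughput in MB/s: channels scale until the bus caps it."""
    return min(channels * per_channel_mbps, bus_limit_mbps)

for ch in (8, 10, 16, 24):
    print(ch, "channels ->", controller_throughput(ch), "MB/s")
```

    With these placeholder numbers, going from 8 to 16 channels doubles throughput, while 24 channels runs into the bus limit.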

    An ASIC running the same architecture would definitely perform better than its FPGA counterpart; however, the expense of fabricating a chip for each capacity you want to offer is prohibitive (especially if you offer many capacities). An FPGA also makes more sense for low-quantity runs. While the cost per chip is lower for an ASIC (mostly due to smaller die size), the upfront design and initial fabrication costs are much higher than for an FPGA. Thus, you have to ship quite a few chips to make up the cost of ASIC design and fabrication.
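    That break-even point is simple to work out. A quick sketch with hypothetical figures (the NRE and unit costs below are invented for illustration; real numbers vary enormously by process node and part):

```python
import math

# Break-even volume for custom silicon (ASIC) vs. an off-the-shelf FPGA:
# the ASIC only pays off once its lower unit cost recovers the NRE outlay.
def break_even_units(nre_cost, asic_unit_cost, fpga_unit_cost):
    """Units shipped before the ASIC's NRE is recovered by unit savings."""
    return math.ceil(nre_cost / (fpga_unit_cost - asic_unit_cost))

# e.g. $2M of ASIC design/mask costs, $15 per ASIC vs. $100 per FPGA:
print(break_even_units(2_000_000, 15, 100))  # 23530 units
```

    Below that volume the FPGA is the cheaper option outright, on top of being field-updatable.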
    Reply
  • blanarahul - Thursday, January 17, 2013 - link

    It will be very interesting to see how this compares to the Micron P320h.
    In my opinion the P320h should fare better because Micron makes all of the components itself.
    Reply
  • Kevin G - Thursday, January 17, 2013 - link

    I'd love to see that comparison as well. Though Micron makes the controller and NAND, that is no guarantee that it'll perform better. Reply
  • JPForums - Thursday, January 17, 2013 - link

    In my opinion the p320h should fare better because micron is making all components themselves.

    I wouldn't count Fusion-io out just yet. Micron definitely has an advantage in the cost and quality of its flash chips given it essentially gets first pick. However, if implemented properly, Fusion-io's FPGA-based single-chip controller will be able to extract more parallelism with less latency. This is potentially a far greater advantage. I can't wait for a review to see if they can pull it off.
    Reply
  • blanarahul - Thursday, January 17, 2013 - link

    Another one: How does io-Scale compare to Fusion-io's ioDrive2? Reply
  • Guspaz - Thursday, January 17, 2013 - link

    "Compared to traditional 2.5" SSDs, the ioScale provides significant space savings as you would need several 2.5" SSDs to build a 3.2TB array."

    2.5" 9.5mm SSDs come up to 2TB, and this PCIe card is definitely bigger, on a volumetric space consumed basis, than two 2.5" drives.
    Reply
  • JPForums - Thursday, January 17, 2013 - link

    2.5" 9.5mm SSDs come up to 2TB, and this PCIe card is definitely bigger, on a volumetric space consumed basis, than two 2.5" drives.

    I'm pretty sure they were comparing to enterprise drives, hence the comparison to the Intel SSD 910. You do have a good point, though, as today's 2TB consumer drives point to 2TB enterprise drives in the not-too-distant future. I suppose it's all about timing.
    Reply
  • boozed - Thursday, January 17, 2013 - link

    "Since hyperscale computing is all about efficiency, it's also common that commodity designs are used instead of more pricier blade systems."

    You've already said "pricier," so "more" is redundant.
    Reply
  • capeconsultant - Thursday, January 17, 2013 - link

    True dat. Reply
  • Kristian Vättö - Friday, January 18, 2013 - link

    Good catch, fixed. Reply
  • liquan45688 - Thursday, March 21, 2013 - link

    I haven't seen any DRAM on the product. Can the SSD support such high throughput without a DRAM cache? Strange! Reply
