We All Scream for i-RAM

Gigabyte sent us the first production version of their i-RAM card, marked as revision 1.0 on the PCB. 



There were some obvious differences between the i-RAM that we received and what we saw at Computex. 



First, the battery pack is now mounted in a rigid holder on the PCB.  The contacts are on the battery itself, so there's no external wire to deliver power to the card. 

Contrary to what has been said in the past, the i-RAM still uses a Xilinx FPGA, which gets the job done, but is most likely slower and more expensive than a custom-made chip. 

A Field Programmable Gate Array (FPGA) is literally an array of gates that can be programmed and reprogrammed to behave in virtually any fashion. The benefit of using an FPGA over a conventional integrated circuit (IC) is that a company like Gigabyte can simply purchase an FPGA suited to their application, rather than sending their own IC design to a fab, which takes much more time and costs a lot more than buying FPGAs for an initial production run. FPGAs are often chosen for their quick time to market, although they are more expensive to mass produce than custom ICs. 

The Xilinx FPGA has three primary functions: it acts as a 64-bit DDR memory controller, a SATA controller, and a bridge between the two. The chip takes requests over the SATA bus, translates them, and then sends them off to its DDR controller to write/read the data to/from memory. 
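Conceptually, that translation step works something like the sketch below. This is purely an illustrative software model; the real logic is implemented in hardware inside the FPGA, and every name here is invented.

    # Illustrative sketch only: the real i-RAM logic lives in the Xilinx FPGA,
    # not in software, and all names here are invented.

    SECTOR_SIZE = 512  # bytes per SATA sector

    class IRamBridge:
        def __init__(self, capacity_bytes):
            self.dram = bytearray(capacity_bytes)  # stands in for the DDR array

        def read_sectors(self, lba, count):
            """Translate a SATA read request into a linear DDR fetch."""
            start = lba * SECTOR_SIZE  # an LBA maps directly to a DRAM address
            return bytes(self.dram[start:start + count * SECTOR_SIZE])

        def write_sectors(self, lba, data):
            """Translate a SATA write request into a linear DDR store."""
            start = lba * SECTOR_SIZE
            self.dram[start:start + len(data)] = data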

Gigabyte has told us that the initial production run of the i-RAM will only be 1000 cards, available in August at a street price of around $150. That is definitely a lot higher than the $50 we were told at Computex, although we would expect the price to drop over time. 

The i-RAM is outfitted with four 184-pin DIMM slots that will accept any DDR DIMM. The memory controller in the Xilinx FPGA operates at 100MHz (DDR200) and can actually address up to 8GB of memory; however, Gigabyte says that the i-RAM card itself only supports 4GB of DDR SDRAM. We didn't have any 2GB unbuffered DIMMs to test the card's true limit, but Gigabyte tells us that it is 4GB. 

The Xilinx FPGA also won't support ECC memory, although we have mentioned to Gigabyte that a number of users have expressed interest in having ECC support in order to ensure greater data reliability. 

Although the i-RAM plugs into a conventional 3.3V 32-bit PCI slot, it doesn't use the PCI connector for anything other than power. All data is transferred via the Xilinx chip over the SATA connector directly to your motherboard's SATA controller, just like any regular SATA hard drive. 

Armed with a 64-bit memory controller and DDR200 memory, the i-RAM should be capable of transferring data at up to 1.6GB/s to the Xilinx chip; however, the actual transfer rate to your system is bottlenecked by the SATA bus.  The i-RAM currently implements the SATA150 spec, giving it a maximum transfer rate of 150MB/s. 
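The arithmetic behind those two numbers is simple enough to check (a quick sketch using the 64-bit/DDR200 figures from above):

    # Peak bandwidth of the i-RAM's DDR interface vs. its SATA link.
    bus_bytes = 64 // 8      # 64-bit wide memory interface = 8 bytes/transfer
    transfers = 200e6        # DDR200: 100MHz clock, two transfers per clock
    dram_bw = bus_bytes * transfers     # bytes/s into the Xilinx chip
    sata_bw = 150e6                     # SATA150 ceiling in bytes/s
    print(dram_bw / 1e9)                # 1.6 (GB/s)
    print(round(dram_bw / sata_bw, 1))  # about 10.7: SATA is the bottleneck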

With SATA as the only data interface, Gigabyte made the i-RAM far more useful than software-based RAM drives: to the OS and the rest of your system, the i-RAM appears no different than a regular hard drive. You can install an OS, applications or games on it, you can boot from it, and you can interact with it just like you would any other hard drive. The difference is that it is going to be a lot faster, and also a lot smaller, than a conventional hard drive. 

The size limitations are pretty obvious, but the performance benefits really come from the nature of DRAM as a storage medium vs. magnetic hard disks. We have long known that modern day hard disks can attain fairly high sequential transfer rates of upwards of 60MB/s. However, as soon as the data stops being sequential and is more random in nature, performance can drop to as little as 1MB/s. The reason for the significant drop in performance is the simple fact that repositioning the read/write heads on a hard disk takes time, as does searching for the correct location on a platter. The mechanical elements of hard disks are what make them slow, and it is exactly those limitations that are removed with the i-RAM. Access time goes from milliseconds (10^-3 s) down to nanoseconds (10^-9 s), and transfer rate doesn't vary with data location, so performance should be much more consistent. 
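To put rough numbers on that, here is a simple model of effective throughput when every block read pays a full access penalty. The 8ms access time and 60MB/s sequential rate are ballpark 2005-era figures for illustration, not measurements of any specific drive.

    # Effective throughput when each block read pays a full access penalty.
    # Figures are illustrative ballparks, not measurements.

    def throughput_mb_s(block_kb, access_ms, sequential_mb_s):
        transfer_s = (block_kb / 1024) / sequential_mb_s  # time spent moving data
        access_s = access_ms / 1000                       # time spent getting there
        return (block_kb / 1024) / (access_s + transfer_s)

    print(throughput_mb_s(4, 8, 60))       # ~0.5 MB/s: random 4KB reads on a disk
    print(throughput_mb_s(4, 0.0001, 60))  # ~60 MB/s: same reads, DRAM-like access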

Since it acts like a regular hard drive, you can theoretically also arrange a couple of i-RAM cards together in RAID if you have a SATA RAID controller. The biggest benefit of a pair of i-RAM cards in RAID 0 isn't necessarily performance; it's that you get twice the capacity of a single card. We are working on getting another i-RAM card in house to perform some RAID 0 tests. However, Gigabyte has informed us that presently, there are stability issues with running two i-RAM cards in RAID 0, so we wouldn't recommend pursuing that avenue until we know for sure that all bugs are worked out.

Comments

  • Hacp - Monday, July 25, 2005 - link

    It could be useful for the pagefile if you have a couple of old 128-256MB DDR333 or older sticks lying around, especially if your RAM slots are filled with 4x 512MB. This can definitely improve performance over hard drive pagefiling, which is horrible. I wish Gigabyte would have done 8 slots instead of 4. The benefit of 8 slots is that it would allow users to truly use their old sticks of RAM (128MB, 256MB, etc.) instead of just 1GB sticks. Right now, the price is too high for the actual i-RAM module, and the price of DDR RAM is also too much. If Gigabyte does this right, they could have a hit, but it does not look like they are moving in the right direction. IMO, 2x or 3x i-RAMs with cheap 512MB and 256MB sticks of old RAM running in a RAID configuration would be a good solution to the hard drive bottleneck, especially since people these days are willing to pay a premium for Raptors.

    Also, nice article Anand!
  • zhena - Monday, July 25, 2005 - link

    mattsaccount, you would need 3 cards to run RAID 5.

    Here is one thing that is not mentioned on AnandTech in most of the storage reviews, and that is responsiveness (as I like to call it). Back early in the day when people were starting to use RAID 0, most benchmarks showed little improvement in overall system performance; even now, the difference between a WD Raptor and a 7200RPM drive is small in terms of overall system performance. However, most benchmarks don't reflect how responsive your computer is; it's very hard to put a number on that. When I set up RAID 0 back in the day, I noticed a huge improvement while using my computer, but I am sure that the actual boot time didn't improve much. Same thing with the i-RAM card: using it probably feels a lot snappier than using any hard drive, which is very important.
  • ss284 - Monday, July 25, 2005 - link

    RAID 0 has a higher access time than no RAID. Unless you were running highly disk-intensive applications, the snappiness would be attributed to the RAM, not the hard drive.

    -Steve
  • zhena - Monday, July 25, 2005 - link

    Not at all, Steve; the access time goes down 0.5ms at most (don't take my word for it, I've tested it with many benchmarks), but RAID 0 shines where you need to get small amounts of data fast. If you are looking for a MB of data, you get it twice as fast as from a regular hard drive (assuming around 128KB RAID blocks). And due to the way regular applications are written, and due to locality of reference, that's where the feeling of responsiveness comes from.
  • JarredWalton - Monday, July 25, 2005 - link

    RAID 0 would not improve access times. What you generally end up with is two HDDs with the same base access time that now have to both seek to the same area - i.e. you're looking for blocks 15230-15560, which are striped across both drives. Where RAID 0 really offers better performance is when you need access to a large amount of data quickly, i.e. reading a 200MB file from the array. If the array isn't fragmented, then RAID 0 would be nearly twice as fast, since you get both drives putting out their sequential transfer rate. (There's a quick sketch of this striping math below the comments.)

    RAID 1 can improve access times in theory (if the controller supports it) because only one of the drives needs to get to the requested data. If the controller has enough knowledge, it can tell the drive with the closer head position to get the data. Unfortunately, that level of knowledge rarely exists. You could then just have both drives try to get each piece of data, and whichever gets it first wins. Then your average rotational latency should be reduced from 1/2 of a rotation to about 1/3 of a rotation, the expected minimum of two independent random latencies (assuming the heads start at the same distance from the desired track; there's a quick simulation of this below the comments). The reality is that RAID really doesn't help much other than for redundancy and/or heavy server loads with a high-end controller.
  • Gatak - Monday, July 25, 2005 - link

    Um, yes. This is what I meant: mirroring (RAID 1, not RAID 0) would improve access times, as both disks could access different data independently (if the controller was smart). Sorry about the confusion.
  • ss284 - Tuesday, July 26, 2005 - link

    I was referring to RAID 0 in my post, if you didn't notice. There is no way RAID 0 would lower access times. It's impossible, seeing as the data is striped across both drives, meaning the seek would be no faster than a single drive, and likely a tiny bit slower because of overhead.
  • Gatak - Monday, July 25, 2005 - link

    RAID 0 ought to offer better random read access times, as there are two disks that can read independently. Writing would be somewhat slower, though, as both disks need to be synced.
  • Gatak - Monday, July 25, 2005 - link

    I'd like to see some server benchmarks with this. For example:

    * mail server (especially servers using maildir, which generates lots and lots of files)
    * web server
    * file server
    * database server (mysql, for example)

    Maybe some other benchmarks :D
  • mmp121 - Monday, July 25, 2005 - link


    He even states that on page 11:

    quote:

    One of the biggest advantages of the i-RAM is its random access performance, which comes into play particularly in multitasking scenarios where there are a lot of disk accesses.


    Anand, how about an update with some server / database benchies?

    Gigabyte might have something on its hands if it makes the card SATA-II to better use the speed of the RAM. 1.6GB/s through a 150MB/s straw is not good. Anyhow, here's looking forward to rev 2.0 of the i-RAM, Gigabyte!
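
As a footnote to the RAID 0 striping discussion in the comments above, here is a minimal sketch of how striping maps logical blocks onto two drives. The stripe size and block range are arbitrary example values, not the layout of any particular controller.

    # Minimal sketch of RAID 0 striping across two drives (example values only).

    STRIPE_BLOCKS = 128  # logical blocks per stripe

    def locate(block, n_drives=2):
        """Map a logical block number to (drive, block offset on that drive)."""
        stripe = block // STRIPE_BLOCKS
        drive = stripe % n_drives  # stripes alternate between the drives
        offset = (stripe // n_drives) * STRIPE_BLOCKS + block % STRIPE_BLOCKS
        return drive, offset

    # A read spanning blocks 15230-15560 hits stripes on both drives, so both
    # sets of heads have to seek before the whole transfer can complete:
    print(sorted({locate(b)[0] for b in range(15230, 15561)}))  # [0, 1]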
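And a quick Monte Carlo check of the "both mirrored drives race for the data" idea: modeling each drive's rotational latency as uniform over one rotation, the expected minimum of the two comes out to about 1/3 of a rotation, versus 1/2 for a single drive.

    # Monte Carlo estimate of rotational latency when two mirrored drives
    # race for the same sector; latency is modeled as uniform over one rotation.
    import random

    N = 1_000_000
    single = sum(random.random() for _ in range(N)) / N
    raced = sum(min(random.random(), random.random()) for _ in range(N)) / N
    print(f"one drive:   {single:.3f} rotations")  # ~0.500
    print(f"two racing:  {raced:.3f} rotations")   # ~0.333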
