16 Comments

  • flgt - Monday, March 19, 2018 - link

    As general-purpose computing starts to run out of steam, whoever can bring developer-friendly programmable logic to traditional software programmers, Intel or Xilinx, will take the lead. It's a tough problem to solve. As a more traditional FPGA user, I hope we don't get left behind by the data center market though.
  • flgt - Monday, March 19, 2018 - link

    I did want to say this is a good approach. Have expert HW engineers at Xilinx build the most common accelerators for software programmers to invoke. There is so much logic available now that no one will blink an eye at the cost of the abstraction layer.
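
    A minimal sketch of what that abstraction can look like from the software programmer's side, assuming the OpenCL-style host flow that both Xilinx (SDAccel) and Intel ship for their FPGAs; the "vadd.xclbin" path, the kernel name, and the buffer sizes are illustrative assumptions, not anything from the article:

        // Host program: an FPGA expert ships a pre-built accelerator binary,
        // the software programmer just loads and invokes it (error checks omitted).
        #include <CL/cl.h>
        #include <fstream>
        #include <iterator>
        #include <vector>

        int main() {
            cl_platform_id platform;
            clGetPlatformIDs(1, &platform, nullptr);
            cl_device_id device;
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, nullptr);
            cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
            cl_command_queue q = clCreateCommandQueue(ctx, device, 0, nullptr);

            // Load the pre-compiled accelerator container (file name is an assumption).
            std::ifstream f("vadd.xclbin", std::ios::binary);
            std::vector<unsigned char> bin((std::istreambuf_iterator<char>(f)), {});
            const unsigned char* bptr = bin.data();
            size_t bsize = bin.size();
            cl_program prog = clCreateProgramWithBinary(ctx, 1, &device, &bsize,
                                                        &bptr, nullptr, nullptr);
            clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
            cl_kernel vadd = clCreateKernel(prog, "vadd", nullptr);   // kernel name assumed

            // Move data and invoke the accelerator like any other library call.
            const size_t N = 1024;
            std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);
            cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                       N * sizeof(float), a.data(), nullptr);
            cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                       N * sizeof(float), b.data(), nullptr);
            cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                       N * sizeof(float), nullptr, nullptr);
            int n = static_cast<int>(N);
            clSetKernelArg(vadd, 0, sizeof(cl_mem), &da);
            clSetKernelArg(vadd, 1, sizeof(cl_mem), &db);
            clSetKernelArg(vadd, 2, sizeof(cl_mem), &dc);
            clSetKernelArg(vadd, 3, sizeof(int), &n);
            size_t global = N;
            clEnqueueNDRangeKernel(q, vadd, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
            clEnqueueReadBuffer(q, dc, CL_TRUE, 0, N * sizeof(float), c.data(), 0, nullptr, nullptr);
            clFinish(q);
            return 0;
        }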
  • bryanlarsen - Monday, March 19, 2018 - link

    As a more traditional FPGA user, do you not think that much of this initiative will benefit you as well? It seems to me that converting hard blocks into something "semi-solid" will be good for you too.
  • flgt - Monday, March 19, 2018 - link

    You're right, up to this point it has been a win-win. We've gotten huge logic increases at a fraction of the cost from 10 years ago, and Xilinx has supported us well. My worry was the statement below from the article. If you're a low-volume customer that needs a device for 10 years, you might be SOL in the new era of semiconductor vendor consolidation. They're all chasing the big markets to pay for their billion-dollar developments: 5G, datacenter/networking, autonomous vehicles, etc. Also, sometimes you need to know what's inside these vendor HDL/SW black boxes, so using them gets tricky. That being said, FPGAs naturally scale across a number of broad markets, and the vendors don't have to make as many tough product-line choices as with ASICs.

    "Xilinx’s key headline that it is focusing on being a ‘Data Center First’ company. We were told that the company is now focused on the data center as its largest growth potential, as compared to its regular customer base, the time to revenue is rapid, the time to upgrade (for an FPGA company) is rapid, and it enables a fast evolution of customer requirements and product portfolio."
  • modport0 - Tuesday, March 20, 2018 - link

    Xilinx's HLS capabilities give decent results, especially if you're at the prototyping stage for processing algorithms. The HLS tools from Mentor and Cadence are highly praised by ASIC developers, but only if your company can justify six-digit license costs per seat per year. Intel's HLS tool just came out of beta, so it's probably not worth using right now. Intel's OpenCL framework has been out for a while, but I haven't personally used it.

    For their data-center-oriented framework, Intel has their OPAE (https://01.org/OPAE) software/driver stack coupled with the rest of their acceleration stack (https://www.altera.com/solutions/acceleration-hub/...). Their hope is that the SW dev just writes the user application and the RTL person (maybe the same person) just creates accelerator modules (via HDL, HLS or OpenCL); a minimal HLS sketch of such a module follows at the end of this comment.

    Who knows if any of this ends up being used in data centers beyond research or small deployments. Google and, I'm sure, others are already looking into ASICs and skipped FPGAs for various reasons.
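
    For what it's worth, here is a minimal sketch of the kind of accelerator module the "RTL person" might hand over in such a flow, written as Vivado-HLS-style C++. The pragmas are standard HLS interface/pipeline directives, but the function itself (a trivial vector add) and the bundle names are assumptions for illustration, not anything Xilinx or Intel ships:

        // Hypothetical HLS accelerator module: a trivial vector add.
        extern "C" void vadd(const float* a, const float* b, float* c, int n) {
        #pragma HLS INTERFACE m_axi     port=a      offset=slave bundle=gmem
        #pragma HLS INTERFACE m_axi     port=b      offset=slave bundle=gmem
        #pragma HLS INTERFACE m_axi     port=c      offset=slave bundle=gmem
        #pragma HLS INTERFACE s_axilite port=n      bundle=control
        #pragma HLS INTERFACE s_axilite port=return bundle=control

            for (int i = 0; i < n; ++i) {
        #pragma HLS PIPELINE II=1
                c[i] = a[i] + b[i];   // one result per clock once the pipeline fills
            }
        }

    The host-side counterpart is then ordinary software, e.g. OpenCL host code along the lines of the sketch earlier in the thread, or OPAE's C API on the Intel side.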
  • jjj - Monday, March 19, 2018 - link

    EMIB gets complicated when you have to align more than two chips, and that seems to be why OSATs don't quite like such solutions and are looking at other budget options.
  • Krysto - Monday, March 19, 2018 - link

    I don't know what the future of FPGAs is, but if Xilinx wanted to beat Altera, now is the time, as Intel will surely bungle it. Intel already seems to be re-focusing on its own GPUs, so I bet it's already regretting buying Altera and thinking about how to get rid of it in 2-3 years without looking like complete fools (again).
  • amrs - Monday, March 19, 2018 - link

    It would be interesting to know just how much data center business Xilinx actually does. As a point of comparison, Dini Group's president said last year they've had zero sales with their data center oriented products.

    BTW, it's Zynq, not Zync.
  • ProgrammableGatorade - Tuesday, March 20, 2018 - link

    Dini makes great hardware but it's expensive, which could be skewing their perception of the market. If you're just buying one of their ASIC emulation boards it's easy to justify the cost, but if you're buying dozens or hundreds of boards you'll probably go to AlphaData, Bittware, or even Xilinx directly.
  • Space Jam - Monday, March 19, 2018 - link

    >The idea here is that for both compute and acceleration, particularly in the data center, the hardware has to be as agile as the software.

    This makes little sense, given that hardware development doesn't cater to the adaptive, evolving nature of an agile development philosophy, but I suppose throwing everything and the kitchen sink at a problem is as close as you can get to an 'agile' solution in hardware development.

    >In recent product cycles, Xilinx has bundled new features to its FPGA line, such as hardened memory controllers supporting HBM, and embedded Arm Cortex cores to for application-specific programmability.

    First page, after "Arm Cortex cores": that should be a "too", not a "to".
  • davegraham - Monday, March 19, 2018 - link

    Actually, the adaptive nature of hardware is becoming more and more interesting. They blew right by it in the article, but with the introduction of CCIX you will start to see the ability to have coherency within a system (similar to, but slightly different from, Torrenza from a while back) for these plug-in accelerators. Establishing this level of "fairness" and coherency amongst accelerators and giving them precedence (esp. on AMD-driven x86 compute ;) ) will allow the development of much more agile hardware. You could also think of driving coherency through, let's say, CCIX tunneled through Gen-Z. ;)
  • iwod - Monday, March 19, 2018 - link

    1. Get it on AWS
    2. Get Netflix to contribute on codec encoding.
    3. Get Limelight to try to figure this out for CDN.
    4. Partner with AMD EPYC
  • ZolaIII - Monday, March 19, 2018 - link

    Well, an FPGA is like a clean sheet of paper: you can write whatever you want on it, erase it, and write something else, so FPGAs are universal accelerators in the truest sense of the word. Paired with enough RAM (which current HBM still cannot provide) they become suitable for large data sets such as scientific ones, but they need direct low-latency RAM for real usability. So this is just another small step in the right direction, and as far as I understand it this still won't be a 100% autonomous, self-standing solution.

    For that, a sizable number of general-purpose cores is required (four is perfectly enough), not weak ones but not big HPC ones either; custom server ARM cores would fit in perfectly (which is why I don't understand the statement that they don't need them). It will also require a powerful enough GPU (mobile licensable ones are perfectly fine) that can meet the need for detailed, accurate 3D model representation, not a huge desktop one, which would be overkill in efficiency and many other things. As I understand it, they didn't put anything like that in; a basic 2D block won't handle anything more than a base interface and displaying results as numbers. So this is another step in the right direction, but we still won't get there with this.

    The main advantage of the FPGA is that it can execute a medium number of varied tasks simultaneously, by applying a couple of designs to partitions of the programmable area (an ASIC for this, an ASIC for that, or a real neural network, a multiplier... basically anything, as long as it fits), and, as soon as a task is done, swapping in the most suitable designs for the new tasks and reprogramming it, again part by part (a rough sketch of that idea follows after this comment). Best of all, it's never outdated, because you can always program newer and more refined algorithms. Intel took a somewhat different approach by adding a limited-area FPGA to a many-core HPC Xeon, so that FPGA remains only a second-league player, big enough to hold a couple of smaller ASIC-like designs and suitable only for fast-switching uses such as networking. Still, maybe that changes, and their development of an in-house GPU brings them a step closer to making it autonomous and self-standing; if nothing else it will simplify interconnecting the GPU. Interestingly enough, there still isn't any player in the industry that can put it all together by itself: Intel has CPU designs and an FPGA but lacks a GPU, QC has a CPU and GPU but no FPGA, and Xilinx has only an FPGA. Although IP licensing (say, PowerVR graphics) would cover the GPU need, Xilinx still can't license powerful enough CPU cores, since reference ARM designs aren't there yet (that's why vendors make custom designs in the first place, especially server-suitable ones). But who knows, maybe this changes in the near future.

    In the end, even when a suitable autonomous platform appears as an SoC, it will at first only be a developer platform for the scientific and commercial communities. It will take some time before it becomes useful to secondary developers (i.e. programmers), and only after that to the general (consumer) public. Nevertheless, this is the way things will have to go, as we simply can't keep adding more and more dead dark silicon, no matter how much someone lies to present it as the most optimized and best-suited option.
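
    To make that partition-by-partition idea concrete, here is a rough, entirely hypothetical C++ sketch of a scheduler that swaps pre-built accelerator designs in and out of reconfigurable regions. The load_partial_bitstream() call and the .pbit file names are placeholders standing in for whatever the real vendor flow is (ICAP/PCAP, Linux fpga-manager, etc.), not an actual API:

        #include <iostream>
        #include <map>
        #include <queue>
        #include <string>

        // Placeholder for the vendor/driver call that loads a partial bitstream
        // into one reconfigurable region; hypothetical, for illustration only.
        void load_partial_bitstream(int region, const std::string& bitstream) {
            std::cout << "region " << region << " <- " << bitstream << "\n";
        }

        struct Task { std::string kind; };   // "fft", "conv", "crypto", ...

        int main() {
            // Assumed mapping from task type to a pre-built partial bitstream.
            const std::map<std::string, std::string> designs = {
                {"fft",    "fft_region.pbit"},     // file names are illustrative
                {"conv",   "conv_region.pbit"},
                {"crypto", "crypto_region.pbit"},
            };

            std::queue<Task> work;
            for (const char* k : {"fft", "fft", "conv", "crypto"}) work.push({k});

            const int regions = 2;               // two reconfigurable partitions
            std::map<int, std::string> loaded;   // what each region currently holds

            // Naive policy: reuse a region that already holds the needed design,
            // otherwise reprogram the next region round-robin, then "run" the task.
            int next_region = 0;
            while (!work.empty()) {
                Task t = work.front(); work.pop();
                int r = -1;
                for (const auto& [region, kind] : loaded)
                    if (kind == t.kind) { r = region; break; }
                if (r < 0) {
                    r = next_region;
                    next_region = (next_region + 1) % regions;
                    load_partial_bitstream(r, designs.at(t.kind));
                    loaded[r] = t.kind;
                }
                std::cout << "running " << t.kind << " on region " << r << "\n";
            }
            return 0;
        }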
  • Threska - Monday, March 19, 2018 - link

    I see what they're trying and wish them well. However, one of the biggest issues is FPGAs keeping up with everything else. The other is having enough people with the needed skillset; programming computers is different from programming FPGAs.
  • ZolaIII - Monday, March 19, 2018 - link

    That's why adoption, once they actually produce a complete SoC with a dominant FPGA, will happen in three stages: an engineering/scientific one, one adapted to high-level symbolic programming, and a consumer one as the last and final stage, although the first stage will be a never-ending one.
  • modport0 - Tuesday, March 20, 2018 - link

    I wonder what the power consumption range of these is. It seems that Xilinx is going for the high end. Outside of data centers (which are also concerned about power consumption), FPGAs are typically used for prototyping or other low-volume applications.

    From what I hear in murmurs during conferences/conventions, despite all the PR, MS (which uses Intel FPGAs) and others are struggling to justify continued use of FPGAs in data centers.
