02:12PM EDT - And that's a wrap. We'll have more details as the show continues

01:49PM EDT - Final announcement: NV is giving everyone a free SHIELD

01:48PM EDT - All of this is CUDA

01:45PM EDT - Nothing today about Maxwell for Tesla?

01:45PM EDT - Jen-Hsun starting his wrap-up

01:44PM EDT - Swappable module

01:43PM EDT - Connect's computer is based on Tegra K1

01:42PM EDT - The Audi Connect

01:42PM EDT - Audi brought a car. It's driving itself on stage

01:40PM EDT - Audi is using Tegra SoCs for their car's collision sensors

01:39PM EDT - Now on stage: Andreas Reich, head of pre-development for Audi

01:38PM EDT - Erista seems to be the son of Wolverine (i.e. Logan): http://www.comicvine.com/erista/4005-60735/lists/

01:37PM EDT - Erista is Maxwell based. Scheduled for 2015

01:37PM EDT - Next generation Tegra announced. Erista

01:37PM EDT - VisionWorks is a computer vision software package. Does vision, computational photography, augmented reality, and more

01:36PM EDT - Jetson will come with NVIDIA's new VisionWorks middleware

01:36PM EDT - NVIDIA normally names mobile products after comic book heroes. Not sure where "Jetson" fits in

01:35PM EDT - $192

01:35PM EDT - Tegra K1 on a board for development

01:34PM EDT - New development kit: Jetson TK1

01:32PM EDT - What to do with a CUDA-capable K1? Computer vision
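
A toy illustration of the kind of work involved (our sketch, not NVIDIA code): computer vision boils down to huge numbers of independent per-pixel operations, which is exactly what a CUDA-capable GPU parallelizes. A 3x3 box blur in plain Python shows the shape of the workload; on a K1, each output pixel could be computed by its own GPU thread.

```python
def box_blur(image):
    """Average each interior pixel with its 8 neighbors (3x3 window)."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = sum(image[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = total / 9
    return out

img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
blurred = box_blur(img)
print(blurred[1][1])  # 4 of the 9 window cells are 9 -> 36/9 = 4.0
```

Every pixel's result is independent of every other's, so the loop nest maps directly onto thousands of GPU threads.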

01:30PM EDT - Reiterating Kepler K1, and how it can run CUDA

01:30PM EDT - Discussing how the top 10 computers on the Green 500 list are Kepler powered

01:29PM EDT - Now up: Mobile CUDA

01:26PM EDT - Various VMware platforms (ESX, Horizon, Horizon DaaS) now run on GRID

01:24PM EDT - Now on stage: VMware

01:21PM EDT - Up next: GRID and GPU cloud computing

01:20PM EDT - Discussing uses for the box. Virtual showrooms, etc

01:18PM EDT - Iray VCA: $50,000

01:14PM EDT - Rendering a Honda car on 19 Iray VCAs

01:12PM EDT - Demoing Iray VCA

01:11PM EDT - Looks like a variant of NVIDIA's existing GRID VCA hardware

01:11PM EDT - Full GK110 GPUs (2880 CUDA cores each)

01:11PM EDT - 8 Kepler GPUs, 12GB per GPU. Connectivity: GigE, 10GigE, and InfiniBand

01:10PM EDT - New appliance: Iray VCA. Scalable rendering appliance

01:10PM EDT - Next demo: Iray, photorealistic rendering

01:08PM EDT - Running on Titan Z

01:07PM EDT - Real time demo. Was not a recorded demo

01:06PM EDT - Real simulation. Real destruction. Real violence

01:05PM EDT - Unreal Engine 4 demo. Follow-up to last week's announcement of GameWorks integration

01:03PM EDT - Kinematics, fluids, and voxels

12:59PM EDT - Name: Flex

12:59PM EDT - Next: real time unified physics solver

12:57PM EDT - Another demo. Showcasing water rendering with a physical simulation behind it

12:55PM EDT - What to use Titan Z for? Graphics isn't enough, need to do physics simulations to make the underlying model more accurate

12:54PM EDT - 5760 CUDA cores and 12GB of VRAM means we're looking at fully enabled GK110 GPUs
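
Quick back-of-the-envelope math on those specs (the clock speed below is our assumption, not an announced figure):

```python
# Two fully enabled GK110s account for the quoted totals.
GK110_CORES = 2880              # full GK110: 15 SMX x 192 cores
cores_total = 2 * GK110_CORES   # 5760
vram_total_gb = 2 * 6           # 12GB, 6GB per GPU

# Theoretical single-precision throughput, assuming a hypothetical
# ~700MHz base clock (2 FLOPs per core per cycle via FMA):
clock_ghz = 0.7
tflops = cores_total * 2 * clock_ghz / 1000
print(cores_total, vram_total_gb, round(tflops, 1))  # 5760 12 8.1
```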

12:53PM EDT - Promoting it to the GTC crowd as a supercomputer in a PCIe form factor (though no doubt you'd be able to game on it if you really wanted to)

12:52PM EDT - Uses a blower design similar to GTX 690. 1 central fan, split blower with half the hot air going out the front, and the other half out the back

12:52PM EDT - 5760 CUDA Cores, 12GB VRAM, $3000

12:51PM EDT - GeForce GTX Titan Z

12:51PM EDT - Dual GPU time

12:51PM EDT - Video time

12:50PM EDT - Titans are selling like hotcakes. Cheap double precision performance that makes it more affordable and more available than Tesla

12:49PM EDT - Reiterating GTX Titan

12:49PM EDT - And now graphics

12:49PM EDT - Up next: big data

12:48PM EDT - Wrapping up discussion of machine learning

12:45PM EDT - Now training a neural net with dogs. Can it identify dogs and their breeds?
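
For anyone unfamiliar with the term, "training" here means iteratively adjusting weights until the model classifies its inputs correctly. A single perceptron on two toy features is the smallest possible sketch of the idea (nothing like the scale of this demo, which needs GPUs precisely because real networks are enormous):

```python
# Minimal perceptron training loop: nudge the weights toward
# each misclassified sample until the two classes separate.
def train(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Two toy "features" standing in for image data; label 1 = "dog"
samples = [(0.1, 0.2), (0.2, 0.1), (0.8, 0.9), (0.9, 0.8)]
labels = [0, 0, 1, 1]
w, b = train(samples, labels)
correct = sum((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == y
              for (x1, x2), y in zip(samples, labels))
print(correct)  # 4 -> all toy samples classified correctly
```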

12:42PM EDT - Green vs. Red...

12:40PM EDT - Ferraris and NVIDIA products. Yep, it's a Jen-Hsun keynote.

12:39PM EDT - Neural network training demo

12:38PM EDT - Now discussing how Stanford built their own brain using GPUs. 4KW and $33K
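
The contrast with the "Google Brain" figures ($5M, 600KW) is the whole point of the slide; running the arithmetic on the numbers as presented:

```python
google_cost, google_power_kw = 5_000_000, 600  # "Google Brain" cluster
stanford_cost, stanford_power_kw = 33_000, 4   # Stanford's GPU system

print(round(google_cost / stanford_cost))    # 152 -> ~150x cheaper
print(google_power_kw / stanford_power_kw)   # 150.0 -> 150x less power
```

Roughly two orders of magnitude on both cost and power for comparable work, as presented.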

12:36PM EDT - (Obligatory Yoda/Yotta joke)

12:35PM EDT - ~30 ExaFLOPS versus ~150 YottaFLOPS

12:34PM EDT - The human brain is still orders of magnitude more complex and connected

12:34PM EDT - "Google Brain" supercomputer. $5mil USD, 600KW, for researching computer vision

12:33PM EDT - Even at a technical conference, we're still worshiping pictures of cats on (projected) walls

12:31PM EDT - Still discussing use cases. On computer vision at the moment, talking about Stanford and Google's research into the matter

12:30PM EDT - The Pascal announcement follows NVIDIA's earlier Volta announcement from last year. Stacked memory was already on the schedule. NVLink is new, however

12:29PM EDT - Not entirely clear how the prototype connects to buses though (we haven't seen the underside)

12:28PM EDT - The Pascal prototype appears to be almost all power regulation circuitry, other than the chip itself. No memory is visible off the package

12:27PM EDT - Now discussing the kinds of computational tasks they have in mind for Pascal

12:26PM EDT - Maxwell is still on the schedule though. Maxwell will be 2014, Pascal will be 2016. So everything we're seeing so far is 2 years off

12:26PM EDT - Volta is missing from NV's new GPU schedule. Volta = Pascal?

12:25PM EDT - Uses NVLink and 3D memory. Memory bandwidth and capacity will be 2-4x that of prior GPUs

12:25PM EDT - So NVIDIA already has silicon, if this isn't being faked

12:24PM EDT - Sample unit is on a card 1/3rd the size of a PCIe card

12:23PM EDT - http://en.wikipedia.org/wiki/Blaise_Pascal

12:23PM EDT - Chip name: Pascal. Named after Blaise Pascal

12:22PM EDT - "In just a couple of years we're going to take bandwidth to a whole new level"

12:22PM EDT - Capacity has yet to be discussed though. Might this end up similar to an additional cache level, à la Intel's Crystalwell for CPUs?

12:21PM EDT - 3D memory will use Through-Silicon Vias (TSVs) to connect the stacked DRAM dies to each other

12:21PM EDT - Should improve both bandwidth and energy efficiency. Ultra-wide memory interface (thousands of bits wide)
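
The arithmetic behind the ultra-wide approach (illustrative numbers on our part; NVIDIA hasn't given specs):

```python
def bandwidth_gbs(width_bits, transfer_rate_gtps):
    """Peak bandwidth in GB/s = bus width in bytes x transfers per second."""
    return width_bits / 8 * transfer_rate_gtps

# Today's approach: a 384-bit GDDR5 bus at 6 GT/s (GTX Titan class)
print(bandwidth_gbs(384, 6))    # 288.0 GB/s

# Stacked approach: a hypothetical 4096-bit interface at just 1 GT/s
print(bandwidth_gbs(4096, 1))   # 512.0 GB/s, at a far lower pin speed
```

Going wide at low clocks is also where the energy efficiency comes from: driving each pin slower costs less power per bit transferred.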

12:20PM EDT - Solution: 3D packaging. Stack memory with the chip

12:19PM EDT - How do you increase memory bandwidth when you're already running a wide GDDR5 memory bus?

12:19PM EDT - Next subject: memory bandwidth

12:19PM EDT - "Next generation GPU", not naming it so far

12:18PM EDT - 5-12x increase in bandwidth

12:18PM EDT - Discussing using it for both GPU-to-CPU and GPU-to-GPU communication

12:18PM EDT - Already discussing multiple generations. It sounds like they're looking at modifying PCI-Express to their needs

12:17PM EDT - Introducing NVLink. A chip to chip communication bus. Differential bus

12:17PM EDT - First announcement

12:16PM EDT - PCI-Express is only 16GB/sec, versus 288GB/sec for local GPU memory. A factor of 18
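
Putting the gap in numbers, along with what a 5-12x improvement over PCIe would translate to (the 80-192GB/sec range is our extrapolation from the quoted multipliers, not an announced spec):

```python
pcie_gbs = 16     # PCIe 3.0 x16, one direction
local_gbs = 288   # GDDR5 bandwidth on a GK110-class board

print(local_gbs / pcie_gbs)          # 18.0 -> the "factor of 18"
print(pcie_gbs * 5, pcie_gbs * 12)   # 80 192 -> NVLink's implied range
```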

12:16PM EDT - Referencing a paper that points out the various bottlenecks, including a lack of GPU-to-GPU bandwidth

12:15PM EDT - Now discussing bandwidth. FLOPs per byte

12:14PM EDT - Jen-Hsun says he'll be focusing on big data, cloud computing, and computer vision

12:12PM EDT - Largest GTC ever (again). Nearly 600 talks scheduled

12:11PM EDT - Discussing GPU/CPU synergy, how they specialize at different types of work, and how CUDA ties them together

12:10PM EDT - Jen-Hsun is welcoming the crowd and reiterating how Tesla and CUDA were the reason this conference was created in the first place

12:09PM EDT - Jen-Hsun is now on stage

12:07PM EDT - NVIDIA is starting with a promo reel of various devices and technologies powered by their products

12:06PM EDT - And here we go

12:05PM EDT - As a reminder, Maxwell's marquee feature is to be unified virtual memory, though we'd expect there to be additional features that have yet to be disclosed

12:03PM EDT - So Big Maxwell (aka Second Generation Maxwell) may make an appearance, to prep developers for Maxwell based Teslas

12:01PM EDT - Of course the big question is what we'll see of Maxwell this year. In 2012 NVIDIA released a ton of details on Big Kepler (GK110), despite the fact that it wouldn't ship to buyers until the end of the year

12:00PM EDT - Jen-Hsun will likely be grilled on the state of NVIDIA's SoC business. Tegra 4 has barely made a splash, and Tegra 4i (A9 + baseband) was famously pushed back last year to get Tegra 4 to market sooner

11:58AM EDT - For NVIDIA CEO Jen-Hsun Huang this will be a very tense day. Shortly after delivering the company's annual keynote, he will face investors at NVIDIA's annual investor meeting and update

11:57AM EDT - Expect status updates on all of NVIDIA's core businesses, especially Quadro/GRID and Tesla. There may be some consumer news too, but NV has already shown off their SoC plans at CES 2014, and GTC isn't normally big on consumer dGPU news

11:56AM EDT - Now as for today's keynote, it's scheduled to run for 2 hours, so there will be no shortage of material to talk about

11:55AM EDT - NVscene was last held in 2008, before NVIDIA had formed the more permanent GPU Tech Conference. It may end up being the biggest scene party in North America this year

11:54AM EDT - http://nv.scene.org/

11:54AM EDT - Also taking place this year is NVscene, NVIDIA's sponsored scene party, competition, and educational sessions

11:53AM EDT - And it looks like NVIDIA may just fill most of those spaces, judging by the map

11:52AM EDT - This year the San Jose Convention Center has finally finished renovations. So not only is there no pesky construction equipment in the way, but there's more room than ever for events

11:48AM EDT - Ryan is on text today, Anand is on image duty, we're seated and ready to go

  • blanarahul - Tuesday, March 25, 2014 - link

    Umm... Is Brian at the HTC Keynote?
  • extide - Tuesday, March 25, 2014 - link

    Wow, so they basically renamed Volta to Pascal, and moved Unified memory from Maxwell to Pascal/Volta. Interesting...
  • Ryan Smith - Tuesday, March 25, 2014 - link

    Maxwell was unified virtual memory. Big emphasis on virtual.
  • extide - Tuesday, March 25, 2014 - link

    Not according to this: http://images.anandtech.com/doci/7894/GTC-2014-021... ..?
  • blanarahul - Tuesday, March 25, 2014 - link

    Yeah. Maxwell's dedicated to new DX12 features and power efficiency I guess.
  • grahaman27 - Tuesday, March 25, 2014 - link

    According to previous roadmaps, Erista should be Maxwell based and utilize a 16nm FinFET process.
  • iMacmatician - Tuesday, March 25, 2014 - link

    I'd like to see 1 Yodaflop of performance.
  • psychobriggsy - Tuesday, March 25, 2014 - link

    12:48PM EDT - Wrapping up discussion of machine learning

    And much rejoicing was had.
  • Ken_g6 - Tuesday, March 25, 2014 - link

    Ack! I gagged when I saw the $3000 price tag for a GPU. I guess top-end prices keep climbing. :(
  • RealiBrad - Tuesday, March 25, 2014 - link

    It's not really a new top-end card; it's more of two markets merged into a single card. It's a workstation/gaming card.

    No doubt that some will buy them for epeen though.
