Nvidia Announces the New TITAN V Volta GPU with HBM2 Memory


Nvidia released a powerful Pascal-based GPU named TITAN X back in July 2016. That card promised users around 11 teraflops of compute performance. Earlier this year, Nvidia refreshed it as the TITAN Xp, lifting compute performance to roughly 12 teraflops. The company also promised a GPU based on the all-new Volta architecture by the end of this year, and it has now made good on that promise with the newly announced Nvidia TITAN V graphics card.

Nvidia made the announcement at the Conference on Neural Information Processing Systems (NIPS) held in Long Beach, California. As mentioned above, this new GPU is based on the Volta architecture, which takes compute performance to a whole new level. According to the company, the GPU's streaming multiprocessor has been redesigned with independent floating-point and integer data paths and new Tensor Cores, which Nvidia says increases deep-learning performance by up to 9x compared to the previous TITAN Xp. The newly announced GPU offers 110 teraflops of compute for deep learning and 14 teraflops of single-precision performance.

The Nvidia TITAN V was announced by Jensen Huang, the company's founder and CEO, at the NIPS conference. The GPU is built on a 12 nm process, packs Nvidia's latest technologies, and promises excellent energy efficiency as well.

Jensen Huang said, “Our vision for Volta was to push the outer limits of high-performance computing and AI. We broke new ground with its new processor architecture, instructions, numerical formats, memory architecture and processor links.” He added, “With TITAN V, we are putting Volta into the hands of researchers and scientists all over the world. I can’t wait to see their breakthrough discoveries.”

Looking at the GPU in a bit more detail, the TITAN V is built around the GV100 GPU, which includes 320 texture units and 5120 CUDA cores. That CUDA core count is the same as in the Tesla V100, and like the V100 the card also carries 640 Tensor Cores. The GPU delivers up to 110 teraflops of compute, a figure aimed squarely at deep-learning and AI workloads. The core is clocked at 1200 MHz and can boost up to 1455 MHz.
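As a sanity check on those headline figures, here is a quick back-of-envelope sketch in Python. It assumes the standard Volta rates of 2 FLOPs per CUDA core per cycle (one fused multiply-add) and 128 FLOPs per Tensor Core per cycle; the estimates land slightly above Nvidia's quoted 14 and 110 teraflops because the official figures are rated at a lower reference clock than the peak boost.

```python
# Back-of-envelope peak throughput from the quoted TITAN V specs.
# Assumes 2 FLOPs per CUDA core per cycle (one fused multiply-add)
# and 128 FLOPs per Tensor Core per cycle -- the standard Volta
# rates. Nvidia's official 14/110 TFLOPS numbers use a lower
# reference clock, so these estimates come out slightly higher.

CUDA_CORES = 5120
TENSOR_CORES = 640
BOOST_CLOCK_HZ = 1455e6  # 1455 MHz boost clock

fp32_tflops = CUDA_CORES * 2 * BOOST_CLOCK_HZ / 1e12
tensor_tflops = TENSOR_CORES * 128 * BOOST_CLOCK_HZ / 1e12

print(f"Peak FP32:   {fp32_tflops:.1f} TFLOPS")    # ~14.9 TFLOPS
print(f"Peak tensor: {tensor_tflops:.1f} TFLOPS")  # ~119.2 TFLOPS
```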

The Volta GPU is not the only thing on offer. The card also comes with 12 GB of HBM2 memory, making this the first of Nvidia's TITAN cards to feature HBM2. The memory runs at a data rate of 1.7 Gbps per pin on a 3072-bit bus, offering a total bandwidth of 652.8 GB per second. That is more than the TITAN Xp delivers, although compared with the Tesla V100 Nvidia has trimmed the memory bus from 4096-bit to 3072-bit and the memory capacity from 16 GB to 12 GB. The card is built to handle just about anything: heavy games, professional workloads, Ultra HD video and more.
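The quoted bandwidth follows directly from the per-pin data rate and the bus width; a minimal check in Python:

```python
# Verifying the quoted HBM2 bandwidth: per-pin data rate times
# bus width, divided by 8 bits per byte.

DATA_RATE_GBPS = 1.7   # per-pin transfer rate, Gbps
BUS_WIDTH_BITS = 3072  # memory bus width

bandwidth_gbs = DATA_RATE_GBPS * BUS_WIDTH_BITS / 8
print(f"Total bandwidth: {bandwidth_gbs:.1f} GB/s")  # 652.8 GB/s
```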

As for the design, the TITAN V uses the same NVTTM cooler we saw on the GeForce 10-series Pascal cards. The only visual difference is the name etched on the front of the card. The TITAN V comes in a gold die-cast aluminum body with a vapor-chamber cooler for the best possible thermals. Its 16-phase DrMOS power design includes thermal monitoring as well as integrated real-time current monitoring.

Jeremy Antoniuk, COO of Deep Cognition, said, “As an NVIDIA Inception partner, we trust NVIDIA to deliver innovative solutions to the AI market. Today, their solutions further evolve to better support an important phase of AI development: experimentation.” He added, “With TITAN V and NGC, Deep Cognition’s Deep Learning Studio has built a powerful desktop solution for AI development, iteration, and training.”

