Rambus, Microsoft Put DRAM Into Deep Freeze To Boost Performance

@tachyeonz : Energy efficiency and operating costs for systems are as important as raw performance in today’s datacenters.

Read More

Build a super fast deep learning machine for under $1,000

@tachyeonz : Yes, you can run TensorFlow on a $39 Raspberry Pi, and yes, you can run TensorFlow on a GPU-powered EC2 node for about $1 per hour.
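
Neither claim needs special code; TensorFlow picks its device when the session starts. A minimal sketch (assuming TensorFlow 1.x, current at the time) that logs whether each op landed on the Pi's CPU or the EC2 node's GPU:

    import tensorflow as tf

    # log_device_placement reports which device each op ran on, so the same
    # script shows CPU placement on a Raspberry Pi and GPU placement on EC2.
    config = tf.ConfigProto(log_device_placement=True)
    with tf.Session(config=config) as sess:
        a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
        b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
        print(sess.run(tf.matmul(a, b)))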

Read More

Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning

@tachyeonz : Deep learning is a field with intense computational requirements and the choice of your GPU will fundamentally determine your deep learning experience.
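
Before (or after) buying, it is worth checking what your framework actually sees. One way, assuming TensorFlow 1.x and its semi-internal device_lib helper:

    from tensorflow.python.client import device_lib

    # Lists every device TensorFlow can use; memory_limit is in bytes,
    # so it shows how much of the card's VRAM is actually available.
    for dev in device_lib.list_local_devices():
        print(dev.name, dev.device_type, dev.memory_limit)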

Read More

Can FPGAs Beat GPUs in Accelerating Next-Generation Deep Learning?

@tachyeonz : Continued exponential growth in digital data (images, video, and speech) from sources such as social media and the internet of things is driving the need for analytics that make the data understandable and actionable. Data analytics often rely on machine learning (ML) algorithms.

Read More

GPUs are now available for Google Compute Engine and Cloud Machine Learning

@tachyeonz : The new Google Cloud GPUs are tightly integrated with Google Cloud Machine Learning (Cloud ML), helping you slash the time it takes to train machine learning models at scale using the TensorFlow framework.
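
From the TensorFlow side, that integration is just device placement. A sketch, assuming TensorFlow 1.x, where '/gpu:0' names the first attached accelerator:

    import tensorflow as tf

    # Pin the expensive matmul to the GPU explicitly.
    with tf.device('/gpu:0'):
        a = tf.random_normal([2048, 2048])
        b = tf.random_normal([2048, 2048])
        c = tf.matmul(a, b)

    # allow_soft_placement falls back to CPU on machines with no GPU.
    with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
        sess.run(c)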

Read More

Announcing TensorFlow Fold: Deep Learning With Dynamic Computation Graphs

@tachyeonz : In much of machine learning, data used for training and inference undergoes a preprocessing step, where multiple inputs (such as images) are scaled to the same dimensions and stacked into batches.
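
For contrast with Fold's dynamic graphs, here is that static preprocessing step as a hypothetical sketch (batch_fixed and the 224x224 target are illustrative, not from the announcement):

    import numpy as np

    def batch_fixed(images, size=(224, 224)):
        # Naive static batching: crop or zero-pad every image to one shape,
        # then stack into a single (batch, height, width, channels) array.
        out = []
        for img in images:
            canvas = np.zeros(size + img.shape[2:], dtype=img.dtype)
            h = min(img.shape[0], size[0])
            w = min(img.shape[1], size[1])
            canvas[:h, :w] = img[:h, :w]
            out.append(canvas)
        return np.stack(out)

Fold's point is that inputs such as parse trees or variable-length sequences cannot be coerced into one shape this way, so it builds a computation graph per input structure instead.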

Read More

“Not so fast, FFT”: Winograd

@tachyeonz : Deep learning thrives on speed. Faster training enables the construction of larger and more complex networks to tackle new domains such as speech or decision making.
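
The Winograd approach of the title trades multiplications for additions. A minimal sketch of the smallest case, F(2,3), which produces two outputs of a 3-tap filter with 4 multiplies where the direct method needs 6:

    import numpy as np

    def winograd_f23(d, g):
        # Winograd minimal filtering F(2,3): 4 input samples d, 3 filter
        # taps g, two outputs, only 4 multiplies involving the data.
        m1 = (d[0] - d[2]) * g[0]
        m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
        m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
        m4 = (d[1] - d[3]) * g[2]
        return np.array([m1 + m2 + m3, m2 - m3 - m4])

    d = np.array([1.0, 2.0, 3.0, 4.0])
    g = np.array([0.5, 1.0, -0.5])
    print(winograd_f23(d, g))                     # [1. 2.]
    print(np.convolve(d, g[::-1], mode='valid'))  # same answer, direct method

Convolutional layers apply the 2D version, F(2x2, 3x3), tile by tile, and the filter-side transforms are computed once per layer; that is where the advantage over FFT-based convolution for small 3x3 filters comes from.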

Read More

Cloud 3.0: The Rise of Big Compute

@tachyeonz : As we enter 2017, the enterprise software industry is at an inflection point: ubiquitous cloud adoption as part of the $4 trillion enterprise IT market transformation.

Read More

The lost art of 3D rendering without shaders

@tachyeonz : To draw 3D graphics today you might use a framework such as OpenGL or Metal. That involves writing one or more vertex shaders to transform your 3D objects, and one or more fragment shaders to draw those transformed objects on the screen.
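
Doing that stage by hand is mostly one projection. A toy sketch in plain Python (the function and its parameters are illustrative, not the article's code):

    import numpy as np

    def project(vertex, width=320, height=240, fov=90.0):
        # What a vertex shader normally does: camera-space point ->
        # perspective divide -> pixel coordinates, origin at top-left.
        f = 1.0 / np.tan(np.radians(fov) / 2.0)
        x, y, z = vertex
        sx = (f * x / z + 1.0) * width / 2.0
        sy = (1.0 - f * y / z) * height / 2.0
        return sx, sy

    print(project((1.0, 1.0, 5.0)))  # a point 5 units in front of the camera

The fragment-shader stage then reduces to filling the triangle between three such projected points yourself, which is the kind of software rasterization the title refers to.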

Read More

Five Things To Watch In AI And Machine Learning In 2017

@tachyeonz : Without a doubt, 2016 was an amazing year for Machine Learning (ML) and Artificial Intelligence (AI). During the year, we saw nearly every high-tech CEO claim the mantle of "AI Company".

Read More

AI Is Super Hot, so Where Are All the Chip Firms That Aren’t Named Nvidia?

@tachyeonz : This week, several of the big brains in artificial intelligence met in New York to discuss technical and societal challenges associated with building "intelligent" machines.

More

Tags : ai, amd, arm processors, artificial intelligence, baidu, chip, deep learning, gpu, ibm, intel, manufacturers, nervana, neural networks, nvidia

Published On: January 02, 2017 at 05:49 PM

Nvidia calls out Intel for cheating in Xeon Phi vs. GPU benchmarks

@tachyeonz : Nvidia has called out Intel for juicing its chip performance in specific benchmarks—accusing Intel of publishing some incorrect “facts” about the performance of its long-overdue Knights Landing Xeon Phi cards.

Click here to read more

Tags : #analytics, #artificialintelligence, #benchmarking, #datascience, #gpu, #machinelearning

Published On: August 23, 2016 at 02:53 PM
