10 Algorithms Every Programmer Should Know – and When to Use Them

@tachyeonz : Programmers love algorithms. What’s an algorithm? Good question! In my academic days, we would have said, “An algorithm is a well-defined, self-contained process or set of rules to be followed in a data processing system.”
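
This excerpt doesn't reproduce the article's list, but as a concrete instance of that definition, here is a minimal sketch of one classic, well-defined algorithm, binary search (Python is used here for illustration only; the article itself is language-agnostic):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1.

    A textbook example of a well-defined, self-contained process:
    halve the search interval until the target is found or the
    interval is empty.
    """
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

assert binary_search([1, 3, 5, 7, 9], 7) == 3
```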


An overview of gradient descent optimization algorithms

@tachyeonz : Gradient descent is one of the most popular algorithms to perform optimization and by far the most common way to optimize neural networks. At the same time, every state-of-the-art Deep Learning library contains implementations of various algorithms to optimize gradient descent. The overview is written by Sebastian Ruder, a PhD student in Natural Language Processing and a research scientist at AYLIEN.
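
The survey walks from vanilla gradient descent through momentum, Adagrad, RMSprop, Adam, and related variants. As a baseline reference, here is a minimal sketch of plain gradient descent on a toy one-dimensional quadratic (all names and values are illustrative):

```python
def grad(w):
    # Gradient of the toy objective f(w) = (w - 3)^2.
    return 2.0 * (w - 3.0)

w = 0.0           # initial parameter
lr = 0.1          # learning rate (step size)
for _ in range(100):
    w -= lr * grad(w)    # step against the gradient

print(w)  # approaches the minimizer w = 3
```

The variants the article surveys mostly change how that step is scaled or accumulated, not the basic move-downhill loop above.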


TensorBoard: Visualizing Learning

@tachyeonz : The computations you’ll use TensorFlow for – like training a massive deep neural network – can be complex and confusing. To make it easier to understand, debug, and optimize TensorFlow programs, we’ve included a suite of visualization tools called TensorBoard.
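
A minimal sketch of wiring summaries into TensorBoard, assuming the TensorFlow 1.x API that was current when this was posted (the toy graph and log directory are illustrative):

```python
import tensorflow as tf  # TensorFlow 1.x API

# Toy graph: fit a scalar w so that 3*w approaches 6.
w = tf.Variable(0.0, name='w')
loss = tf.square(3.0 * w - 6.0)
train_op = tf.train.GradientDescentOptimizer(0.05).minimize(loss)

tf.summary.scalar('loss', loss)      # track the loss curve
merged = tf.summary.merge_all()

with tf.Session() as sess:
    # FileWriter saves the graph and summaries where TensorBoard can read them.
    writer = tf.summary.FileWriter('/tmp/tb_demo', sess.graph)
    sess.run(tf.global_variables_initializer())
    for step in range(100):
        summary, _ = sess.run([merged, train_op])
        writer.add_summary(summary, step)
    writer.close()

# Inspect with: tensorboard --logdir=/tmp/tb_demo
```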


The Kernel Cookbook:

@tachyeonz : If you’ve ever asked yourself: “How do I choose the covariance function for a Gaussian process?” this is the page for you. Here you’ll find concrete advice on how to choose a covariance function for your problem, or better yet, make your own.
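
As a concrete starting point, here is a minimal NumPy sketch of the squared-exponential (RBF) covariance, the common default the Cookbook builds from; the function name and parameterization are illustrative, not code from the page:

```python
import numpy as np

def squared_exponential(x1, x2, lengthscale=1.0, variance=1.0):
    """k(x, x') = variance * exp(-(x - x')^2 / (2 * lengthscale^2))."""
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale ** 2)

x = np.linspace(0.0, 1.0, 5)      # five 1-D inputs
K = squared_exponential(x, x)     # 5x5 GP covariance matrix
```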


Bayesian optimization with scikit-learn

@tachyeonz : Choosing the right parameters for a machine learning model is almost more of an art than a science. Kaggle competitors spend considerable time on tuning their model in the hopes of winning competitions, and proper model selection plays a huge part in that.
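
The article tunes hyperparameters with Gaussian-process-based Bayesian optimization. Below is a minimal sketch of that loop using scikit-learn's GaussianProcessRegressor and an expected-improvement rule; the one-parameter objective is a hypothetical stand-in for a cross-validated model score:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(c):
    # Hypothetical stand-in for a cross-validated score of a model
    # trained with hyperparameter c; higher is better.
    return -(c - 0.3) ** 2

def expected_improvement(candidates, gp, best_y):
    # EI for maximization: expected amount by which we beat best_y.
    mu, sigma = gp.predict(candidates.reshape(-1, 1), return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best_y) / sigma
    return (mu - best_y) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=3)          # a few random initial trials
y = np.array([objective(c) for c in X])
grid = np.linspace(0.0, 1.0, 200)          # candidate hyperparameters

for _ in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X.reshape(-1, 1), y)
    nxt = grid[np.argmax(expected_improvement(grid, gp, y.max()))]
    X = np.append(X, nxt)
    y = np.append(y, objective(nxt))

print("best hyperparameter:", X[np.argmax(y)])
```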

Published On: January 11, 2017 at 02:25AM



5 Techniques To Understand Machine Learning Algorithms Without the Background in Mathematics

@tachyeonz : Where does theory fit into a top-down approach to studying machine learning? In the traditional approach to teaching machine learning, theory comes first, requiring an extensive background in mathematics to understand it.


Tags: #algebra, #machinelearning, #mathematics, #matrices, #optimization, #statistics

Published On: November 27, 2016 at 10:51PM


Machine Learning Performance Improvement Cheat Sheet

@tachyeonz : The most valuable part of machine learning is predictive modeling. This is the development of models that are trained on historical data and make predictions on new data.
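
In scikit-learn terms, that train-on-historical-data, predict-on-new-data workflow is the familiar fit/predict pattern; a minimal sketch with toy data (not code from the cheat sheet itself):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical data: features X with known outcomes y (toy values).
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([0, 0, 1, 1])

model = LogisticRegression().fit(X, y)    # train on historical data
print(model.predict(np.array([[2.5]])))   # predict on new data
```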


Tags: #algorithms, #machinelearning, #models, #optimization, #statistics, #techniques

Published On: November 23, 2016 at 01:08AM


The AMPL Book

@tachyeonz : By Robert Fourer, David M. Gay, and Brian W. Kernighan. Written by the creators of AMPL, this book is a complete guide for modelers at all levels of experience.


Tags: #ampl, #ide, #language, #optimization, #programming

Published On: November 19, 2016 at 08:47AM


Learning Deep Features for Discriminative Localization

@tachyeonz : In this work, we revisit the global average pooling layer and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels.
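
A minimal NumPy sketch of the paper's class activation mapping idea, with hypothetical shapes and random stand-in activations: global average pooling (GAP) collapses each feature map to a scalar, a linear layer scores the classes, and projecting the winning class's weights back onto the maps yields a localization heatmap:

```python
import numpy as np

H, W, F, C = 7, 7, 64, 10                 # map size, #feature maps, #classes
feature_maps = np.random.rand(H, W, F)    # stand-in last-conv activations
class_weights = np.random.rand(F, C)      # linear classifier on top of GAP

gap = feature_maps.mean(axis=(0, 1))      # global average pooling: (F,)
scores = gap @ class_weights              # class scores: (C,)
c = int(scores.argmax())                  # predicted class

# Class activation map: re-weight the feature maps by the predicted
# class's weights; large values localize the evidence for that class.
cam = np.tensordot(feature_maps, class_weights[:, c], axes=([2], [0]))
print(cam.shape)                          # (7, 7) heatmap over the image
```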


Tags: #deeplearning, #neuralnetwork, #optimization

Published On: August 21, 2016 at 03:50AM
