Learn By Implementation – K-Nearest Neighbor

Welcome to the first entry in the Learn by Implementation series. The purpose of this series is to help you learn how to work with the tripod of ND4J, Canova, and DL4J by working through an implementation. Generally the code being implemented will be a known and well-understood algorithm (usually machine learning related), but I won't hold myself to such strict rules. Whatever follows this post in the series will always focus on explaining and exploring different aspects of the tripod. The typical flow will be to introduce the algorithm, walk through its main aspects, and then program it. The programming side will go into depth on various features of the three libraries (whichever are needed for the project) and discuss design decisions.

If you haven’t set up Deeplearning4j, check out this tutorial first!

K-Nearest Neighbor (KNN)

Figure: KNN architecture
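As a preview of where the post is headed, here is a minimal sketch of the core idea of KNN: compute the distance from a query point to every training point, take the k closest, and let them vote. This is my own illustration, not the implementation developed in the post; it assumes a reasonably recent ND4J API, brute-force Euclidean distance, and integer class labels.

    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.factory.Nd4j;

    import java.util.HashMap;
    import java.util.Map;

    public class SimpleKnn {

        // Predict the label of a single query row by majority vote among the k closest training rows.
        static int predict(INDArray trainFeatures, int[] trainLabels, INDArray query, int k) {
            int n = (int) trainFeatures.size(0);
            double[] distances = new double[n];
            for (int i = 0; i < n; i++) {
                // Euclidean (L2) distance between the query and training row i
                distances[i] = trainFeatures.getRow(i).distance2(query);
            }

            // Pick the k smallest distances (simple selection; fine for small data sets)
            boolean[] used = new boolean[n];
            Map<Integer, Integer> votes = new HashMap<>();
            for (int j = 0; j < k; j++) {
                int best = -1;
                for (int i = 0; i < n; i++) {
                    if (!used[i] && (best == -1 || distances[i] < distances[best])) best = i;
                }
                used[best] = true;
                votes.merge(trainLabels[best], 1, Integer::sum);
            }

            // Majority vote among the k neighbours
            return votes.entrySet().stream()
                    .max(Map.Entry.comparingByValue())
                    .get().getKey();
        }

        public static void main(String[] args) {
            INDArray train = Nd4j.create(new double[][]{{0, 0}, {0, 1}, {5, 5}, {6, 5}});
            int[] labels = {0, 0, 1, 1};
            INDArray query = Nd4j.create(new double[]{5.5, 4.8});
            System.out.println("Predicted class: " + predict(train, labels, query, 3));
        }
    }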


DL4J (Deeplearning for Java) – Getting Started

UPDATE: Hey guys, this tutorial has aged poorly when it comes to working with the newest version(s) of DL4J. The descriptive material found here is still fine (though dated). Here’s a small and quick update to get started.

ND4J, Canova & DL4J

I’m just going to take a quick moment to break down these three libraries and identify what they are. If you are interested in just getting started, read ahead.

  1. Introduction to the Deeplearning tripod
  2. Getting started
    1. Set-up using the command prompt
    2. Set-up using Eclipse

ND4J

ND4J is an N-dimensional array scientific computing library for Java, meant to rival offerings like NumPy. According to the ND4J page, it can outperform NumPy with the right back end. In short, it’s an easy-to-use API that operates at high efficiency without ever being locked to one linear algebra library.
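To give a quick feel for how the API reads (my own sketch, not from the original post), here is a tiny example of creating arrays and doing element-wise and matrix operations with ND4J:

    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.factory.Nd4j;

    public class Nd4jBasics {
        public static void main(String[] args) {
            INDArray a = Nd4j.ones(2, 3);                 // 2x3 array of ones
            INDArray b = Nd4j.rand(2, 3);                 // 2x3 array of uniform random values

            INDArray sum = a.add(b);                      // element-wise addition, numpy-style
            INDArray product = sum.mmul(sum.transpose()); // matrix multiply: (2x3)·(3x2) = 2x2

            System.out.println(sum);
            System.out.println(product);
        }
    }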

Back End?

That’s right: ND4J itself is a consolidation of well-established and highly optimized BLAS (Basic Linear Algebra Subprograms) implementations under a unified API. What does that mean for you? It means you can switch between various linear algebra libraries without ever having to touch your code. It means you can port code that runs on ND4J to the GPU with a simple swap in the Maven configuration file (pom.xml; we’ll talk more about this later).
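As an illustration (not part of the original post), the backend is chosen by which ND4J dependency you declare in pom.xml. The artifact IDs and the ${nd4j.version} property below are examples only and have changed across ND4J releases, so check the current ND4J documentation for the right ones:

    <!-- CPU backend (illustrative artifact IDs; they vary across ND4J versions) -->
    <dependency>
        <groupId>org.nd4j</groupId>
        <artifactId>nd4j-native-platform</artifactId>
        <version>${nd4j.version}</version>
    </dependency>

    <!-- Swap to a CUDA backend to run the same code on the GPU, e.g.: -->
    <!--
    <dependency>
        <groupId>org.nd4j</groupId>
        <artifactId>nd4j-cuda-10.2-platform</artifactId>
        <version>${nd4j.version}</version>
    </dependency>
    -->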

Visit the ND4J webpage for more information.

Canova

Canova exists to take raw data and convert it into standardized vector formats that are easily loaded into machine learning pipelines. By raw data, think of images, sound, video, and so forth. This library is still under development, but it serves as an important piece of the tripod of deep learning with Java.
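For a rough feel of what that looks like, here is a small sketch of my own (not from the post) that reads a CSV file record by record with Canova's record reader. The file path is hypothetical, and the package names changed when Canova was later folded into DataVec, so treat this as an approximation:

    import org.canova.api.records.reader.RecordReader;
    import org.canova.api.records.reader.impl.CSVRecordReader;
    import org.canova.api.split.FileSplit;

    import java.io.File;

    public class CanovaSketch {
        public static void main(String[] args) throws Exception {
            // Each record comes back as a collection of Writable values, ready for a pipeline
            RecordReader reader = new CSVRecordReader();
            reader.initialize(new FileSplit(new File("data/iris.csv"))); // hypothetical path

            while (reader.hasNext()) {
                System.out.println(reader.next());
            }
        }
    }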

DL4J

Deeplearning4j, or Deeplearning for Java, is a comprehensive deep learning offering for Java. It attempts to fill the role that Torch fills for Lua, or Theano for Python. Comparing these libraries directly may not be fair, given their different life spans, but it’s definitely a way to think about them. DL4J builds upon the ND4J offerings, which means that any algorithm in DL4J can be configured to use various BLAS backends, i.e. GPU support out of the box.

One of the more attractive features of DL4J is how configurable it is. Most, if not all, major neural network architectures have been implemented, and their flexibility is what data scientists and machine learning enthusiasts dream of. What’s more, the networks can be piped into each other with relative ease. Really, there is too much to talk about, and much of it I have yet to fully explore. So let’s jump into getting set up, and afterwards you can join me as I continue to dig into this framework.
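Before we do, here is a quick taste of that configuration style: a small, hypothetical multilayer perceptron built with DL4J's builder API. This is my own sketch against a fairly recent version of the library; the builder methods have shifted somewhat across releases:

    import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.layers.DenseLayer;
    import org.deeplearning4j.nn.conf.layers.OutputLayer;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.learning.config.Adam;
    import org.nd4j.linalg.lossfunctions.LossFunctions;

    public class Dl4jSketch {
        public static void main(String[] args) {
            // A small fully connected network: 784 inputs -> 100 hidden units -> 10 classes
            MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                    .seed(123)
                    .updater(new Adam(1e-3))
                    .list()
                    .layer(new DenseLayer.Builder().nIn(784).nOut(100)
                            .activation(Activation.RELU).build())
                    .layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                            .nIn(100).nOut(10)
                            .activation(Activation.SOFTMAX).build())
                    .build();

            MultiLayerNetwork model = new MultiLayerNetwork(conf);
            model.init();
            System.out.println(model.summary());
        }
    }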

One last thing to note: if the ND4J backend scalability was not already attractive enough, DL4J offers integration hooks for Hadoop and Spark. The idea, as it’s described, is to prototype locally, then scale your pipeline with Hadoop and/or Spark with little to no modification. The library is built with distributed computing in mind.
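As a sketch of what that scaling step can look like (again my own illustration, not from the post; the class names come from DL4J's Spark module and may differ by version), the same network configuration prototyped locally can be handed to a Spark wrapper for distributed training:

    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
    import org.deeplearning4j.spark.impl.multilayer.SparkDl4jMultiLayer;
    import org.deeplearning4j.spark.impl.paramavg.ParameterAveragingTrainingMaster;
    import org.nd4j.linalg.dataset.DataSet;

    public class SparkSketch {
        // Wraps a locally prototyped configuration for distributed training on Spark.
        static void trainOnSpark(JavaSparkContext sc,
                                 MultiLayerConfiguration conf,
                                 JavaRDD<DataSet> trainingData) {
            // Parameter averaging: workers train on their partitions, then parameters are averaged
            ParameterAveragingTrainingMaster tm =
                    new ParameterAveragingTrainingMaster.Builder(32) // examples per worker batch
                            .averagingFrequency(5)
                            .build();

            SparkDl4jMultiLayer sparkNet = new SparkDl4jMultiLayer(sc, conf, tm);
            sparkNet.fit(trainingData); // same network definition as the local prototype
        }
    }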
