Ease of use, more big data than ever, and a proliferation of libraries and toolkits helped machine learning leap ahead in 2015
Until recently, machine learning was an esoteric discipline, used only by the few who understood the algorithms and had access to the mountains of data needed to employ them. But with big data technology becoming a commodity and algorithms becoming easier to use, machine learning has moved out of the shadows and into the hands of citizen developers and regular users.
Here are four key developments that drove that shift over the course of 2015.
Machine learning became easier to use
"Easier" is a relative term, but anything that reduces the work needed to get useful results from the intrinsic complexity of machine learning is a boon. Spark, the in-memory big data processing toolkit, simplified machine learning with its relatively straightforward programming model -- available to more than Java users alone -- and its built-in array of machine learning algorithms.
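To get a sense of what such toolkits abstract away, here is a minimal sketch in plain Python of the kind of training loop a library like Spark's hides behind a single estimator call. Everything here -- the function name, the data, the hyperparameters -- is made up for illustration; it is not Spark's API.

```python
# Fit y = w*x + b to data by gradient descent on mean squared error.
# This hand-rolled loop is the boilerplate a machine learning toolkit
# typically replaces with one "fit" call.

def fit_line(points, lr=0.01, epochs=2000):
    """Return (w, b) minimizing mean squared error over (x, y) pairs."""
    w, b = 0.0, 0.0
    n = len(points)
    for _ in range(epochs):
        # Gradients of the mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in points) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in points) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Noiseless points on the line y = 2x + 1
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = fit_line(data)  # w ≈ 2, b ≈ 1
```

A toolkit adds far more than convenience, of course -- distributed execution, regularization, model selection -- but the point stands: the user calls one method instead of writing and tuning this loop.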
IBM, neck-deep in reinventing itself as an API-driven, machine-learning-powered platform provider, also invested in Spark as a major catalyst for that work. But IBM's main focus with machine learning has been turning it into a commodity via the "cognitive computing" Watson platform, where everything from language translation to artificial vision is an API call away.
Everyone and their brother released a machine learning library or toolkit
There was already a proliferation of machine learning tools, but several big names unveiled toolkits and libraries of their own this past year.
Google open-sourced TensorFlow, originally built as an internal tool but now bulked up to serve external users as well.
Microsoft entered the game with DMTK, or Distributed Machine Learning Toolkit, which distributes machine learning problems efficiently across multiple computers.
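The core idea behind such distributed toolkits can be sketched with the data-parallel pattern: each worker computes a gradient on its own shard of the data, and a coordinator averages the results before updating the shared model. The toy below simulates the workers as plain function calls in one process -- a real toolkit runs them on separate machines -- and all names and data are invented for the sketch.

```python
# Data-parallel training sketch: shard the data, compute local gradients,
# average them at a coordinator, update the shared parameter.

def local_gradient(shard, w):
    """Mean-squared-error gradient of the model y = w*x on one shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def distributed_fit(shards, lr=0.05, rounds=200):
    """Fit w across shards by averaging per-worker gradients each round."""
    w = 0.0
    for _ in range(rounds):
        grads = [local_gradient(s, w) for s in shards]  # each "worker" in parallel
        w -= lr * sum(grads) / len(grads)               # coordinator averages
    return w

# Two shards of points on the line y = 3x
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = distributed_fit(shards)  # w ≈ 3
```

The hard engineering in a real system lies in what this sketch omits: moving gradients over the network, tolerating slow or failed workers, and keeping parameters consistent.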
Facebook opened up both its machine learning modules for the Torch framework and its hardware designs for GPU-accelerated machine learning.
Amazon, too, dove in with its tools for performing machine learning work on data stored in AWS.
Big data to feed machine learning also became cheaper and easier
Modern machine learning became what it is thanks to the rise of fast-moving, readily available, and cheaply stored data. Almost everyone with a hand in machine learning also had a hand in feeding it big data -- for instance, IBM's Watson and its Internet of Things APIs. IBM in particular cut deals with third-party data providers like the Weather Channel to give its customers even more sources of data to feed their machine-learning-powered solutions.
The label "machine learning" was applied to way too many items
With any hot technology, it seems inevitable that the label will be plastered indiscriminately on things that have little or nothing to do with it. That has certainly been the case with machine learning, since the term can apply to a broad spectrum of processes and strategies. An open source machine learning toolkit is one thing, since anyone can examine the toolkit's innards.
But a proprietary product that claims to best the competition through machine learning is another story. Security products in particular seem the most willing to exploit the label, perhaps because their black-box nature makes it harder to tell if they're using machine learning in a manner worthy of the name.