Google routinely uses AI (artificial intelligence) and machine learning to power its products. As a competitive and innovative company, it keeps looking for ways to improve. That now includes open-sourcing its "brain".
On November 9, 2015, the search giant released TensorFlow, the AI system originally developed by the team behind Google Brain, as open-source software for anyone to use.
It began back in 2011, when Google Brain built its first-generation machine-learning system. Called DistBelief, it was a proprietary system that powered the company's deep-learning neural networks. These backed products such as Google Search, Voice Search, Photos, Maps, Street View, YouTube, and its ad-serving platform.
After that project, the Google Brain team built a second-generation machine-learning system. Called TensorFlow, it can run on multiple CPUs and GPUs (with CUDA extensions) on Linux and Mac OS X desktops and servers, as well as on Android and iOS.
Since many of Google's products use TensorFlow, it is one of the engines that powers the company's online empire.
How The "Brain" Works
TensorFlow is essentially a software library that lets researchers and computer scientists build systems that break down data. Computers running TensorFlow are fed data and process that information to make decisions. The method at the base of machine learning can be very complex, but it makes computers smarter: they can produce answers without being explicitly programmed with rules for every case.
Technically speaking, TensorFlow works on data with multiple values, passed from one mathematical computation to another. These multidimensional arrays of data are called tensors, and each computation is a node in a graph. As data makes its way from node to node, the connections between nodes describe how the pieces of data relate. Tensors thus flow through a graph of nodes, which is where the name "TensorFlow" comes from.
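The idea can be sketched in plain Python. This is a toy dataflow graph for illustration only, not TensorFlow's actual API; the `Node` class and its methods are invented for this sketch:

```python
# Toy sketch of a dataflow graph: tensors (here, plain lists of numbers)
# flow from node to node, and each node applies one operation.
# Illustrative only; the real TensorFlow API is different.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # function applied to the incoming tensors
        self.inputs = inputs  # upstream nodes, or raw tensors

    def evaluate(self):
        # Resolve upstream nodes first, then apply this node's operation.
        args = [i.evaluate() if isinstance(i, Node) else i
                for i in self.inputs]
        return self.op(*args)

# Build a tiny graph computing (x + y) * 2, element by element.
x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]
add = Node(lambda a, b: [ai + bi for ai, bi in zip(a, b)], x, y)
scale = Node(lambda a: [2.0 * ai for ai in a], add)

print(scale.evaluate())  # [10.0, 14.0, 18.0]
```

Evaluating the final node pulls data through the whole graph, which is the essence of the dataflow model the article describes.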
Both DistBelief and TensorFlow can be used for research and production. By releasing TensorFlow for public use, Google is lending a hand to anyone who wants to use its "brain" for their own benefit, even if that means competing directly against Google.
The system is a complete standalone library that ships with associated tools under an Apache 2.0 license, so it can be used for commercial purposes.
Google is an active participant in academic research on machine learning. By open-sourcing the project, Google expects people to use it. As more users come to depend on it and scientists begin using it in their research, Google gains more insight into, and a better grip on, the growing field.
Deep learning, a sub-branch of machine learning, is now used in more than 1,200 different product directories at Google. That is a massive increase from mid-2014, when the number was only around 300.
"Machine learning is a core transformative way by which we are rethinking everything we are doing," said Google's CEO Sundar Pichai. "We're in early days, but you will see us, in a systematic manner, think about how we can apply machine learning to all these areas."
By releasing the product as open source, Google gives researchers another way to develop their ideas effectively and efficiently. While some ideas may not work with the available resources, the ones that do can be moved directly into products without the code having to be rewritten.
TensorFlow can be readily ported to other hardware platforms.
Shifting The Flow, Making A Trend And Creating New Demands
By open-sourcing TensorFlow, Google is showing that the computing world is changing. While tech companies and software titans have traditionally tried to sell products built on the technology they develop, open-sourcing a project can accelerate the progress of the technology as a whole: once a product is open source, more people can use it and build on it under its license.
Not only does the strategy mark a shift in software, it is also shifting hardware trends, which matters to Google and other tech companies. While many tech companies are investing in AI and its processing methods, Google helped start the trend.
Facebook, for example, also uses GPUs to train its face-recognition services, but it still serves those services from machines built around many CPUs. Google's approach is different: it seeks a greater level of efficiency by executing its AI on GPUs inside the data center, similar to Baidu's AI strategy.
With this approach, data fed to GPUs can be processed far more efficiently than on CPUs, because a GPU can handle many requests at a time instead of just one. GPUs and CPUs work differently: the only way to use a GPU efficiently is to feed it data constantly, so the busier a GPU gets, the better.
This parallel processing capability serves neural networks, systems loosely modeled on the human brain and designed to analyze massive amounts of data in a short time. To "teach" such a system, people need to feed it a lot of data. And because GPUs consume less power than CPUs for this kind of work, the processing method is also cost-effective.
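The batching idea behind GPU efficiency can be shown with a small NumPy sketch. The numbers and the single "layer" here are hypothetical, not Google's actual pipeline; the point is only that many stacked requests become one large, parallel-friendly operation:

```python
import numpy as np

# Hypothetical illustration: one layer of a neural network is a matrix
# multiply. Handling requests one at a time means many small multiplies;
# batching stacks the inputs so the hardware (ideally a GPU) can process
# them all in a single parallel operation.

rng = np.random.default_rng(0)
weights = np.full((4, 3), 0.5)                # toy layer: 4 inputs -> 3 outputs
requests = [rng.random(4) for _ in range(8)]  # 8 incoming requests

# One at a time: 8 separate small multiplies.
one_by_one = [x @ weights for x in requests]

# Batched: stack the 8 requests into a single (8, 4) matrix.
batch = np.stack(requests)
batched = batch @ weights                     # one (8, 3) multiply

# Both paths produce identical results; the batched form is what keeps
# a GPU constantly fed with work.
assert np.allclose(np.stack(one_by_one), batched)
```

On a real GPU the batched multiply runs across thousands of cores at once, which is why "the busier the GPU gets, the better."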
"This is quite a big paradigm change," said Baidu's Andrew Ng.
Because these chips work differently, the change is good news for hardware makers such as Nvidia that specialize in GPUs. Intel, the largest chip maker, does not produce standalone GPUs. Some internet companies are instead exploring FPGAs (field-programmable gate arrays) as an alternative to GPUs in the AI field.
As AI starts to play a bigger role in global internet services, Google's move is one of the things marking a change in tech's overall direction.
As Google, Baidu, and other tech companies, along with smaller businesses, lean on this kind of machine learning, demand for GPUs will increase. Google uses TensorFlow to power many of its products, and with billions of users accessing its services, Google alone needs a whole lot of GPUs to keep TensorFlow improving.
AI In Mobile: From Internet Connection To Computing Power
Machine learning uses GPUs to process the data it feeds on. In today's fast-paced mobile world, tech companies are putting more effort into running AI on mobile devices. However, no matter how good AI and machine learning are today, mobile devices limit their abilities.
Because GPUs are efficient only when constantly fed data, the data centers where the software runs typically can't get enough data from mobile apps to keep them busy. Smartphone apps don't stream data to the servers continuously; they request processing only when something is needed.
So when machine learning is needed on a mobile device, for example when a user invokes Google's Voice Search feature, the device can't perform the task without sending data back to Google's data center, where the AI actually runs. When a user speaks a command to the app, the audio has to be sent to Google to be processed. Without a connection, the feature is useless.
Google, however, has put a small piece of its AI engine into the app itself, running standalone without needing to communicate with its servers. So in cases where a user cannot connect to the internet, the phone can still execute certain commands by processing the data on the device.
An example is Google Translate. The app recognizes words and, by sending them back to its servers, Google can translate them into other languages. But users can also "train" the app; once trained, it can run on its own, doing those bits of translation without having to connect to the internet.
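The pattern described above can be sketched as a simple offline fallback. All the names here (`translate`, `local_model`, `remote_service`, the toy dictionaries) are hypothetical stand-ins, not any real Google API:

```python
# Hypothetical sketch of the offline-fallback pattern: prefer the full
# server-side model, fall back to a smaller on-device model when there
# is no connection. Illustrative names only.

def translate(text, has_connection, local_model, remote_service):
    """Use the remote model when online, the on-device model otherwise."""
    if has_connection:
        return remote_service(text)   # full model in the data center
    return local_model(text)          # smaller model shipped with the app

# Toy stand-ins: the "remote" model knows more words than the local one.
LOCAL_DICT = {"hello": "hola"}
REMOTE_DICT = {"hello": "hola", "world": "mundo"}

local = lambda t: LOCAL_DICT.get(t, t)
remote = lambda t: REMOTE_DICT.get(t, t)

print(translate("hello", False, local, remote))  # hola  (offline, on-device)
print(translate("world", True, local, remote))   # mundo (online, server-side)
```

The trade-off in the sketch mirrors the real one: the on-device model covers less, which is exactly the "catch" discussed next.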
The catch? Processing power. Even the most advanced smartphones are limited compared to the data centers Google owns. That prevents mobile devices from doing sophisticated AI without a connection.
This is where the future holds the answer. As hardware improves, it can power what was once limited. As mobile devices become more powerful, they will be able to handle more and more AI tasks, moving parts of the AI processing onto the phone and decreasing the need for an internet connection.
With mobile chips already beginning to include capable GPUs, the world of hardware is changing even more. For hardware to catch up, it will have to change faster than the world of software.
Targeting Beyond With Something To Expect
In 2007, Google set out to dominate the mobile market by releasing Android, an open-source OS for mobile devices. Years later, it achieved what it wanted, winning a huge market whose dominance is comparable only to Apple's iOS.
By open-sourcing TensorFlow, Google is laying out the same strategy. TensorFlow is like the Android of AI: with it, Google aims to dominate the AI market, just as it conquered the mobile market with Android.
Much of Google is powered by AI. Not only does it help the company's services work more effectively, it also makes data processing more efficient. With more of its products powered by AI, Google is making them "smarter" in many ways.
Google Voice Search and Google Now are comparable to Apple's Siri, but they aren't just rivals to Apple's digital assistant: Google uses them as gateways to its vast knowledge base. The two are just some of the products that use TensorFlow. The more people use Google and its products, the more Google learns. So open-sourcing its "brain" will not only help others create things never before achievable; it will also help Google grow.
People still see AI as a growing trend. Its neural networks are impressive because they work somewhat like the human brain, but humans still outsmart these networks by a wide margin. Although they remain weak at certain things, such as object recognition, AI has enabled computers to "see", something complex and seemingly impossible decades ago.
Although AI has never quite matched the expectations and investment poured into it, especially in object recognition and speech, at the current pace it is closing in on human-level performance.