AWS/Microsoft Gluon vs Google TensorFlow | Interview Question

GOOGLE TENSORFLOW VS AWS/MICROSOFT GLUON
(most important interview question)


Gluon (AWS/Microsoft):
Gluon is a new open source deep learning interface launched by AWS and Microsoft (12 Oct 2017) that allows developers to build machine learning models more easily and quickly, without compromising performance.
Gluon can be used with either Apache MXNet or the Microsoft Cognitive Toolkit, and will be supported in all Azure services, tools, and infrastructure. Gluon offers an easy-to-use interface for developers, highly scalable training, and efficient model evaluation, all without sacrificing flexibility for more experienced researchers. For companies, data scientists, and developers, Gluon offers simplicity without compromise through high-level APIs and pre-built, modular building blocks, making deep learning more accessible.


Gluon makes it easy for developers to learn, define, debug, and then iterate on or maintain deep neural networks, allowing them to build and train their networks quickly. Gluon introduces four key innovations:
  • Simple, Easy-to-Understand Code: Gluon offers a more concise, easier-to-understand programming interface than other offerings, and it lets developers quickly prototype and experiment with neural network models without sacrificing performance. Gluon offers a full set of plug-and-play neural network building blocks, including predefined layers, optimizers, and initializers (see the sketch after this list).
  • Flexible, Imperative Structure: Gluon does not require the neural network model to be rigidly defined ahead of time; instead, it brings the training algorithm and model closer together to provide flexibility in the development process.
  • Dynamic Graphs: Gluon enables developers to define neural network models that are dynamic, meaning they can be built on the fly, with any structure, and using any of Python’s native control flow.
  • High Performance: Gluon provides all of the above benefits without impacting the training speed that the underlying engine provides.
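
To make these points concrete, here is a minimal training sketch assuming MXNet as the Gluon backend; the network shape, hyperparameters, and random toy data are illustrative assumptions, not an official example. It shows the plug-and-play building blocks, the imperative training loop, and the graph being recorded dynamically.

    import numpy as np
    import mxnet as mx
    from mxnet import nd, autograd, gluon
    from mxnet.gluon import nn

    # Plug-and-play building blocks: a small feed-forward network
    net = nn.Sequential()
    net.add(nn.Dense(64, activation='relu'),
            nn.Dense(10))
    net.initialize(mx.init.Xavier())

    loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
    trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

    # Toy batch of data (illustrative only)
    data = nd.random.uniform(shape=(32, 100))
    label = nd.array(np.random.randint(0, 10, size=32))

    # Imperative training step: the forward pass is ordinary Python code,
    # and the graph is recorded on the fly, so native control flow and
    # debuggers work as usual.
    with autograd.record():
        output = net(data)
        loss = loss_fn(output, label)
    loss.backward()                     # gradients computed automatically
    trainer.step(batch_size=32)

Because the model is defined imperatively, the Sequential block could be replaced by any Python function that calls the layers, including loops and conditionals, which is what the dynamic-graph point above refers to.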

TensorFlow (Google):
TensorFlow is an open source software library for numerical computation using data flow graphs. It was developed by the Google Brain team for internal use at Google and released under the Apache 2.0 open source license on November 9, 2015.
Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. Although TensorFlow was originally built by researchers and engineers on the Google Brain team, within Google's Machine Intelligence research organization, for machine learning and deep neural network research, the system is general enough to be applicable in a wide variety of other domains as well. Below are the key features of TensorFlow.
  • Flexibility: To use TensorFlow, you express your computation as a data flow graph. It is a highly flexible system that allows multiple models, or multiple versions of the same model, to be served simultaneously. The architecture of TensorFlow is highly modular, which means you can use some parts individually or all the parts together. Such flexibility facilitates migrating to new models or versions, A/B testing experimental models, and canarying new models.
  • Portability: TensorFlow makes it possible to try out an idea on your laptop without any additional hardware. It runs on GPUs, CPUs, desktops, servers, and mobile computing platforms. You can deploy a trained model on a mobile device as part of your product, which is what makes it truly portable.
  • Auto Differentiation: TensorFlow has automatic differentiation capabilities, which benefit gradient-based machine learning algorithms. You define the computational architecture of your predictive model, combine it with your objective function, and feed in data; TensorFlow handles the derivative computations automatically. Computing the derivatives of some values with respect to others extends the graph, so you can see exactly what is happening (see the sketch after this list).
  • Performance: TensorFlow lets you make the most of your available hardware with its advanced support for threads, asynchronous computation, and queues. Simply assign the compute elements of your TensorFlow graph to different devices and let it manage the data copies itself. It also gives you a choice of languages for executing your computational graph, and using TensorFlow from an IPython notebook keeps code, notes, and visualizations logically grouped and interactive.
  • Research and Production: The same system can be used to train models and serve them live to real customers. Rewriting code is not required, so industrial researchers can move their ideas into products faster, and academic researchers can share code directly with greater reproducibility. In this way it speeds up both the research and production processes.
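
To illustrate the data flow graph, auto differentiation, and device placement described above, here is a minimal sketch using the TensorFlow 1.x graph API; the shapes, learning rate, device string, and random toy data are illustrative assumptions.

    import numpy as np
    import tensorflow as tf

    # Build the data flow graph: nodes are operations, edges carry tensors.
    with tf.device('/cpu:0'):                      # explicit device placement
        x = tf.placeholder(tf.float32, shape=(None, 3), name='x')
        y = tf.placeholder(tf.float32, shape=(None, 1), name='y')
        w = tf.Variable(tf.zeros((3, 1)), name='w')
        y_hat = tf.matmul(x, w)
        loss = tf.reduce_mean(tf.square(y_hat - y))

    # Auto differentiation: TensorFlow extends the graph with gradient ops.
    grads = tf.gradients(loss, [w])[0]
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

    # Execute the graph in a session.
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        feed = {x: np.random.rand(8, 3).astype(np.float32),
                y: np.random.rand(8, 1).astype(np.float32)}
        print(sess.run([loss, grads], feed_dict=feed))  # inspect loss and dL/dw
        sess.run(train_op, feed_dict=feed)              # one gradient descent step

The same graph definition can then be exported and served without rewriting, which is the point of the Research and Production feature above.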
