Friday, 23 March 2018

IBM Announces MAX - Model Asset eXchange: an App Store for Machine Learning Models

Over at the IBM Blog, Dr. Angel Diaz writes that the company has just launched the Model Asset eXchange. MAX is effectively an App Store for free Machine Learning models to help developers and data scientists easily discover, rate and deploy AI.

IBM states on the official blog:
We are excited to announce one of the first initiatives in this space — an open source enterprise Model Asset eXchange, or “MAX”. MAX is a models app store that aims to ignite a community of data scientists and AI developers, enabling them to easily discover, rate, and deploy machine learning models. MAX is a one-stop exchange for data scientists and AI developers to consume models created using their favorite machine learning engines, like TensorFlow, PyTorch, and Caffe2, and provides a standardized approach to classify, annotate, and deploy these models for prediction and inferencing, including an increasing number of models that can be deployed and customized in IBM’s recently announced AI application development platform, Watson Studio.

This message is very much echoed by Jim Zemlin, Executive Director of The Linux Foundation. “IBM’s release of Model Asset eXchange gives developers a new source for deep learning models. We look forward to seeing the innovation unlocked from the MAX community through collaboration with our forthcoming Acumos AI community.”


WhatsApp will debut on JioPhone: Tech News

WhatsApp has been continuously bringing new features to its users on both Android and iOS platforms over the past few weeks. Now WhatsApp is possibly working to bring its real-time messenger app to the emerging KaiOS. The proof of that has been found on one of the latest WhatsApp beta versions for Windows Phone, which suggests that new platforms could soon be utilising WhatsApp servers.
That new platform is expected to be KaiOS, which recently got a native Facebook app for the JioPhone. The JioPhone uses KaiOS as its operating system and supports 4G VoLTE services along with Wi-Fi and Bluetooth. It is not yet known whether WhatsApp will arrive with the majority of its features on the platform or be limited to a basic messaging service. However, with Facebook already available on the JioPhone, the availability of WhatsApp on the feature phone could boost the JioPhone's sales and find favour with users looking to simplify their phone usage.

Microsoft Blazor 0.1.0 Released

Microsoft has recently released the first public preview of Blazor, a new .NET web framework. Blazor lets developers build web UIs with C#/Razor and HTML, running in the browser via WebAssembly. It enables full-stack web development with the productivity of .NET, but as this is an alpha version it should not be used in production.


Microsoft states in the official blog –
“In this release, we've laid the groundwork for the Blazor component model and added other foundational features, like routing, dependency injection, and JavaScript interop. We've also been working on the tooling experience so that you get great IntelliSense and completions in the Razor editor. Other features that have been demonstrated previously in prototype form, like live reload, debugging, and prerendering, have not been implemented yet, but are planned for future preview updates. Even so, there is plenty in this release for folks to start kicking the tires and giving feedback on the current direction.”

What makes you happy about working on a Friday evening?

The weekend is the main reason a person feels happy about working on a Friday evening. It is perfectly fine to discuss this, because taking a break from work is your right. So you can honestly share your points and views with your interviewer.

Sunday, 4 March 2018

Machine Learning Interview Questions Answers: Scope of Machine Learning

Many of us believe that machine learning is a very futuristic thing. However, it is increasingly present in our lives, whether when a Google computer plays an incredible game of Go, or when Gmail generates automatic responses. While all this sounds exciting, many of us continue to wonder what machine learning exactly consists of, why it is so important or why identifying a dog in a photo is not as simple as it seems. In order to analyze all this, we have met with Maya Gupta, a researcher at Google in this field.

Let's start with the simplest: what exactly is machine learning?
Machine learning is the process by which a computer takes a set of examples, discovers the patterns behind them, and uses those patterns to make predictions about new examples.

Think, for example, about movie recommendations. Let's suppose that a billion people tell us their ten favorite movies. That is the set of examples the computer can use to discover which movies are common among that group of people. Next, the computer builds patterns to explain those examples, such as "people who like horror movies do not like romantic ones, but they do like movies in which the same actors appear". Then, if you tell the computer that you like The Shining with Jack Nicholson, it can deduce whether you would also like a romantic comedy.
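The recommendation idea described above can be sketched in a few lines of Python. This is a toy illustration with invented favorite lists (hypothetical data, not Google's actual method): count how often pairs of movies are co-liked, then recommend the movie most often co-liked with one you name.

```python
# Toy co-occurrence recommender: count movie pairs that appear together
# in people's favorite lists, then recommend the strongest co-liked movie.
from collections import Counter
from itertools import combinations

# Hypothetical favorite lists (made up for illustration).
favorites = [
    ["The Shining", "Alien", "Psycho"],
    ["The Shining", "Alien", "Jaws"],
    ["Notting Hill", "Amelie"],
]

pair_counts = Counter()
for fav_list in favorites:
    for a, b in combinations(sorted(fav_list), 2):
        pair_counts[(a, b)] += 1

def recommend(movie):
    # Pick the movie most frequently co-liked with `movie`.
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == movie:
            scores[b] += n
        elif b == movie:
            scores[a] += n
    return scores.most_common(1)[0][0]

print(recommend("The Shining"))  # -> Alien
```

Real recommenders use far richer models (matrix factorization, neural networks), but the principle is the same: patterns extracted from many examples drive predictions about new ones.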

We understand it, more or less. But how does this translate into practice?
In practice, the patterns that the machine learns can be very complex and difficult to explain in words. Take Google Photos as an example, which lets you search your photos for pictures of dogs. How does Google do it? Well, first, we use a set of example photos labeled "dog" (thanks to the Internet). We also use a set of photos labeled "cat", as well as photos with millions of other labels, which I will not list here.

Next, the computer looks for patterns of pixels and colors that help it work out whether an image shows a cat, a dog or anything else. First, it makes a random guess about the patterns that might be useful for identifying dogs. Then it examines an example image of a dog and checks whether its patterns fit correctly. If it mistakenly labels a dog as a cat, it makes some adjustments to the patterns it uses. Then it examines an image of a cat and refines its patterns to be as accurate as possible. This process is repeated about a billion times: the computer examines an example and, if the pattern gives the wrong answer, changes the pattern to improve the result.

In the end, the patterns constitute a machine learning model, such as a deep neural network, that can (almost always) correctly identify dogs, cats, firemen and many other things.
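The "guess, check, adjust" loop described above can be sketched with the simplest possible learner, a perceptron on made-up two-number "images". This is a pedagogical sketch, not Google's training code, and all the data is invented:

```python
# Minimal "guess, check, adjust" training loop: a perceptron.
# Each example is a pair of made-up pixel intensities plus a label
# (1 = "dog", 0 = "not dog").
def train(examples, epochs=100, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0                              # start with an (arbitrary) guess
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred           # wrong answer? adjust the "patterns"
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

examples = [((1.0, 0.0), 1), ((0.9, 0.1), 1), ((0.0, 1.0), 0), ((0.1, 0.9), 0)]
w, b = train(examples)
pred = 1 if w[0] * 0.95 + w[1] * 0.05 + b > 0 else 0
print(pred)  # a new dog-like input is classified as 1
```

A deep neural network follows the same loop, just with millions of adjustable numbers and gradient-based updates instead of this simple rule.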


That sounds very futuristic. What other Google products currently use machine learning?
Google is using machine learning in many new projects, such as Google Translate, which can take a picture of a street sign or a restaurant menu in one language, discover the words and language that appear in the photo, and translate them, as if by magic, into your language in real time.

You can also dictate almost any message to Google Translate, and that speech recognition is powered by machine learning. Speech recognition technology is used in other Google products too, for example to make voice queries in the Google app or to search for videos more easily on YouTube.


Is machine learning the same as artificial intelligence?
Although in reality the meaning of these concepts can vary according to people, artificial intelligence (AI) is basically a broad term that refers to software that tries to solve problems that are simple for humans, such as describing what happens in an image. One of the most incredible things that humans do with ease is to learn from the examples. This is what machine learning programs are trying to do: teaching computers how to learn from examples.

The best thing is that once we discover how to develop this software, we can scale that knowledge to handle data very quickly and solve really complex problems such as, for example, playing Go at an expert level, giving directions to many users simultaneously, optimizing energy consumption nationwide and, my favorite, finding the best results in the Google search engine.

So, why is Google now giving so much importance to machine learning?
Machine learning is not something new: it goes back to eighteenth-century statistics. But it is true that it has boomed lately, and this is due to three reasons that I will explain below.

The first one is that we need an immense amount of examples to teach computers how to make good predictions, even about things that you and I consider easy (like finding a dog in a photo). With all the activity on the Internet, we now have a broader source of examples that computers can use. For example, there are now millions of pictures of dogs with the "dog" tag on websites around the world and in all languages.

But it is not enough to have many examples. You cannot simply show a bunch of dog photos to a webcam and expect the computer to learn everything; it needs a learning program. In fact, the sector (and Google too) has lately made important advances in the complexity and power these learning programs can have.

However, our programs are still not perfect and computers are still not very intelligent, so they have to go over many examples numerous times, adjusting their internal settings, to get accurate results. Although this requires enormous processing capacity, recent advances in software and hardware have made it possible.

Is there something that computers can not do today but what can they do in the future thanks to machine learning?
Until recently, voice recognition tried to detect only ten different digits, for when you said your credit card number over the phone. Voice recognition has achieved incredible advances in the last five years with the use of machine learning, and now we can use it to search Google, and it keeps getting faster.

I think machine learning can also help us improve our appearance. I don't know about you, but I hate trying on clothes. If I find a brand of jeans that suits me, I buy five pairs. Well, machine learning could turn examples of brands that fit us well into recommendations of other clothes that could fit equally well. This is outside Google's scope, but I hope someone is investigating it.

What will machine learning be like in ten years?
The sector is currently working to achieve faster learning from fewer examples. One way to address this (something that Google is emphasizing) is to provide our machines with more common sense, what in the sector is called "regularization".

What is common sense for a machine?
In general, it means that if an example changes only a little, the machine's answer should not change. That is, a picture of a dog wearing a cowboy hat is still a dog.

We impose this kind of common sense when we get machine learning to ignore small, insignificant changes, like a cowboy hat. Although this sounds easy, if we get it wrong, the machine may fail to detect changes that do matter. It is about striking a balance, and we are still working to achieve it.

For you, what is the most exciting thing about machine learning? What motivates you to work on it?
I grew up in Seattle, where we learned a lot about the first explorers in the Western United States, such as Lewis and Clark. Machine learning has that same spirit of exploration, since we discover things for the first time and try to trace a path towards a great future.

If you could put a slogan to Google's machine learning, what would it be?
If you do not get it first, try a billion times more.

Thursday, 21 December 2017

AWS/Microsoft Gluon vs Google TensorFlow | INTERVIEW QUESTION

GOOGLE TENSORFLOW VS AWS/MICROSOFT GLUON
(most important interview question)


Gluon AWS/Microsoft:
Gluon is a new open source deep learning interface launched by AWS and Microsoft (12 Oct 2017) that allows developers to build machine learning models more easily and quickly, without compromising performance.
Gluon can be used with either Apache MXNet or the Microsoft Cognitive Toolkit, and will be supported in all Azure services, tools and infrastructure. Gluon offers an easy-to-use interface for developers, highly scalable training, and efficient model evaluation, all without sacrificing flexibility for more experienced researchers. For companies, data scientists and developers, Gluon offers simplicity without compromise through high-level APIs and pre-built, modular building blocks, making deep learning more accessible.


Gluon makes it easy for developers to learn, define, debug and then iterate or maintain deep neural networks, allowing developers to build and train their networks quickly. Gluon introduces four key innovations. 
  • Simple, Easy-to-Understand Code: Gluon is a more concise, easy-to-understand programming interface compared to other offerings, and that it gives developers a chance to quickly prototype and experiment with neural network models without sacrificing performance. Gluon offers a full set of plug-and-play neural network building blocks, including predefined layers, optimizers, and initializers.
  • Flexible, Imperative Structure: Gluon does not require the neural network model to be rigidly defined, but rather brings the training algorithm and model closer together to provide flexibility in the development process.
  • Dynamic Graphs: Gluon enables developers to define neural network models that are dynamic, meaning they can be built on the fly, with any structure, and using any of Python’s native control flow.
  • High Performance: Gluon provides all of the above benefits without impacting the training speed that the underlying engine provides.
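The "dynamic graph" point above is easiest to see in plain Python, since define-by-run frameworks let a model's structure be ordinary control flow. The snippet below is a conceptual illustration only, not the actual Gluon API: the depth of the computation depends on the input value itself, something a statically defined graph cannot express directly.

```python
# Conceptual define-by-run ("dynamic graph") sketch: the number of
# "layers" applied is decided at run time by the data, using native
# Python control flow. Not real Gluon code.
def dynamic_forward(x, weight=0.5):
    steps = 0
    while abs(x) > 1.0:      # structure depends on the input itself
        x = x * weight       # a stand-in for a real network layer
        steps += 1
    return x, steps

out, depth = dynamic_forward(8.0)
print(out, depth)  # -> 1.0 3
```

In a static-graph framework the whole computation must be declared before any data flows through it; here the "graph" is simply whatever the Python interpreter executes.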

TensorFlow Google:
TensorFlow is an open source software library for numerical computation using data flow graphs. It was developed by the Google Brain team for internal Google use. It was released under the Apache 2.0 open source license on November 9, 2015.
Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well. Below are the key features of TensorFlow.
  • Flexibility: You express your computation as a data flow graph to use TensorFlow. It is a highly flexible system that allows multiple models, or multiple versions of the same model, to be served simultaneously. The architecture of TensorFlow is highly modular, which means you can use some parts individually or all the parts together. Such flexibility facilitates non-atomic migration to new models/versions, A/B testing of experimental models, and canarying new models.
  • Portability: TensorFlow makes it possible to play around with an idea on your laptop without needing any other hardware. It runs on GPUs, CPUs, desktops, servers, and mobile computing platforms. You can deploy a trained model on your mobile as part of your product, which is how it delivers true portability.
  • Auto Differentiation: TensorFlow has automatic differentiation capabilities, which benefit gradient-based machine learning algorithms. You define the computational architecture of your predictive model, combine it with your objective function, and add data; TensorFlow manages the derivative computations automatically. Computing the derivatives of some values with respect to other values extends the graph, so you can see exactly what is happening.
  • Performance: TensorFlow lets you make the most of your available hardware with its advanced support for threads, asynchronous computation, and queues. Just assign the compute elements of your TensorFlow graph to different devices and let it manage the copies itself. It also gives you a choice of languages to execute your computational graph, and TensorFlow's IPython notebook support helps keep code, notes, and visualizations logically grouped and interactive.
  • Research and Production: It can be used to train and serve models live to real customers. Put simply, no rewriting of code is required, so industrial researchers can apply their ideas to products faster, and academic researchers can share code directly with greater reproducibility. In this way it speeds up both research and production.
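To make the auto-differentiation bullet concrete: TensorFlow extends the graph with nodes that compute exact derivatives. The same idea can be approximated numerically in a few lines. This is a sketch of the concept only, using finite differences rather than TensorFlow's graph-based approach:

```python
# Numerical gradient via central finite differences, illustrating what
# automatic differentiation computes (TF does it exactly, not numerically).
def num_grad(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)

f = lambda x: x * x + 3 * x   # objective: f(x) = x^2 + 3x, so f'(x) = 2x + 3
g = num_grad(f, 2.0)          # expect about 2*2 + 3 = 7
print(round(g, 3))            # -> 7.0
```

Gradient-based learners repeat exactly this step, derivative of the objective with respect to each parameter, millions of times; automatic differentiation makes it exact and cheap.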

Saturday, 16 December 2017

PyTorch Interview Questions Answers




What is PyTorch?
PyTorch is a relatively new deep learning framework that is fast becoming popular among researchers. Like Chainer, PyTorch supports dynamic computation graphs, a feature that makes it attractive to researchers and engineers who work with text and time-series.
PyTorch provides two high-level features:
  • Tensor computation (like numpy) with strong GPU acceleration
  • Deep Neural Networks built on a tape-based autograd system
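The "tape-based autograd" phrase can be unpacked with a miniature example. This is a pedagogical sketch of the idea, not the actual torch API: each operation records a closure on a tape during the forward pass, and the backward pass replays the tape in reverse to accumulate gradients.

```python
# Minimal tape-based autograd: ops push their backward rules onto a
# tape as they run; replaying the tape in reverse computes gradients.
tape = []  # list of backward closures, in forward order

class Var:
    def __init__(self, value):
        self.value, self.grad = value, 0.0

def mul(a, b):
    out = Var(a.value * b.value)
    def back():
        a.grad += b.value * out.grad   # d(ab)/da = b
        b.grad += a.value * out.grad   # d(ab)/db = a
    tape.append(back)
    return out

def add(a, b):
    out = Var(a.value + b.value)
    def back():
        a.grad += out.grad
        b.grad += out.grad
    tape.append(back)
    return out

x = Var(2.0)
y = add(mul(x, x), mul(Var(3.0), x))   # y = x*x + 3*x
y.grad = 1.0
for back in reversed(tape):            # replay the tape in reverse
    back()
print(x.grad)                          # dy/dx = 2x + 3 = 7 at x = 2
```

Because the tape is rebuilt on every forward pass, the computation graph can change from one input to the next, which is exactly why this style suits text and time-series work.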
We will come with more interview questions soon.

Top 20 Google TensorFlow Interview Questions Answers

As you know, technology is moving towards machine learning and deep learning, so if you are making a career in it your future is bright and you can expect a very high package :). Here we come with Google TensorFlow interview questions; our previous article "100 Machine learning Interview Q-A" became very famous, so do go through it.
Future of TensorFlow: TensorFlow's future is bright and it is growing fast. When someone like Google creates such a thing, it is likely to be big; one reason is that Google uses it in its own products, which encourages others to use it as well.

So let's start with the TensorFlow interview questions.

What is TensorFlow?
TensorFlow is a Python library for fast numerical computing created and released by Google.
It is a foundation library that can be used to create Deep Learning models directly or by using wrapper libraries that simplify the process built on top of TensorFlow.
Unlike other numerical libraries intended for use in Deep Learning like Theano, TensorFlow was designed for use both in research and development and in production systems.
It can run on single-CPU systems, GPUs, mobile devices and large-scale distributed systems of hundreds of machines. Answer credit goes to Jason Brownlee.

What is necessary to evaluate any formula in TensorFlow?
A session, which provides the execution environment. For example:
with tf.Session() as sess:
    result = sess.run(formula)

How do you sum a whole array into one number in TensorFlow?
tf.reduce_sum(array) or tf.reduce_sum(x, [0, 1])
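For readers without a TensorFlow install at hand, numpy's np.sum behaves analogously to tf.reduce_sum, which makes the reduction semantics easy to check. The numpy analogy is our addition, not part of the original answer:

```python
# np.sum mirrors tf.reduce_sum: no axis argument collapses everything
# to a scalar; an axis argument sums along that dimension only.
import numpy as np

x = np.array([[1, 2, 3],
              [4, 5, 6]])
total = np.sum(x)             # like tf.reduce_sum(x)    -> 21
by_cols = np.sum(x, axis=0)   # like tf.reduce_sum(x, 0) -> [5 7 9]
print(total, by_cols.tolist())
```

Passing a list of axes, as in tf.reduce_sum(x, [0, 1]), sums over both dimensions and again yields the scalar total.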

What programming language is it for?
The API is nominally for the Python programming language, although there is access to the underlying C++ API.

What do you understand by Tensor and Flow in the case of TensorFlow?
Tensor:
  • A multi-dimensional array, e.g. a scalar, vector, matrix or cube; a lot like a numpy array, enabling fast math with tensors.
Flow:
  • A graph that defines the operations (like +) to perform on the data (tensors).

What are nodes?
Nodes perform computation and have zero or more inputs and outputs. Data that moves between nodes is known as tensors, which are multi-dimensional arrays of real values.

What are edges?
The graph defines the flow of data, branching, looping and updates to state. Special edges can be used to synchronize behavior within the graph, for example waiting for computation on a number of inputs to complete.

What is the purpose of tf.Session?
It provides a class for running Tensorflow objects. It encapsulates the environment in which Operation objects are executed and Tensor objects are evaluated.
Usage, as a context manager:
with tf.Session() as sess:
    sess.run(...)
or explicitly:
sess = tf.Session()
sess.run(...)
sess.close()

What happens when you create a variable?
You pass a tensor into the Variable() constructor. You must specify the shape of the tensor, which becomes the shape of the variable. Variables generally have a fixed shape.

When you will use tf.get_variable()?
Sometimes you have large sets of variables in complex models that you want to all initialize in the same place.

What does tf.get_variable() do?
It creates or returns a variable with a given name, instead of a direct call to tf.Variable(), and it uses an initializer rather than an initial value passed directly.

What does the softmax cross entropy function do?
Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class).
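The math behind that answer can be sketched in pure Python. This is an illustration of what the function computes, not TensorFlow's implementation: softmax converts logits into probabilities, and the loss is the negative log-probability assigned to the true class.

```python
# Softmax cross-entropy for mutually exclusive classes, by hand.
import math

def softmax(logits):
    m = max(logits)                          # subtract max for stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def softmax_cross_entropy(logits, true_class):
    probs = softmax(logits)
    return -math.log(probs[true_class])      # penalize low confidence

# Class 0 is correct and gets the largest logit, so the loss is small.
loss = softmax_cross_entropy([2.0, 1.0, 0.1], true_class=0)
print(round(loss, 4))
```

A confident correct prediction drives the true-class probability toward 1 and the loss toward 0; a confident wrong prediction makes the loss blow up, which is exactly the training signal a classifier needs.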

Friday, 8 December 2017

TCS NEXTSTEP 2018 Off Campus Drive Eligibility Criteria and Application Process Step by Step

TCS 2018 OFF CAMPUS DRIVE ELIGIBILITY CRITERIA FOR FRESHER OR 2018, 2017, 2016  PASSOUT

The good news in this post is that we have covered every step, with print screens, for applying in the TCS off-campus drive process. We have also shown the instructions on every registration and application page, as marked by TCS for 2018. So read this post till the end and you will enjoy the step-by-step registration and application form process.
If you have already registered/applied, then you can read the latest 2018 application form and check whether it differs from yours. This is how the latest application form looks on nextstep.tcs.com.

The TCS Eligibility Criteria for an entry level position, is as follows:

  • Engineering students scheduled to graduate in the year 2018 are eligible to apply.
  • BE / B.Tech / ME / M.Tech in any Disciplines.
  • MCA with BSc / BCA / BCom / BA (with Math / Statistics Background).
  • M.Sc in Computer Science / Information Technology.
Marks Criteria:
  • Minimum aggregate (aggregate of all subjects in all semesters) marks of 60% or above in the first attempt in each of your Class Xth, Class XIIth, Diploma (if applicable), Graduation and Post-Graduation examination which includes successful completion of your final year/semester examination without any pending arrears/back logs during the entire course duration. Please note that all subjects mentioned on the mark sheets should be taken into consideration while calculating the aggregate marks. For example, best of 5/6 subjects for calculating the aggregate is not acceptable as per the TCS Eligibility Criteria.
  • First attempt implies that you should pass the final year/semester examination (Xth, XIIth, Diploma, Graduation and Post-Graduation as applicable) with minimum aggregate (aggregate of all subjects in all semesters) marks of 60% and above within the first attempt itself. For example, if you have secured 58.9% (aggregate of all subjects) in your Standard XIIth examination and you have taken an improvement exam in the next attempt securing 62%, you are not eligible as per the TCS Eligibility Criteria, as an improvement exam is not considered a first attempt.
  • Completion of all courses Class Xth onwards within the stipulated time as specified by your University/Institute without any extended education.
Gaps-Break Criteria:
  • It is mandatory to declare the gaps/arrears/backlogs, if any, during your academic and work experience. Break in education should not be due to extended education. Any break in education should not exceed 24 months.
  • Only Full time courses will be considered.
Age Criteria:
  • You should be minimum 18 years of age to be eligible to apply for the TCS Selection process.
Re-apply Criteria:
  • Candidates who have applied to TCS and have not been successful in clearing the TCSL selection process are not eligible to re-apply to TCS within six months from the date on which they had attended such selection test and/or interview. You are not eligible to appear for the TCS selection process within six months of the previous unsuccessful attempt.
How can a fresher apply for an entry-level position in TCS?
To apply for an entry-level position, follow the steps below.

STEP 1: Register
  • Visit https://nextstep.tcs.com
  • Click “Register Now”

  • Now select category as IT.

  • You will be redirected to the TCS Registration form. Fill it in and click the Submit button.

  • After submitting, a popup will appear. Now verify your mobile number and email address.
  • Next, a window will appear; note down your reference number and click Continue.
  • A password prompt will then appear.
  • Now be happy: you have registered, and after login you will be redirected to the home page.
Below is the welcome message from TCS, read it carefully.

"Welcome aboard on TCS NextStep portal! 

TCS NextStep Portal is the first step connecting you with TCS, Asia's leading IT services Company. A single platform that addresses all your needs interactively and simplifies the communication process, this Portal will help you in your transition from being a student on campus to exploring a dynamic career path with TCS. 

From keeping you updated on TCS initiatives to answering your queries and helping you explore a world of opportunities, TCS NextStep helps bridge the distance in your journey to becoming a TCSer. 

So, go ahead! Explore opportunities. Experience Certainty."

TILL NOW YOU HAVE REGISTERED.

Now you can apply.


STEP 2: Apply

  • Click on Application 
  • Now the application page will appear with important information. Read it before clicking the "Start filling the Form" button. The instructions are below; after reading them, click "Start filling the Form".
"1). The form is divided into following four sections. It is mandatory to enter details in all four sections.
                Personal Detail
                Academic and Work Experience Details
                Other Details
                Form preview and declaration

2). Fields marked with "*" in these sections are mandatory.

3). To save the details and navigate to the next field/screen, click 'Save and Continue'.

4). To submit the form, click 'Submit Application Form' in 'Form Preview and Declaration' section.

5). Please review the details properly before submitting the form to avoid errors. You can use the Application Form preview feature after filling in all the mandatory fields. In case you wish to edit any details, you can navigate to the relevant section and edit the same.

6). Click 'Save' after editing any details in the form. To submit the form with the updated details, click 'Submit Application Form'. Please note that if you do not submit the form after editing any details, the details will not be saved."
  • Now four tabs will appear; fill in the details in all four tabs. Below are the screenshots of all tabs.

Academic Instructions :
  1. "Marks/CGPA Obtained" denotes Total Marks/CGPA secured by you in ALL* subjects in all semesters in the first attempt.
  2. "Total Marks/CGPA" denotes total of maximum marks in ALL* subjects in all semesters in the first attempt. *ALL implies that all subjects mentioned on the marksheet (including languages, optional subjects etc) should be taken into consideration for calculating the obtained/total marks/CGPA.
  3. Marks/CGPA obtained during the normal duration of the course only will be considered to decide on the eligibility.
  4. Verify your marks after entering, as it is a part of the selection criteria.
  5. Please mention only your XII duration in the XII Grade details. Please do not add the XI duration to it.



  • Now this is the last button :) Click the "Submit Application Form" button and be happy.
  • Below are the TCS Terms and Conditions, which you should read before clicking "Submit Application Form".

TCS Terms and Conditions
In connection with my application to render services to Tata Consultancy Services Ltd (the "Company"), I hereby agree as follows: I certify that the information furnished in this form as well as in all other forms filled-in by me in conjunction with my traineeship is factually correct and subject to verification by TCS including Reference Check and Background Verification.
I accept that an appointment given to me on this basis can be revoked and/ or terminated without any notice at any time in future if any information has been found to be false, misleading, deliberately omitted/ suppressed.

As a condition of Company's consideration of my application for traineeship with the Company, I hereby give my consent to the Company to investigate or cause to be investigated through any third parties my personal, educational and pre or post joining history. I understand that the background investigation will include, but not be limited to, verification of all information given by me to the Company. I confirm that the Company is entitled to share such investigation report with its clients to the extent necessary in connection with the Services, which I may be required to provide to such clients. I confirm and undertake that the Company shall incur no liability or obligation of any nature whatsoever resulting from such investigation or sharing of the investigation results as above. I certify that I am at present in sound mental and physical condition to undertake employment with TCS. I also declare that there is no criminal case filed against me or pending against me in any Court of law in India or abroad and no restrictions are placed on my travelling anywhere in India or abroad for the purpose of business of the company.

Best of luck..

Friday, 1 December 2017

SQL Server 2017 Interview Questions with Answers

Here we come with the latest SQL Server interview questions, all related to SQL Server 2017, so if you are looking for the latest questions, this is the place. :)

Let's check your knowledge.

What do you understand by Adaptive query processing launched in SQL Server 2017?
SQL Server 2017 and Azure SQL Database introduce a new generation of query processing improvements that will adapt optimization strategies to your application workload’s runtime conditions.

Name all three Adaptive query processing features?
In SQL Server 2017 and Azure SQL Database there are three adaptive query processing features by which you can improve your query performance:
Batch mode memory grant feedback.
Batch mode adaptive join.
Interleaved execution.

Write T-SQL statement to enable adaptive query processing?
You can make workloads automatically eligible for adaptive query processing by enabling compatibility level 140 for the database. You can set this using Transact-SQL. For example:
ALTER DATABASE [WideWorldImportersDW] SET COMPATIBILITY_LEVEL = 140;

Name the new string function which is very useful for generating a CSV file from a table?
CONCAT_WS is a new function launched in SQL Server 2017. It takes a variable number of arguments and concatenates them into a single string, using the first argument as the separator. It requires a separator and a minimum of two arguments.
It is very helpful for generating comma- or pipe-separated CSV file content.
Example:
SELECT CONCAT_WS(',', 'id', 'name', 'city');  -- returns 'id,name,city'

What do you understand by TRANSLATE in SQL Server 2017?
TRANSLATE is a new string function launched in SQL Server 2017. It is very helpful for replacing multiple characters with their respective replacements in a single call. It returns an error if the characters and translations arguments have different lengths.
With the traditional REPLACE function the same task needs nested calls; compare:
SELECT REPLACE(REPLACE('2*[3+4]/[7-2]', '[', '('), ']', ')');  -- returns '2*(3+4)/(7-2)'
SELECT TRANSLATE('2*[3+4]/[7-2]', '[]', '()');                 -- same result in one call

What is the use of the new TRIM function?
It removes the space character char(32), or other specified characters, from the start or end of a string.

Does SQL Server 2017 support Python?
Yes. SQL Server 2017 Machine Learning Services adds support for Python (in addition to R).






Saturday, 11 November 2017

Top 100+ Machine Learning Interview Questions Answers PDF

What is the definition of learning from experience for a computer program?
A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.



Explain what is Machine learning?
Machine learning is the field of study that gives computers the ability to learn without being explicitly programmed.
OR
The acquisition of knowledge or skills through study or experience by a machine.
OR
The ability for machines to learn without being explicitly programmed.

What are different types of learning?

  • Supervised learning
  • Unsupervised learning
  • Semi-supervised learning
  • Reinforcement learning
  • Transduction
  • Learning to Learn

What are the differences between artificial intelligence, machine learning and deep learning?
Artificial Intelligence: As the name implies, it means producing intelligence in artificial ways, in other words, using computers.

Machine Learning: This is a sub-topic of AI. As learning is one of the many functionalities of an intelligent system, machine learning is one of the many functionalities in an AI.
Deep learning: Deep learning is the specific sub-field in machine learning involving making very large and deep (i.e. many layers of neurons) neural networks to solve specific problems. It is the current “model of choice” for many machine learning applications.


What are some popular algorithms of Machine Learning?
Decision Trees
Neural Networks (backpropagation)
Probabilistic networks
Nearest Neighbor
Support Vector Machines (SVM)

What are the three most important components of every machine learning algorithm?
Representation: How to represent knowledge. Examples include decision trees, sets of rules, instances, graphical models, neural networks, support vector machines, model ensembles and others.
Evaluation: The way to evaluate candidate programs (hypotheses). Examples include accuracy, precision and recall, squared error, likelihood, posterior probability, cost, margin, entropy, K-L divergence and others.
Optimization: The way candidate programs are generated, also known as the search process. Examples include combinatorial optimization, convex optimization and constrained optimization.
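As a quick illustration of the evaluation component, accuracy, precision and recall can be computed directly from confusion-matrix counts. A minimal sketch with made-up counts (the function names are mine, not from any library):

```python
# Toy evaluation metrics computed from true/false positive/negative counts.
def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + fp + tn + fn)

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

# Hypothetical counts: 8 true positives, 2 false positives,
# 85 true negatives, 5 false negatives.
print(accuracy(8, 2, 85, 5))   # fraction of all predictions that were correct
print(precision(8, 2))         # fraction of positive predictions that were right
print(recall(8, 5))            # fraction of actual positives that were found
```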

Explain Supervised learning?
In supervised learning we have xi values together with explicit target labels, and the goal is to learn the mapping from inputs to labels.


Explain Unsupervised learning?
In unsupervised learning we only have xi values, but no explicit target labels.

Difference between Supervised and Unsupervised learning?
In supervised learning the training data comes with known target labels and the model learns a mapping from inputs to outputs; in unsupervised learning the data is unlabeled and the goal is to discover structure in the data itself, such as clusters or lower-dimensional representations.

What types of algorithms are used in Supervised and Unsupervised learning?
Supervised: classification and regression algorithms such as linear regression, logistic regression, decision trees, SVMs and neural networks. Unsupervised: clustering and dimensionality-reduction algorithms such as k-means, hierarchical clustering and PCA.

Explain classification?
In machine learning, classification is the problem of identifying to which of a set of categories (sub-populations) a new observation belongs, on the basis of a training set of data containing observations (or instances) whose category membership is known.

What is reinforcement Learning?
The goal is to develop a system (agent) that improves its performance based on interactions with the environment. Since the information about the current state of the environment typically also includes a so-called reward signal, we can think of reinforcement learning as a field related to supervised learning. However, in reinforcement learning this feedback is not the correct ground truth label or value, but a measure of how good the action was, as measured by a reward function. Through the interaction with the environment, an agent can then use reinforcement learning to learn a series of actions that maximizes this reward via an exploratory trial-and-error approach or deliberative planning. A popular example of reinforcement learning is a chess engine.

Describe the relationship between the types of machine learning, and in particular where unsupervised learning applies.
In supervised learning, we know the right answer beforehand when we train our model, and in reinforcement learning, we define a measure of reward for particular actions by the agent. In unsupervised learning, however, we are dealing with unlabeled data or data of unknown structure. Using unsupervised learning techniques, we are able to explore the structure of our data to extract meaningful information without the guidance of a known outcome variable or reward function.

What do you understand by a Cost Function in a supervised learning problem? How does it help you?
The cost function takes the average of the squared differences between the results of the hypothesis for the inputs x and the actual outputs y, and helps us fit the best straight line to our data.
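For a single-variable linear hypothesis h(x) = θ0 + θ1·x, this cost function can be sketched directly in Python (hypothetical sample data):

```python
# Squared-error cost J(theta0, theta1) for simple linear regression:
# the (halved) average of squared differences between predictions and actual y's.
def cost(theta0, theta1, xs, ys):
    m = len(xs)
    return sum((theta0 + theta1 * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

xs = [1, 2, 3]
ys = [2, 4, 6]             # points lying exactly on y = 2x
print(cost(0, 2, xs, ys))  # perfect fit -> 0.0
print(cost(0, 1, xs, ys))  # poorer line -> larger cost
```

A line that fits the data exactly yields zero cost, and the cost grows as the line drifts away from the points, which is what makes it a useful fitting criterion.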


What is a support vector machine?
An SVM maximizes the margin: the minimum distance from the decision boundary to the nearest training points (the support vectors).
e.g.
A dish is good by itself, but to enhance it you add the most salt you can without the dish tasting too salty.

What do we call a learning problem, if the target variable is continuous?
When the target variable that we're trying to predict is continuous, the learning problem is also called a regression problem.

What do we call a learning problem, if the target variable can take on only a small number of values?
When y can take on only a small number of discrete values, the learning problem is also called a classification problem.

Explain classifiers?
A classifier is an algorithm (or the model it produces) that maps an observation's feature values to one of a discrete set of categories, for example labelling an email as spam or not spam.

What do you understand by hypothesis space?
It is the set of all legal hypotheses from which the learning algorithm can choose.


How do we measure the accuracy of a hypothesis function?
We measure the accuracy by using a cost function, usually denoted by J.

Describe variance and bias in what they measure?
Variance measures the consistency (or variability) of the model prediction for a particular sample instance if we were to retrain the model multiple times, for example, on different subsets of the training dataset. We can say that the model is sensitive to the randomness in the training data. In contrast, bias measures how far off the predictions are from the correct values in general if we rebuild the model multiple times on different training datasets; bias is the measure of the systematic error that is not due to randomness.

Describe the benefits of regularization?
One way of finding a good bias-variance tradeoff is to tune the complexity of the model via regularization. Regularization is a very useful method to handle collinearity (high correlation among features), filter out noise from data, and eventually prevent overfitting. The concept behind regularization is to introduce additional information (bias) to penalize extreme parameter weights.
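The weight-shrinking effect of the penalty can be seen in a tiny example: for a one-feature, no-intercept ridge (L2-regularized) regression the closed-form weight is w = Σxy / (Σx² + λ). A minimal sketch with made-up data:

```python
# One-feature, no-intercept ridge regression: the L2 penalty lambda
# appears in the denominator and shrinks the weight toward zero.
def ridge_weight(xs, ys, lam):
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]                  # exact relationship y = 2x
print(ridge_weight(xs, ys, 0.0))      # no penalty: recovers w = 2.0
print(ridge_weight(xs, ys, 10.0))     # with penalty: weight shrunk below 2.0
```

Larger λ means more bias but less variance, which is exactly the tradeoff the answer above describes.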

Explain Random forest?
Random forest is a collection of trees, hence the name 'forest'! Each tree is built from a random sample of the data. The output of a random forest is the mode of the classes (for classification) or the mean prediction (for regression) of the individual trees.

What is a training algorithm in Machine learning?
Given a model h with solution space S and a training set {X, Y}, a training (learning) algorithm finds the solution in S that minimizes the cost function J(S).

Explain the Training and Testing phases in Machine learning?
In the training phase the model's parameters are fitted to a training set; in the testing phase the trained model is evaluated on held-out data it has never seen, to estimate how well it generalizes.

Explain Local minima?
A local minimum is the smallest value of the function within some neighbourhood, but it might not be the only minimum, nor the global one.

What is multivariate linear regression?
Linear regression with multiple variables.

How can we speed up gradient descent?
We can speed up gradient descent by having each of our input values in roughly the same range.
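As a sketch (my own toy example, not code from the source), the update loop that feature scaling speeds up looks like this, here fitting a single weight to data on y = 2x:

```python
# Gradient descent on J(w) = (1/2m) * sum((w*x - y)^2) for y = 2x data.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
m = len(xs)

w = 0.0
alpha = 0.1                        # learning rate
for _ in range(200):
    # Gradient of J with respect to w.
    grad = sum((w * x - y) * x for x, y in zip(xs, ys)) / m
    w -= alpha * grad              # step opposite the gradient

print(round(w, 4))                 # converges to ~2.0
```

When input features are on wildly different scales, the cost surface becomes elongated and this loop needs many more iterations (or a smaller α) to converge, which is why scaling helps.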

What is the decision boundary given a logistic function?
The decision boundary is the line that separates the area where y = 0 and where y = 1. It is created by our hypothesis function.

What is underfitting?
Underfitting, or high bias, is when the form of our hypothesis function h maps poorly to the trend of the data.

What usually causes underfitting?
It is usually caused by a function that is too simple or uses too few features.

What is overfitting?
Overfitting, or high variance, is caused by a hypothesis function that fits the available data but does not generalize well to predict new data.

What usually causes overfitting?
It is usually caused by a complicated function that creates a lot of unnecessary curves and angles unrelated to the data.

How can we avoid overfitting?
For decision trees: stop growing when a data split is no longer statistically significant, or grow the full tree and then post-prune. More generally, use regularization, cross-validation or more training data.

What is feature scaling?
Feature scaling is a method used to standardize the range of independent variables or features of data. In data processing, it is also known as data normalization and is generally performed during the data preprocessing step.

What are the advantages of data normalization?
Few advantages of normalizing the data are as follows:
1. It makes your training faster.
2. It prevents you from getting stuck in local optima.
3. It gives you a better error surface shape.
4. Weight decay and Bayes optimization can be done more conveniently.

What range is used for feature scaling?
Typically 0 to 1 (min-max scaling).

What is the formula for feature scaling?
(x-xmin)/(xmax-xmin)
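That formula translates directly into code; a minimal sketch:

```python
# Min-max feature scaling: maps each value into the range [0, 1]
# using (x - xmin) / (xmax - xmin).
def rescale(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(rescale([10, 20, 30]))   # -> [0.0, 0.5, 1.0]
```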

What are two algorithms that feature scaling helps?
K-means and SVM with an RBF kernel.

What are two algorithms that feature scaling does NOT help?
Linear regression and decision trees.

What do you understand by clustering?
Clustering means grouping data, dividing a large data set into smaller data sets of similar items.

What is a good clustering algorithm?
K-means
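A minimal one-dimensional k-means sketch (made-up data and starting centroids, not a production implementation) shows the assign-then-update loop at the heart of the algorithm:

```python
# 1-D k-means: alternately assign points to the nearest centroid,
# then move each centroid to the mean of its assigned points.
def kmeans_1d(points, centroids, iterations=10):
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) for c in clusters]
    return centroids

data = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]    # two obvious groups
print(kmeans_1d(data, [1.0, 10.0]))          # centroids settle at [2.0, 11.0]
```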

In a basic sense, what are neurons?
Neurons are basically computational units that take inputs ("dendrites") as electrical signals ("spikes") that are channeled to outputs ("axons").

What is a neural network?
A neural network takes an input layer, passes it through one or more hidden layers of logistic units, and the hidden-layer outputs feed the output layer.
e.g.
Is that a house cat? The input layer is whiskers, fur, paws, large. The hidden layer finds that cats are small (so its output is 1, 1, 1, 0). Because not all features from the hidden layer are true, it's not a house cat.


What is Regression Analysis?
We are given a number of predictor (explanatory) variables and a continuous response variable (outcome), and we try to find a relationship between those variables that allows us to predict an outcome.

What are the dendrites in the model of neural networks?
In our model, our dendrites are like the input features.

What are the axons in the model of neural networks?
In our model, the axons are the results of our hypothesis function.

What is the bias unit of a neural network?
The input node x0 is sometimes called the "bias unit." It is always equal to 1.

What are the weights of a neural network?
Using the logistic function, our "theta" parameters are sometimes called "weights".

What is the activation function of a neural network?
The logistic function (as in classification) is also called a sigmoid (logistic) activation function.
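A minimal sketch of the sigmoid, which squashes any real input into (0, 1):

```python
import math

# Sigmoid (logistic) activation: maps any real number into (0, 1),
# so its output can be read as a probability for classification.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0))     # -> 0.5 (the decision boundary)
print(sigmoid(6))     # close to 1
print(sigmoid(-6))    # close to 0
```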

How do we label the hidden layers of a neural network?
We label these intermediate or hidden layer nodes. The nodes are also called activation units.

What is the kernel method?
When you can't use logistic regression because there isn't a clear linear delineation between the two groups, you need to draw a curved line. Multiplying x and y creates a new dimension that can separate the groups on a 3D plane.
e.g.
Monkey in the middle: if there are two people on either side of the person in the middle, how do you draw a straight line to separate the two groups (as logistic regression would)? You can't. You have to draw a curved line.

What's the motivation for the kernel trick?
To solve a nonlinear problem using an SVM, we transform the training data onto a higher dimensional feature space via a mapping function and train a linear SVM model to classify the data in this new feature space. Then we can use the same mapping function to transform new, unseen data to classify it using the linear SVM model.
However, one problem with this mapping approach is that the construction of the new features is computationally very expensive, especially if we are dealing with high-dimensional data. This is where the so-called kernel trick comes into play

Give the setup of using a neural network.
• Pick a network architecture.
• Choose the layout of your neural network.
• Number of input units; dimension of features x i.
• Number of output units; number of classes.
• Number of hidden units per layer; usually, the more the better.

How does one train a neural network?
1. Randomly initialize the weights.
2. Implement forward propagation.
3. Implement the cost function.
4. Implement backpropagation.
5. Use gradient checking to confirm that your backpropagation works.
6. Use gradient descent to minimize the cost function with the weights in theta.

How can we break down our decision process when deciding what to do next?
• Getting more training examples: Fixes high variance.
• Trying smaller sets of features: Fixes high variance.
• Adding features: Fixes high bias.
• Adding polynomial features: Fixes high bias.
• Decreasing lambda: Fixes high bias.
• Increasing lambda: Fixes high variance.

What issue does a neural network with fewer parameters face?
A neural network with fewer parameters is prone to underfitting.

What issue does a neural network with more parameters face?
A large neural network with more parameters is prone to overfitting.

What is the relationship between the degree of the polynomial d and the underfitting or overfitting of our hypothesis?
  • High bias (underfitting): both J_train(Θ) and J_CV(Θ) will be high, and J_CV(Θ) ≈ J_train(Θ).
  • High variance (overfitting): J_train(Θ) will be low and J_CV(Θ) will be much greater than J_train(Θ).

Describe Logistic Regression vs SVM.
In practical classification tasks, linear logistic regression and linear SVMs often yield very similar results. Logistic regression tries to maximize the conditional likelihoods of the training data, which makes it more prone to outliers than SVMs. The SVMs mostly care about the points that are closest to the decision boundary (support vectors). On the other hand, logistic regression has the advantage that it is a simpler model that can be implemented more easily. Furthermore, logistic regression models can be easily updated, which is attractive when working with streaming data.

Give an overview of the decision tree process.
We start at the tree root and split the data on the feature that results in the largest information gain (IG). In an iterative process, we can then repeat this splitting procedure at each child node until the leaves are pure. This means that the samples at each node all belong to the same class.
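The information gain (IG) criterion above can be sketched numerically. A minimal illustration using entropy as the impurity measure, with a made-up binary split (function names are mine):

```python
import math

# Entropy of a node, given the count of samples in each class.
def entropy(counts):
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

# Information gain = parent entropy - weighted average of child entropies.
def info_gain(parent, children):
    n = sum(parent)
    return entropy(parent) - sum(sum(ch) / n * entropy(ch) for ch in children)

# Hypothetical split: parent holds 10 samples of each class,
# and the split separates the classes perfectly.
print(entropy([10, 10]))                        # -> 1.0 (maximally impure)
print(info_gain([10, 10], [[10, 0], [0, 10]]))  # -> 1.0 (pure leaves, maximal gain)
```

A split that leaves the children as mixed as the parent would yield a gain of 0, which is why the tree prefers the split with the largest IG.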

Describe parametric vs nonparametric models?
Machine learning algorithms can be grouped into parametric and nonparametric models. Using parametric models, we estimate parameters from the training dataset to learn a function that can classify new data points without requiring the original training dataset anymore. Typical examples of parametric models are the perceptron, logistic regression, and the linear SVM. In contrast, nonparametric models can't be characterized by a fixed set of parameters, and the number of parameters grows with the training data. Two examples of nonparametric models that we have seen so far are the decision tree classifier/random forest and the kernel SVM.

What is feature extraction?
A method to transform or project the data onto a new feature space. In the context of dimensionality reduction, feature extraction can be understood as an approach to data compression with the goal of maintaining most of the relevant information.

Explain PCA in a nutshell.
It aims to find the directions of maximum variance in high-dimensional data and projects it onto a new subspace with equal or fewer dimensions than the original one. The orthogonal axes (principal components) of the new subspace can be interpreted as the directions of maximum variance, given the constraint that the new feature axes are orthogonal to each other.

What is Exploratory Data Analysis?
Exploratory Data Analysis (EDA) is an important and recommended first step prior to training a machine learning model. For example, it may help us to visually detect the presence of outliers, the distribution of the data, and the relationships between features.

What is word stemming?
The process of transforming a word into its root form, which allows us to map related words to the same stem.

What is OLS?
The Ordinary Least Squares (OLS) method estimates the parameters of the regression line by minimizing the sum of the squared vertical distances (residuals, or errors) to the sample points.
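For a single explanatory variable, OLS has a closed form: slope b = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)² and intercept a = ȳ − b·x̄. A minimal sketch with hypothetical points:

```python
# Closed-form OLS for simple linear regression y = a + b*x.
def ols(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    b = num / den
    a = my - b * mx
    return a, b

# Points lying exactly on y = 1 + 3x, so OLS recovers a=1, b=3.
print(ols([0, 1, 2, 3], [1, 4, 7, 10]))   # -> (1.0, 3.0)
```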

What are residual plots?
Since our model uses multiple explanatory variables, we can't visualize the linear regression line (or hyperplane to be precise) in a two-dimensional plot, but we can plot the residuals (the differences or vertical distances between the actual and predicted values) versus the predicted values to diagnose our regression model. Those residual plots are a commonly used graphical analysis for diagnosing regression models to detect non-linearity and outliers, and to check if the errors are randomly distributed.

What is the elbow method?
A graphical technique to estimate the optimal number of clusters k for a given task. Intuitively, we can say that, if k increases, the distortion (within-cluster SSE) will decrease, because the samples will be closer to the centroids they are assigned to. The idea behind the elbow method is to identify the value of k at which the decrease in distortion slows down most sharply, the "elbow" of the plot.

The two main approaches to hierarchical clustering are?
Agglomerative and divisive hierarchical clustering

What is Deep learning?
It can be understood as a set of algorithms that were developed to train artificial neural networks with many layers most efficiently.

What does the feedforward in feedforward artificial neural network mean?
Feedforward refers to the fact that each layer serves as the input to the next layer without loops, in contrast to recurrent neural networks for example.

What is gradient checking?
It is essentially a comparison between our analytical gradients in the network and numerical gradients, where a numerically approximated gradient =( J(w + epsilon) - J(w) ) / epsilon, for example.
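A minimal sketch of that comparison, using a toy cost J(w) = w² whose analytical gradient is 2w:

```python
# Gradient checking: compare the analytical derivative of J(w) = w^2
# (which is 2w) with the finite-difference approximation from the definition.
def J(w):
    return w ** 2

def numerical_gradient(f, w, epsilon=1e-6):
    return (f(w + epsilon) - f(w)) / epsilon

w = 3.0
analytical = 2 * w
numerical = numerical_gradient(J, w)
print(analytical, round(numerical, 4))   # both ~6.0
```

In a real network, J would be the cost as a function of one weight (with all others held fixed), and a large mismatch between the two numbers signals a bug in backpropagation.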

What are Recurrent Neural Networks?
Recurrent Neural Networks (RNNs) can be thought of as feedforward neural networks with feedback loops, trained with backpropagation through time. In RNNs, the neurons only fire for a limited amount of time before they are (temporarily) deactivated; in turn, these neurons activate other neurons that fire at a later point in time. Basically, we can think of recurrent neural networks as MLPs with an additional time variable. The time component and dynamic structure allow the network to use not only the current inputs but also the inputs that it encountered earlier.

Thursday, 14 September 2017

Tuesday, 12 September 2017

Top 12 Things Not to Say During an Interview : Interview tips

Bio:
Susan Ranford is an expert on job market trends, hiring, and business management. She is the Community Outreach Coordinator for New York Jobs. In her blogging and writing, she seeks to shed light on issues related to employment, business, and finance to help others understand different industries and find the right job fit for them.

Things Not to Say During an Interview
Interviews are places where you have to watch your tongue every second. You don’t want to say too much, the wrong thing, or ramble on incessantly. Some topics should be totally off limits during an interview. Be sure to keep these things on your mind and off your tongue at your next interview.

  • Never admit to being nervous.
Being nervous and admitting it are two different things during an interview.

You should show confidence in yourself first. Hide the case of nerves as best you can, and do not mention being nervous. The interviewer is looking for a confident candidate, and that can be you.

It may seem endearing to admit it, like you’re nervous and excited for the opportunity. But ultimately, it’s better to appear confident and in control of your emotions.

  • Never mention entrepreneurial aspirations during an interview.
Don’t tell an interviewer that you want to be your own boss.

Mentioning that you want to be your own boss puts you in a unique category that you really don’t want to be in. According to Ken Sundheim, it immediately lists you as a threat to the company because you could be there to learn trade secrets or be seen as a potential loss that leads to another day of interviewing. If you want to be hired, don’t tell them you want to work for yourself. Explain why you want to work for them.

  • Don’t be too eager to work.
If you are looking to be hired, being too eager to work can work against you.

Instead of being available for any job, be job specific when you apply. If you are willing to do anything, the interviewer may see you as desperate and not having specialized skills. If you’ll do anything, you’re not necessarily good at something. Make yourself valuable to the company by expressing yourself and your talents well.

  • Don’t Give Apologies for Lack of Experience.
If your resume doesn’t show years of experience in the industry that you are trying to break into, don’t make apologies for this lack of experience.

Build on your strengths rather than dwelling on your weak areas. This advice applies to the mid-career changer as well as the new graduate. If you don’t have the years of experience that the interviewer is requiring, mention any skills that transfer and make you the best qualified person for the job.

  • Don’t tell them to look at your resume.
If you are asked a question, be sure to answer the question.

Don’t refer the interviewer back to your resume. They want to hear an answer directly from you. You are a living, breathing person. Your resume is a piece of paper.

Reminding them that you listed it on your resume is disrespectful, and it is a definite don’t.

  • Don’t talk about your job search.
Let them know how you found them, but don’t talk about the hours you spent looking for other opportunities on local job sites. They don’t care about your job search, they care about potentially hiring you.

Move the focus to you and your skills as much as possible, not your inability to find other opportunities.

  • Don’t wait for questions you want to answer.
Rehearsing and practicing for an interview gives you confidence for answering certain questions, but don’t be listening and anticipating just those questions.

You have to be attentive and able to give answers to all of the questions that are asked. Being able to carry on a conversation without appearing to have it scripted is important to making it past round one in the interview process.

  • Don’t use clichés.
Be original with your words. Always find new and positive ways of saying things. Instead of using buzzwords and clichés, describe yourself with adjectives and phrases that will showcase your creativity and ability to think independently.

The interview is the place to set yourself apart from the crowd, and your conversation is the most obvious way to do this.

  • Definitely lose the filler words.
Any word or phrase that you use to fill in a sentence while you are thinking should not be spoken during an interview. These words include ‘like’, and sounds such as ‘um’, and ‘er’. These do not help you communicate your message clearly and succinctly. You should remain silent rather than filling the gaps with these sounds.

Filler words are often words you don’t even remember saying. A pro tip is to record yourself speaking. Then, watch to see which filler words that you need to eliminate from your vocabulary.

  • Stay on topic.
If you have a tendency to ramble, put a lid on it during the interview. Feel free to be a great storyteller around your friends and family, but not during an interview unless the story you are telling is relevant to the job at hand.

If in your work experiences you achieved great success, then definitely share that story when the time is right. However, if you just can’t wait to tell someone about your weekend plans, keep that story to yourself.

  • Never ask what the company does.
If you have to ask what the company does that you are interviewing with, you haven’t done your homework. This question is the biggest turnoff for recruiters and can summarily end what was a great interview up until you asked this fatal question. If you ask this question, odds are really good that you won’t be hired.

  • Realize the power of your words.
Words are powerful, especially during an interview. Choose them carefully, and you’ll increase the odds of landing the job. If you’ve already been to an interview and have said all of the wrong things, learn from your mistakes.

If you slip up and say one of these things, realize it. Next time you’ll know what not to say.

Sunday, 10 September 2017

Top 20 Mixpanel Analytics Interview Questions with Answers

Here we come with interview questions for a very popular analytics tool known as Mixpanel. So let's start.

What do you know about Mixpanel? Explain.
Ans: Mixpanel is a business analytics service and company. It tracks user interactions with web and mobile applications and provides tools for targeted communication with them. Its toolset contains in-app A/B tests and user survey forms. Data collected is used to build custom reports and measure user engagement and retention. Mixpanel works with web applications, in particular SaaS, but also supports mobile apps. As of January 2016 the company had more than 230 employees, but had to fire 18 people due to overhiring.

In which format Mixpanel store Data?
Ans: JSON
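As an illustration, a Mixpanel-style tracking event is a JSON object with an event name and a properties object, which carries the project token and the user's distinct_id. A minimal Python sketch of building such a payload, with placeholder values:

```python
import json

# A hypothetical Mixpanel-style event payload: an event name plus a
# "properties" object carrying the project token and the user's distinct_id.
event = {
    "event": "Signed Up",
    "properties": {
        "token": "YOUR_PROJECT_TOKEN",   # placeholder, not a real token
        "distinct_id": "user-12345",     # ties the event to one user
        "plan": "Premium",
    },
}

payload = json.dumps(event)
print(payload)
```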

Why Mixpanel, even though there are many tools in the market?
Ans: Mixpanel provides a solution that lets businesses track the specific user actions that matter to their business questions, along with detailed information about those actions and users. These metrics can be tracked on any platform, and the customer gets maximum flexibility and granularity in what they track. Once the data is in, Mixpanel's reports let customers ask very complex, specific questions of this data.

Explain People analytics?
People analytics helps you understand and re-engage your customers. Imagine being able to understand who your users are, see what they do before or after they sign up, re-engage them with messages, and dig deep into your customer revenue. Now it’s possible, all in one place, with People Analytics.

Explain what is Segmentation?
Ans: Segmentation allows you to view top events on your app and easily break down complex events in Mixpanel. You have the ability to drill down by an unlimited number of properties to gather instant insight into these key actions on your app. You can choose any individual event in Mixpanel, compare multiple events, and see total events, unique users firing these events, and the average number of events per user. There are also multiple options for viewing this data.

What are Mixpanel's top features for analysing your users?
Ans:
Drill into your data: with Insights, find out where to focus your resources when building your product.
Visualize your data in different ways: smooth out noisy results to really understand what’s going on.
Discover insights quickly: when digging into complex questions, don’t get slowed down waiting for an answer.
Bookmarks: let you save reports that you look at a lot, so you can save time.

EXAMPLE QUESTIONS MIXPANEL CAN ANSWER TO HELP YOU MAKE DECISIONS
(to know better about Mixpanel)
Which sources have driven the most mobile installations over time?

Which feature should I invest in further to drive up customer conversion?

Was the ROI on my latest ad social spend campaign more or less than previous campaigns?

How is Mixpanel different from Google analytics?
Ans: Mixpanel differs from Google Analytics in one major way: instead of tracking page views, it tracks the actions people take in your mobile or web application.

How can I export my people profiles into a CSV?
Ans: People profiles currently cannot be exported via the Mixpanel UI; however, you can easily export your people profiles, for example from within Mixpanel using one simple query in JQL.

Explain JQL(JavaScript Query Language)?
Ans: JQL – JavaScript Query Language – uses the full power of a robust and popular programming language, JavaScript, to let you analyze your data in Mixpanel. It was designed for performance and flexibility so that developers and data scientists can pull the most valuable insights from their data with ease ‐ no matter how complex the question is.

What are the advantages of using JavaScript for analytics over SQL? JAVASCRIPT VS. SQL
Advantages of using JavaScript for analytics:

  • The full power of a programming language powered by V8 ‐ the JavaScript engine in Chrome
  • Easily express & compose queries that are more understandable
  • A modern & popular programming language amongst developers to quickly get started
  • Flexible to use with unstructured, schema less data

Disadvantages of using SQL for analytics:

  • Meant for rigid schemas for traditional relational databases
  • Difficult to manipulate and transform the data
  • Complex queries become unwieldy to read & compose
  • Limited flexibility due to query functions available in SQL


How do I track a page view in Mixpanel? Give an example.
Ans: Using the JavaScript library, you call mixpanel.track with an event name and optional properties, e.g. mixpanel.track('Page Viewed', {'page': '/pricing'});

What is distinct_id?
Ans: Mixpanel can keep track of actions in your application right down to the individual customer level. This is done using a property called distinct_id. The property can (and in most cases should) be included with every event you send to Mixpanel to tie it to a user. Distinct_id plays a vital role across most Mixpanel reporting.

Where can I find my project token?
Ans: Click your name in the upper right-hand corner of your Mixpanel project and select Project settings to see the project token for the project you’re currently viewing.

What data types does Mixpanel accept as Properties?
Ans: String
Numeric
Boolean
Date
List

Explain Activity Feed in Mixpanel?
Make better product decisions by seeing the full story of how individual customers use your product. Activity Feed puts customer behavior into an event-based timeline, so you can follow along as people experience your product, seeing where they get stuck along the way.



Wednesday, 23 August 2017

What is a Cached Report in SSRS?


Caching keeps a copy of the last executed report in the report server temp DB.

SSRS lets you enable caching for a report and maintain a copy of the processed report, in intermediate format, in the report server temp DB, so that if the same report is requested again, the stored copy can be rendered in the desired format and served. This improvement in subsequent report processing is especially evident when the report is large and accessed frequently.

Please note that the cached report will continue to show the same data even if the data has changed in the database, until the cache is refreshed. You can set the expiration in Report Manager; after expiration, a cached report is replaced with a newer version when the user selects the report again.

Thursday, 17 August 2017

What is Snapshot Report in SSRS?

A Report Snapshot in SSRS is a report that contains layout information and a data-set that is retrieved at a specific point in time. Unlike on-demand reports, which get up-to-date query results when you select them, report snapshots are processed on a schedule and then saved to a report server. When you select a report snapshot for viewing, the report server retrieves the stored report from the report server database, and shows the data and layout that were current for the report at the time the snapshot was created.

Steps to create a Report Snapshot:

  • Go to Report Manager, where the RDLs are deployed.
  • Right-click the RDL and select Manage.
  • Then select Snapshot Options from the left pane and schedule the snapshot of the report.
 

Wednesday, 16 August 2017

How to Replace Null Values in an SSRS Report?

You can replace NULL values with some custom value using the IIF and IsNothing functions in SSRS.

Just right-click the TextBox in which you want to replace the NULL value and write an expression:

=IIF(IsNothing(Fields!ColName.Value), 0, Fields!ColName.Value)   [To replace with 0]
or
=IIF(IsNothing(Fields!ColName.Value), "Not Available", Fields!ColName.Value)   [To replace with a string]

Friday, 28 July 2017

Sunday, 16 July 2017

What is Report Builder in SSRS?


Report Builder is a report-authoring tool used to design ad-hoc reports and to manage existing reports. You can preview your report in Report Builder and publish it to a Reporting Services server. In short, Report Builder provides the capability to design, execute, and deploy SSRS reports.