Thursday, 15 December 2016

Top 100 PEGA-PRPC Interview Questions and Answers for Experienced and Freshers

Describe Automated Testing?

Automated Testing (test automation) is the use of special software (testing tools) to control the execution of tests and the comparison of actual outcomes with predicted outcomes. Test automation can automate some repetitive but necessary tasks in a formalized testing process already in place, or perform additional testing that would be difficult to do manually. Test automation is critical for continuous delivery and continuous testing.

What do you understand by data page in Pega?
Data pages (known previous to Pega 7 as "declare pages" and "declarative pages") store data that the system needs to populate work item properties for calculations or for other processes. When the system references a data page, the data page either creates an instance of itself on the clipboard and loads the required data in it for the system to use, or responds to the reference with an existing instance of itself.

How do you create a data page in Pega?
Using the Data Explorer, in the Designer Studio, you can quickly create a data page and specify its structure, properties, data source or sources, when and how it refreshes its data, and any parameters it accepts.
To open the Data Explorer, click Data in the explorers panel in the Designer Studio. The Data Explorer displays a list of all the data object types (classes) in your application, and all the data pages for each data object type.

What is the difference between default locking and optimistic locking in Pega 7?
Pega 7 provides two case locking options and capabilities. You make your basic configurations in the Case Designer at the top-level case type. The settings cascade to all the subcase types.

  • Default locking – When a case is opened in a Perform harness, it and its parent case are locked. Only one user can view and update the case at a time. You can override the default behavior at each subcase type level.
  • Optimistic locking – Multiple users can open a case in a Perform harness at the same time to review or update it. The first user to submit an update "wins;" users who had updated the form but had not submitted changes must refresh the form, re-enter their updates, then submit them.


What do you understand by Agent in Pega?
An agent is an internal background process operating on the server that runs activities on a periodic basis.  Agents route work according to the rules in your application; they also perform system tasks such as sending email notifications about assignments and outgoing correspondence, generating updated indexes for the full-text search feature, synchronizing caches across nodes in a multiple node system, and so on.


How to export data from pega express into CSV/Excel?
You can export data records to a CSV file.
  1. Turn editing on.
  2. Click the data icon on the left side of the screen.
  3. Click the data type you are interested in exporting.
  4. Navigate to the Records tab (the second tab) for that data type.
  5. Click the Export link located on the top right of the Records tab. This exports a CSV file of records that you can open in Excel.

Difference between work list and work basket?
Work List: A work list contains the tasks assigned to an individual person.
Work Basket: A work basket contains the tasks assigned to a group of individuals in the project.


What is the difference between Standard Agent and Advanced Agent in Pega?

  • Standard: The default Queue Mode setting for agents created in V5.4 is Standard. Standard mode assumes that the transactional processing will be handled by the agent queue functionality, and that the agent activity will contain only business logic. When the mode is set to Standard, then when the agent wakes up, it immediately checks the agent queue to see if there are any entries for that agent. If there are, it processes the entries until either the Max Records number of entries has been processed, or the queue is empty – whichever comes first. After this processing, the agent stops and goes “back to sleep” for its specified interval.
  • Advanced: In Advanced mode, the agent activity is again responsible for both transactional and business processing. However, unlike Legacy mode, the agent activity in Advanced mode may still use the agent queue functionality; it just must do so explicitly (rather than Standard mode, where the agent queue is engaged automatically). So when the agent “wakes up,” it runs the activity directly, and that activity may either call the agent queue, or just do processing without a queue.

Name the three queue Mode values?
1. Legacy
2. Standard
3. Advanced

Can we write Automated Unit Test in Pega Tool?

What do you understand by Offline mobility feature of Pega 7?
The Pega 7 Platform provides the ability to build offline mobile applications for mobile workers. It offers a seamless experience for a field service employee working in locations that have no network connection or whose device loses a network connection.
Mobile workers can log in, create cases, open items from their worklist, and complete assignments, all while working offline. This harnesses the Pega 7 Platform standards-based UI capabilities along with the custom mobile application. The mobile application with offline capability uses the same building blocks as all Pega applications.


What can be the scope of data page?
The data page scope can be one of the following
Node – any requestor executing on the current node can access the pages.

Thread – the page is created in a single requestor thread, and can be accessed as often as needed by processing in that thread. Access by separate requestors causes the rule to create distinct pages, which might have different contents.

Requestor – all threads for the current requestor.

SQL SERVER CASE PUZZLE QUERY

What would be the output of following query? 😄
SELECT CASE WHEN 1=1 THEN 'Vikas Ahlawat1'
            WHEN 2=2 THEN 'Vikas Ahlawat2'
            ELSE 'Vikas Ahlawat3' END AS Name

Ans: 'Vikas Ahlawat1' – a CASE expression returns the result for the first WHEN condition that evaluates to true, so the remaining branches are never evaluated.

Thursday, 8 December 2016

How will you check the priority assigned to SQL Server Management Studio?

SQL SERVER PERFORMANCE TUNING TRICK
If the SSMS priority is low it will degrade SSMS performance, so if your server is responding slowly you should check the base priority of SSMS.
Priority is the weight given to a resource that pushes the processor to give it greater preference when executing. To determine the priority of a process, follow these steps:

  1. Launch Windows Task Manager.
  2. Select View ➤ Select Columns.
  3. Select the Base Priority check box.
  4. Click the OK button.

These steps will add the Base Priority column to the list of processes. You will then be able to see that the SQL Server Management Studio process (Ssms.exe) runs at Normal priority by default, whereas the Windows Task Manager process (Taskmgr.exe) runs at High priority.

Wednesday, 7 December 2016

What do you understand by DAC in SQL Server?

Microsoft SQL Server provides a dedicated administrator connection (DAC). The DAC allows an administrator to access a running instance of SQL Server Database Engine to troubleshoot problems on the server—even when the server is unresponsive to other client connections. The DAC is available through the sqlcmd utility and SQL Server Management Studio. The connection is only allowed from a client running on the server. No network connections are permitted.
To use SQL Server Management Studio with the DAC, connect to an instance of the SQL Server Database Engine with Query Editor by typing ADMIN: before the server name. Object Explorer cannot connect using the DAC.
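As a hedged illustration, once you are connected through the DAC (for example by prefixing the server name with ADMIN: in Query Editor), a lightweight diagnostic query such as the following can be used to look for blocked requests on an otherwise unresponsive server:

-- Requests that are currently blocked, and the session blocking them
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.status
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;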


What is the differences in CRM Application Architecture from CRM Online to CRM On-Premises?


  • Sticking with development, on premise allows for custom developed plug-ins. Online allows for this as well, but with limitations. Plug-ins are sandboxed with limited permissions and can only make requests to same CRM tenant or to external web services. Here again, this really comes down to how much customization you intend or foresee happening via plug-ins.
  • For all you SQL gurus out there, Online does not allow direct access to SQL data, while On-Premises does. This limits your development of custom reports to the use of FetchXML for Online, while you can use either FetchXML or direct SQL access for On-Premises. This could be somewhat of an issue: if you have a development staff versed in SQL, there will be a slight learning curve when switching over to FetchXML.
  • CRM Online only offers Claims-Based Authentication and Security, whereas CRM On-Premises offers either Claims-Based Authentication or ADFS.
  • CRM On-Premises allows unlimited workflows and entities, whereas Online has a limit of 200 workflows and 300 entities.


Tuesday, 6 December 2016

What is Partitioning in SQL Server?

The partitioning element (the PARTITION BY clause in a window function's OVER clause) restricts the window to only those rows that have the same values in the partitioning attributes as the current row.
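A small sketch (dbo.Orders, CustomerId and Amount are hypothetical names) showing the partitioning element of a window function: the SUM is computed separately for the rows that share the current row's CustomerId, without collapsing the rows the way GROUP BY would:

SELECT OrderId,
       CustomerId,
       Amount,
       SUM(Amount) OVER (PARTITION BY CustomerId) AS CustomerTotal
FROM dbo.Orders;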

Types of Views in SQL Server?

Below are the types of views in SQL Server:


  • Indexed Views: An indexed view is a view that has been materialized. This means the view definition has been computed and the resulting data stored just like a table. You index a view by creating a unique clustered index on it. Indexed views can dramatically improve the performance of some types of queries. Indexed views work best for queries that aggregate many rows. They are not well suited for underlying data sets that are frequently updated. (See the sketch after this list.)
  • Partitioned Views: A partitioned view joins horizontally partitioned data from a set of member tables across one or more servers. This makes the data appear as if it comes from one table. A view that joins member tables on the same instance of SQL Server is a local partitioned view.
  • System Views: System views expose catalog metadata. You can use system views to return information about the instance of SQL Server or the objects defined in the instance. For example, you can query the sys.databases catalog view to return information about the user-defined databases available in the instance.
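As a hedged, minimal sketch of an indexed view (the dbo.Sales table, its columns and the view name below are hypothetical), note the WITH SCHEMABINDING clause and the COUNT_BIG(*) column, both of which SQL Server requires before a unique clustered index can be created on the view:

CREATE VIEW dbo.vSalesByProduct
WITH SCHEMABINDING                        -- required for an indexed view
AS
SELECT ProductId,
       SUM(Quantity) AS TotalQuantity,    -- Quantity is assumed NOT NULL (a requirement for SUM here)
       COUNT_BIG(*)  AS RowCnt            -- required when the view uses GROUP BY
FROM dbo.Sales
GROUP BY ProductId;
GO

-- Materialize the view: the unique clustered index stores the view's result set like a table
CREATE UNIQUE CLUSTERED INDEX IX_vSalesByProduct
ON dbo.vSalesByProduct (ProductId);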


What are the major problem areas that can degrade SQL Server performance?

Following are the major problem areas that can degrade SQL Server performance:

  • Poor indexing/Bad indexing
  • Inaccurate statistics
  • Poor query design
  • Poor execution plans
  • Non-set-based operations, usually T-SQL cursors
  • Poor database design
  • Excessive blocking and deadlocks
  • Excessive fragmentation
  • Nonreusable execution plans
  • Frequent recompilation of queries
  • Improper use of cursors
  • Improper configuration of the database log

Thursday, 1 December 2016

What do you do to performance-tune your SQL Server on a regular basis in your current organization?

Below are my regular jobs to keep my SQL Server fast.


  • Identifying problematic SQL queries
  • Analyzing a query execution plan
  • Evaluating the effectiveness of the current indexes
  • Avoiding bookmark lookups
  • Evaluating the effectiveness of the current statistics
  • Analyzing and resolving fragmentation
  • Optimizing execution plan caching
  • Analyzing and avoiding stored procedure recompilation
  • Minimizing blocking and deadlocks
  • Analyzing the effectiveness of cursor use

What do you understand by DMVs in SQL Server?

DMVs (dynamic management views) were introduced in SQL Server 2005. They allow you to get better insight into what is happening inside SQL Server; without them, a lot of this information was unavailable or very difficult to obtain.

Here are some of the more useful DMVs that you should familiarize yourself with (a sample query follows the list):

  • sys.dm_exec_sessions - Sessions in SQL Server
  • sys.dm_exec_cached_plans - Cached query plans available to SQL Server
  • sys.dm_exec_connections - Connections to SQL Server
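For example, a quick sketch combining one of the DMVs above with the sys.dm_exec_sql_text function shows which query plans are sitting in the plan cache and how often each has been reused:

SELECT cp.usecounts,
       cp.cacheobjtype,
       cp.objtype,
       st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;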

What is R Services introduced in SQL Server 2016?

SQL Server R Services (In-Database) provides a platform for developing and deploying intelligent applications that uncover new insights. You can use the rich and powerful R language and the many packages from the community to create models and generate predictions using your SQL Server data.
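A minimal sketch of calling R from T-SQL through sp_execute_external_script; this assumes R Services (In-Database) is installed and that the 'external scripts enabled' option has been turned on with sp_configure:

-- One-time setup (on SQL Server 2016 a service restart may be needed afterwards)
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE;

-- Pass a result set to R and return it back to SQL Server
EXEC sp_execute_external_script
     @language = N'R',
     @script = N'OutputDataSet <- InputDataSet;',
     @input_data_1 = N'SELECT 1 AS Val UNION ALL SELECT 2 UNION ALL SELECT 3'
WITH RESULT SETS ((Val INT));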


Top competitors of Dynamics CRM in the market?

Microsoft Dynamics CRM is undoubtedly one of the top products in the CRM space. However, the following are other products that compete with Microsoft Dynamics CRM:

  • Salesforce.com
  • Oracle
  • SAP
  • Sage CRM
  • Sugar CRM
  • NetSuite

Microsoft CRM version history

Below is the Microsoft CRM version history:

Microsoft CRM 1.0 (first version)
Microsoft CRM 1.2
Microsoft Dynamics CRM 3.0
Microsoft Dynamics CRM 4.0
Microsoft Dynamics CRM 2011
Microsoft Dynamics CRM 2013
Microsoft Dynamics CRM 2015
Microsoft Dynamics CRM 2016 (Latest version)

How To Get a Job Easily in 7 Steps: Goal Setup

Have you been looking for a job for months? Have you sent your resume to hundreds of job applications without getting any results?

In this article I want to share with you the process that I have usually followed to get work,
one that has given me very good results, and that is quite possibly radically different from the process you are following.
The method is really simple. It is divided into 7 steps, and once you structure it and repeat it every day, you will be able to multiply your results and find work more easily.

HOW TO GET A JOB IN 7 STEPS
Here is the step-by-step guide for getting a job, and believe me, if you follow the steps below carefully, you will get one. Just make sure to read the full article carefully.

⏩ 1. ANALYSE YOUR STRENGTHS AND WEAKNESSES
The first point is to be aware of your strengths and weaknesses. One of the typical questions in a job interview is "tell me your strengths and weaknesses," and if you have not thought about them beforehand, you can be caught off guard and jeopardize the interview.

On the other hand, when developing your resume, strengths and weaknesses play a fundamental role, since they are the first thing the person who looks at your resume will notice, and they will decide whether you get a chance or not based on them.

Think about your strengths in the following areas:
TRAINING
What stands out most, or what is missing, in your training for the job you want?
If you have an MBA, a degree, or courses abroad, these are details to put on your resume and to bring up in the interview. Conversely, if you lack formal training, you should focus your resume and interview on other strengths.
WORK EXPERIENCE
How much relevant experience do you have for the position?

Do not think only in terms of technical requirements (such as programming or proficiency with Microsoft Office); also think in terms of skills (project management, leadership, working under pressure). Note the most important aspects, as you will use them in the next step.

SPECIFIC KNOWLEDGE TO THE JOB
What kind of work do you have a good profile for?

This is the technical part of your strengths and weaknesses; here you have to quantify as much as possible. For example, your level of English, tool certifications, or achievements in your previous jobs are aspects that make you more attractive to employers.

MOTIVATION
What type of work motivates you, and what kind of work does not motivate you?

This is key to a successful job interview. Identify the aspects you liked most about your training or previous work, and which of them you are looking for in your next job, because those jobs will most likely be closely related to your strengths and only slightly related to your weaknesses.

You can read my article on how to find a job you are passionate about to learn what kind of work may be appropriate for you.

Once you do this, make a list of possible job positions you think you could get, taking all these aspects into account.

⏩ 2. MAKE A GOOD RESUME
With the information you have collected in the previous step in terms of training and experience, you will develop a resume that shows the company you are able to solve their problems and needs, emphasizing your strengths and minimizing your potential weaknesses (such as a lack of training, experience, etc.).

⏩ 3. TAKE ADVANTAGE OF SOCIAL NETWORKS AND JOB PORTALS
Job searching is increasingly carried out through social networks such as LinkedIn or Xing.
Social networks offer many more possibilities than traditional job portals for getting work.
While on job portals you adopt a passive attitude, sending your resume and waiting to be contacted, on social networks you adopt an active attitude and make yourself known as a professional.

Specifically, LinkedIn offers the possibility to:
Contact other professionals in order to increase your chances of getting a job.
Follow companies to find out when they post jobs and submit your resume.

On Twitter you can:
Follow companies to be aware of the jobs they publish.
Use #hashtags to find possible jobs (#SoftwareJobs, #ITJobs, #ManagerJob). These hashtags can save you hours of searching job portals.
Follow other job-related accounts.

⏩ 4. MAKE A LIST OF COMPANIES WHERE YOU WOULD LIKE TO WORK
The next step, once you've built your resume and opened your profiles on social networks, is to find companies that could offer you the job of your dreams. To do this:

Find information about companies that have announced job offers that might interest you.
Search for potential contacts within those companies on social networks (LinkedIn).
Connect with them and send them a message in order to have a chance of getting a job interview.
If you send one mail a day to a different person, at the end of the month you will have 30 new contacts and 30 potential employment opportunities. It all adds up.

⏩ 5. MAKE "NETWORKING"
This is something that many people overlook.
Did you know that large companies select only 1% of their employees through job portals?

By this I do not mean that you will not get a job that way; I'm just saying you would have to send at least 100 resumes to have any chance. How many hours would you waste doing this?

Instead of sending your resume through job portals, focus on contacting people who can give you a job. Focus on building relationships through social networks like LinkedIn or through events in your sector.

If you make 1 new contact every day, at the end of the year you can have 365 new employment opportunities.

⏩  6. BUILD YOUR PERSONAL BRAND
Personal branding is a booming trend, and it gives you far more value than any professional resume can.
Among the advantages of having a personal brand we can include:

- Positioning yourself as an expert in a niche market.

- Sending a single, clear message that sets you apart from the other candidates for the position.

- Becoming the demand rather than the supply. You are not just another job-seeking candidate; you are a leader that companies look for.

You can start building your personal brand on social networks like LinkedIn and Twitter, and you can go further by opening a blog or publishing YouTube videos related to the field in which you work.

⏩ 7. PERFECT THE JOB SEARCH PROCESS
The last step is to integrate all the above steps into a systematic job search and contact-creation process. You must perfect each step of the process:

Differentiate your resume: build a resume that differentiates you from other candidates.
Learn to use social networks and job portals: perfect the art of looking for work on the Internet; use your existing network of contacts, expand it, and generate new employment opportunities.
Build your personal brand: go beyond a simple resume; showing your knowledge and positioning yourself as a leader increases your chances of getting a job.
Hone your strategy and tactics in job interviews: learn to prepare for, respond during, and systematize the whole selection process of any company.
The key to all this is: "practice, practice, practice."

I hope the article has been useful, Thanks for reading.
Best of luck

Wednesday, 23 November 2016

What are the Goals of CRM Security Model?

Microsoft Dynamics 365 and Microsoft Dynamics 365 (online) provide a security model that protects data integrity and privacy, and supports efficient data access and collaboration. The goals of the model are as follows:

  • Provide users with the access only to the appropriate levels of information that is required to do their jobs.
  • Categorize users by role and restrict access based on those roles.
  • Support data sharing so that users and teams can be granted access to records that they do not own for a specified collaborative effort.
  • Prevent a user's access to records the user does not own or share.

Difference between role-based, record-based and field-level security in Dynamics CRM


  • Role-based security: Role-based security in Microsoft Dynamics 365 focuses on grouping a set of privileges together that describe the responsibilities (or tasks that can be performed) for a user. Microsoft Dynamics 365 includes a set of predefined security roles. Each aggregates a set of user rights to make user security management easier. Also, each application deployment can define its own roles to meet the needs of different users.
  • Record-based security: Record-based security in Microsoft Dynamics 365 focuses on access rights to specific records.
  • Field-level security: Field-level security in Microsoft Dynamics 365 restricts access to specific high business impact fields in an entity only to specified users or teams.

Combine role-based security, record-level security, and field-level security to define the overall security rights that users have within your custom Microsoft Dynamics 365 application.

Field Level Security in Microsoft Dynamics CRM

In Microsoft Dynamics 365 and Microsoft Dynamics 365 (online), you use field-level security to restrict access to high business impact fields to specific users and teams. For example, you use this to enable only certain users to read or update the credit score for a customer. For this release, field-level security can be applied to both custom fields and many out-of-box fields.

The following steps describe how to restrict access to a field:

  1. Enable field-level security for an attribute
  2. Create a field-level security profile
  3. Associate users or teams with the profile
  4. Add specific field permissions, such as Create, Update or Read for a specific attribute to the profile

What is MS CRM? MS Dynamics CRM Interview Question

CRM stands for customer relationship management. CRM is a category of integrated, data-driven solutions that improve how you interact and do business with your customers. CRM systems and applications are designed to manage and maintain customer relationships, track engagements and sales, and deliver actionable data all in one place.

Thursday, 17 November 2016

Difference between Session and Connection in SQL Server?


  • Sessions – when the client application connects to SQL Server, the two sides establish a “session” on which to exchange information. Strictly speaking, a session is not the same as the underlying physical connection; it is SQL Server's logical representation of a connection. But for practical purposes, you can think of this as being a connection (session =~ connection). See sys.dm_exec_sessions. This is the old SPID that existed in SQL Server 2000 and earlier. You may sometimes notice a single session repeating multiple times in a DMV output. This happens because of parallel queries. A parallel query uses the same session to communicate with the client, but on the SQL Server side multiple workers (threads) are assigned to service the request. So if you see multiple rows with the same session ID, know that the query request is being serviced by multiple threads.


  • Connections – this is the actual physical connection established at the lower protocol level, with all of its characteristics; see sys.dm_exec_connections. There is a 1:1 mapping between a session and a connection (see the sketch below).
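To see the session-to-connection mapping described above, a sketch query joining the two DMVs on session_id returns each connected session alongside its physical connection details:

SELECT s.session_id,
       s.login_name,
       s.host_name,
       c.client_net_address,
       c.net_transport
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_connections AS c
    ON c.session_id = s.session_id;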

What is a Data Page in SQL Server?

A page is the most basic unit of storage in SQL Server. The disk space allocated to a data file (.mdf or .ndf) in a database is logically divided into pages numbered contiguously from 0 to n. Disk I/O operations are performed at the page level. That is, SQL Server reads or writes whole data pages.
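As a rough illustration (dbo.Emp is a hypothetical table name), you can see how many 8-KB pages a table's indexes occupy, and how full those pages are, with sys.dm_db_index_physical_stats:

SELECT index_id,
       index_level,
       page_count,
       avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.Emp'), NULL, NULL, 'DETAILED');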

What is Fill Factor in SQL Server? SQL Server Interview Question

In SQL Server, a page (8 KB in size) is the basic unit of data storage. Data is stored in the leaf-level pages of an index. The percentage of space to be filled with data in a leaf-level page is decided by the fill factor; the remaining space is left free for future growth of data in the page.
Fill factor is a number from 1 to 100. Its default value is 0, which is the same as 100. So a fill factor of 80 means that 80% of the space in each leaf page is filled with data and the remaining 20% is kept vacant for future use; the higher the fill factor, the more data is stored in the page. The fill factor setting is applied when we create or rebuild an index (see the sketch below).
To check the server-wide default, right-click your server instance in SSMS, select Properties, and look under Database Settings.
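Fill factor is usually applied per index when it is created or rebuilt; a minimal sketch (dbo.Emp and its EmpId column are hypothetical):

-- Leave 20% free space in each leaf-level page for future inserts and updates
CREATE NONCLUSTERED INDEX IX_Emp_EmpId
ON dbo.Emp (EmpId)
WITH (FILLFACTOR = 80);

-- Or apply it while rebuilding an existing index
ALTER INDEX IX_Emp_EmpId ON dbo.Emp
REBUILD WITH (FILLFACTOR = 80);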


Reference: CodeProject

Tuesday, 15 November 2016

How to access/ping a server located on AWS?

Using UI:
In your security group:

    • Click the inbound tab
    • Create a custom ICMP rule
    • Select echo request
    • Use range 0.0.0.0/0 for everyone or lock it down to specific IPs
    • Apply the changes
    • and you'll be able to ping.
Using cmd: To do this on the command line you can run:
    • ec2-authorize <group> -P icmp -t -1:-1 -s 0.0.0.0/0

How to delete files recursively from an S3 bucket?

aws s3 rm --recursive s3://your_bucket_name/foo/

Or delete everything under the bucket:
aws s3 rm --recursive s3://your_bucket_name

If what you want is to actually delete the bucket, there is one-step shortcut:
aws s3 rb --force s3://your_bucket_name

What is the difference between Amazon SNS and Amazon SQS?

  • Amazon SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates.
  • Amazon SQS is a message queue service used by distributed applications to exchange messages through a polling model, and can be used to decouple sending and receiving components—without requiring each component to be concurrently available.

Explain what happens when I reboot an EC2 instance?

Rebooting an instance is like rebooting a PC. The hard disk isn't affected. You don't return to the image's original state, but the contents of the hard disks are those before the reboot.
Rebooting isn't associated with billing. Billing starts when you instantiate an image and stops when you terminate it; rebooting in between has no effect.

What steps do you follow to make 10,000 files public in S3?

I will generate a bucket policy which gives access to all the files in the bucket. The bucket policy can be added to a bucket through AWS console.
{
    "Id": "...",
    "Statement": [ {
        "Sid": "...",
        "Action": [
            "s3:GetObject"
        ],
        "Effect": "Allow",
        "Resource": "arn:aws:s3:::bucket/*",
        "Principal": {
            "AWS": [ "*" ]
        }
    } ]
}

Is it possible to use AWS as a web host? What are the ways of using AWS as a web host?

Yes it is completely possible to host websites on AWS in 2 ways:
  1.  Easy - S3 (Simple Storage Service) is a bucket storage solution that lets you serve static content, e.g. images, and it can also host flat .html files, so your site gets served with very little configuration on your part (but also little control).
  2. Trickier - You can use EC2 (Elastic Compute Cloud) to create a virtual Linux instance and then install Apache/Nginx (or whatever) on it, giving you complete control over serving whatever/however you want. You use security groups to enable/disable ports for individual machines or groups of them.

How will you find out the instance ID from within an EC2 machine?

wget -q -O - http://instance-data/latest/meta-data/instance-id

If you need programmatic access to the instance ID from within a script:
die() { status=$1; shift; echo "FATAL: $*"; exit $status; }
EC2_INSTANCE_ID="`wget -q -O - http://instance-data/latest/meta-data/instance-id || die \"wget instance-id has failed: $?\"`"

What are the benefits of EBS vs. instance-store?

  • EBS backed instances can be set so that they cannot be (accidentally) terminated through the API.
  • EBS backed instances can be stopped when you're not using them and resumed when you need them again (like pausing a Virtual PC), at least with my usage patterns saving much more money than I spend on a few dozen GB of EBS storage.
  • EBS backed instances don't lose their instance storage when they crash (not a requirement for all users, but makes recovery much faster)
  • You can dynamically resize EBS instance storage.
  • You can transfer the EBS instance storage to a brand new instance (useful if the hardware at Amazon you were running on gets flaky or dies, which does happen from time to time)
  • It is faster to launch an EBS backed instance because the image does not have to be fetched from S3.

Which AWS service is responsible for managed email and calendaring?

Amazon WorkMail is a managed email and calendaring service with strong security controls and support for existing desktop and mobile email clients. Users can access their email, contacts, and calendars wherever they use Microsoft Outlook, their browser, or their iOS and Android mobile devices. You can integrate Amazon WorkMail with your existing corporate directory and control both the keys that encrypt your data and the location where your data is stored.

What is Amazon AppStream and advantage of using AppStreaming?

Amazon AppStream is an application streaming service that lets you stream your existing resource-intensive applications from the cloud without code modifications.

Advantages of Streaming Your Application
Interactively streaming your application from the cloud provides several benefits:
  • Remove Device Constraints – You can leverage the compute power of AWS to deliver experiences that wouldn't normally be possible due to the GPU, CPU, memory or physical storage constraints of local devices.
  • Support Multiple Platforms – You can write your application once and stream it to multiple device platforms. To support a new device, just write a small client to connect to your streaming application.
  • Fast and Easy Updates – Because your streaming application is centrally managed by Amazon AppStream, updating your application is as simple as providing a new version of your streaming application to Amazon AppStream. You can immediately upgrade all of your customers without any action on their part.
  • Instant On – Streaming your application with Amazon AppStream lets your customers start using your application or game immediately, without the delays associated with large file downloads and time-consuming installations.
  • Improve Security – Unlike traditional boxed software and digital downloads, where your application is available for theft or reverse engineering, Amazon AppStream stores your streaming application binary securely in AWS datacenters.
  • Automatic Scaling – You can use Amazon AppStream to specify capacity needs, and then the service automatically scales your streamed application and connects customers’ devices to it.

Explain what is Regions and Endpoints in AWS?

To reduce data latency in your applications, most Amazon Web Services products allow you to select a regional endpoint to make your requests. An endpoint is a URL that is the entry point for a web service. For example, https://dynamodb.us-west-2.amazonaws.com is an entry point for the Amazon DynamoDB service.
Some services, such as IAM, do not support regions; their endpoints therefore do not include a region. A few services, such as Amazon EC2, let you specify an endpoint that does not include a specific region, for example, https://ec2.amazonaws.com. In that case, AWS routes the endpoint to us-east-1.

What Is Amazon CloudSearch and its features?

Amazon CloudSearch is a fully managed service in the cloud that makes it easy to set up, manage, and scale a search solution for your website or application.
You can use Amazon CloudSearch to index and search both structured data and plain text. Amazon CloudSearch features:
  • Full text search with language-specific text processing
  • Boolean search
  • Prefix searches
  • Range searches
  • Term boosting
  • Faceting
  • Highlighting
  • Autocomplete Suggestions

What is Amazon Kinesis Firehose?

Amazon Kinesis Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3) and Amazon Redshift.

What is AWS Data Pipeline? and what are the components of AWS Data Pipeline?

AWS Data Pipeline is a web service that you can use to automate the movement and transformation of data. With AWS Data Pipeline, you can define data-driven workflows, so that tasks can be dependent on the successful completion of previous tasks.

The following components of AWS Data Pipeline work together to manage your data:

  • A pipeline definition specifies the business logic of your data management. For more information, see Pipeline Definition File Syntax.
  • A pipeline schedules and runs tasks. You upload your pipeline definition to the pipeline, and then activate the pipeline. You can edit the pipeline definition for a running pipeline and activate the pipeline again for it to take effect. You can deactivate the pipeline, modify a data source, and then activate the pipeline again. When you are finished with your pipeline, you can delete it.
  • Task Runner polls for tasks and then performs those tasks. For example, Task Runner could copy log files to Amazon S3 and launch Amazon EMR clusters. Task Runner is installed and runs automatically on resources created by your pipeline definitions. You can write a custom task runner application, or you can use the Task Runner application that is provided by AWS Data Pipeline. For more information, see Task Runners.

What is Amazon EMR?

Amazon Elastic MapReduce (Amazon EMR) is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects, such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence workloads. Additionally, you can use Amazon EMR to transform and move large amounts of data into and out of other AWS data stores and databases, such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB.

What is AWS WAF? What are the potential benefits of using WAF?

AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to Amazon CloudFront and lets you control access to your content. Based on conditions that you specify, such as the IP addresses that requests originate from or the values of query strings, CloudFront responds to requests either with the requested content or with an HTTP 403 status code (Forbidden). You can also configure CloudFront to return a custom error page when a request is blocked.
Benefits of using WAF:
  • Additional protection against web attacks using conditions that you specify. You can define conditions by using characteristics of web requests such as the IP address that the requests originate from, the values in headers, strings that appear in the requests, and the presence of malicious SQL code in the request, which is known as SQL injection.
  • Rules that you can reuse for multiple web applications
  • Real-time metrics and sampled web requests
  • Automated administration using the AWS WAF API

What is the AWS Key Management Service?

The AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data.

Explain what is ElastiCache?

ElastiCache is a web service that makes it easy to set up, manage, and scale distributed in-memory cache environments in the cloud.

Explain what is DynamoDB in AWS?

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. You can use Amazon DynamoDB to create a database table that can store and retrieve any amount of data, and serve any level of request traffic. Amazon DynamoDB automatically spreads the data and traffic for the table over a sufficient number of servers to handle the request capacity specified by the customer and the amount of data stored, while maintaining consistent and fast performance.

Explain how the buffer is used in Amazon web services?

The buffer is used to make the system more robust in managing traffic or load by synchronizing different components. Usually, components receive and process requests in an unbalanced way; with the help of a buffer, the components are balanced and work at the same speed to provide faster service.

Explain what is C4 instances?

C4 instances are ideal for compute-bound applications that benefit from high performance processors.

Explain what is T2 instances?

T2 instances are designed to provide moderate baseline performance and the capability to burst to significantly higher performance as required by your workload.

Mention what are the differences between Amazon S3 and EC2 ?

S3: Amazon S3 is just a storage service, typically used to store large binary files. Amazon also has other storage and database services, like RDS for relational databases and DynamoDB for NoSQL.

EC2: An EC2 instance is like a remote computer running Windows or Linux and on which you can install whatever software you want, including a Web server running PHP code and a database server.

Explain some features of Amazon EC2?

Amazon EC2 provides the following features:
  • Virtual computing environments, known as instances
  • Preconfigured templates for your instances, known as Amazon Machine Images (AMIs), that package the bits you need for your server (including the operating system and additional software)
  • Various configurations of CPU, memory, storage, and networking capacity for your instances, known as instance types
  • Secure login information for your instances using key pairs (AWS stores the public key, and you store the private key in a secure place)
  • Storage volumes for temporary data that's deleted when you stop or terminate your instance, known as instance store volumes
  • Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS), known as Amazon EBS volumes
  • Multiple physical locations for your resources, such as instances and Amazon EBS volumes, known as regions and Availability Zones
  • A firewall that enables you to specify the protocols, ports, and source IP ranges that can reach your instances using security groups
  • Static IP addresses for dynamic cloud computing, known as Elastic IP addresses

Explain what Is Amazon EC2 instance?

An EC2 instance is a virtual server in Amazon's Elastic Compute Cloud (EC2) for running applications on the Amazon Web Services (AWS) infrastructure.

What Is Amazon EC2?

Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.

Explain what is Redshift?

Redshift is a fast, fully managed, petabyte-scale data warehouse service that makes it simple and cost-effective to efficiently analyze all your data using your existing business intelligence tools.

Mention what is the relation between an instance and AMI?

From a single AMI, you can launch multiple types of instances.  An instance type defines the hardware of the host computer used for your instance. Each instance type provides different compute and memory capabilities.  Once you launch an instance, it looks like a traditional host, and we can interact with it as we would with any computer.

Explain what is AMI ( Amazon Machine Image )?

It’s a template that provides the information (an operating system, an application server and applications) required to launch an instance, which is a copy of the AMI running as a virtual server in the cloud.  You can launch instances from as many different AMIs as you need.

Explain what is S3 in AWS?

S3 stands for Simple Storage Service. You can use the S3 interface to store and retrieve any amount of data, at any time and from anywhere on the web. We can also host a static website in Amazon S3, and most companies store their documents, images and other files in S3. For S3, the payment model is “pay as you go”.

What is AWS Certificate Manager?

AWS Certificate Manager (ACM) handles the complexity of provisioning, deploying, and managing certificates provided by ACM (ACM Certificates) for your AWS-based websites and applications. You use ACM to request and manage the certificate and then use other AWS services to provision the ACM Certificate for your website or application. As shown by the following illustration, ACM Certificates are currently available for use with only Elastic Load Balancing and Amazon CloudFront. You cannot use ACM Certificates outside of AWS.

Explain what is IAM service?

AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources for your users. You use IAM to control who can use your AWS resources (authentication) and what resources they can use and in what ways (authorization).

Explain what are the key components of AWS( Amazon Web Service )?

The key components of AWS are:
  • Route 53: A DNS web service
  • Simple E-mail Service: It allows sending e-mail using RESTFUL API call or via regular SMTP
  • Identity and Access Management: It provides enhanced security and identity management for your AWS account
  • Simple Storage Service (S3): It is a storage service and the most widely used AWS service
  • Elastic Compute Cloud (EC2): It provides on-demand computing resources for hosting applications. It is very useful in case of unpredictable workloads
  • Elastic Block Store (EBS): It provides persistent storage volumes that attach to EC2 to allow you to persist data past the lifespan of a single EC2
  • CloudWatch: It monitors AWS resources and allows administrators to view and collect key metrics. Also, one can set a notification alarm in case of trouble.

Explain what is AWS(Amazon Web Service)?

AWS stands for Amazon Web Service; it is a collection of remote computing services also known as cloud computing platform.  This new realm of cloud computing is also known as IaaS or Infrastructure as a Service.

Friday, 4 November 2016

Top 80 AWS INTERVIEW QUESTIONS ANSWERS SET-2

Thursday, 3 November 2016

What is Amazon RDS? AWS INTERVIEW QUESTION

RDS stands for Relational Database Service. It is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks.

Note: This question was asked during a TCS interview.

Wednesday, 2 November 2016

Samsung AWS Interview Question

Suppose that you are working with a customer who has 10 TB of archival data that they want to migrate to Glacier. The customer has a 1-Mbps connection to the internet. Which service or feature provides the fastest method of getting data into Amazon Glacier?
Ans: AWS Import/Export

Note: This question was asked during a Samsung interview.

What is MFA in AWS? AWS Interview questions

AWS Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of protection on top of your user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password (the first factor—what they know), as well as for an authentication code from their AWS MFA device (the second factor—what they have). Taken together, these multiple factors provide increased security for your AWS account settings and resources.

Note: This question was asked during an Aon Hewitt interview.

What is Amazon VPC?

Amazon Virtual Private Cloud (Amazon VPC) enables you to launch Amazon Web Services (AWS) resources into a virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

Why we use VPC in AWS? AWS INTERVIEW

Normally, each EC2 instance you launch is randomly assigned a public IP address in the Amazon EC2 address space. VPC allows you to create an isolated portion of the AWS cloud and launch EC2 instances that have private addresses in the range of your choice (10.0.0.0, for instance).

Note: This question was asked during an Adobe Systems interview.

Can you describe the steps to create a default VPC in AWS? AWS Interview Question

When we create a default VPC, we do the following to set it up for you:

  1. Create a default subnet in each Availability Zone.
  2. Create an Internet gateway and connect it to your default VPC.
  3. Create a main route table for your default VPC with a rule that sends all traffic destined for the Internet to the Internet gateway.
  4. Create a default security group and associate it with your default VPC.
  5. Create a default network access control list (ACL) and associate it with your default VPC.
  6. Associate the default DHCP options set for your AWS account with your default VPC.
Note: This question was asked in a TCS interview.

What are the three features provided by Amazon that you can use to increase and monitor the security? AWS Interview

Amazon VPC provides three features that you can use to increase and monitor the security for your VPC:

  • Security groups — Act as a firewall for associated Amazon EC2 instances, controlling both inbound and outbound traffic at the instance level
  • Network access control lists (ACLs) — Act as a firewall for associated subnets, controlling both inbound and outbound traffic at the subnet level
  • Flow logs — Capture information about the IP traffic going to and from network interfaces in your VPC

What is the difference between Network ACLs and Security Groups in AWS? AWS Interivew Question

  • Network ACLs: A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC. For more information about the differences between security groups and network ACLs, see Comparison of Security Groups and Network ACLs.
  • Security Groups: A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign the instance to up to five security groups. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. If you don't specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC.
The following summarizes the basic differences between network ACLs and security groups:
  • Scope: A network ACL operates at the subnet level (second layer of defense); a security group operates at the instance level (first layer of defense).
  • Rules: A network ACL supports allow rules and deny rules; a security group supports allow rules only.
  • State: A network ACL is stateless (return traffic must be explicitly allowed by rules); a security group is stateful (return traffic is automatically allowed, regardless of any rules).
  • Rule evaluation: Network ACL rules are processed in number order when deciding whether to allow traffic; for security groups, all rules are evaluated before deciding whether to allow traffic.
  • Application: A network ACL automatically applies to all instances in the subnets it is associated with (a backup layer of defense, so you don't have to rely on someone specifying the security group); a security group applies to an instance only if someone specifies the security group when launching the instance, or associates it with the instance later on.

Tuesday, 1 November 2016

Top 20 Telerik Test Studio Interview Questions Answers PDF

Here are the top 20 Telerik Test Studio interview questions with answers. If you are looking for a job change and preparing for an interview, then this post is for you. Be sure to go through all the questions.

Describe Automated Testing?
Automated Testing (test automation) is the use of special software (testing tools) to control the execution of tests and the comparison of actual outcomes with predicted outcomes. Test automation can automate some repetitive but necessary tasks in a formalized testing process already in place, or perform additional testing that would be difficult to do manually. Test automation is critical for continuous delivery and continuous testing.

Describe Telerik Test Studio?
Telerik Test Studio is a Windows-based software testing tool for web and desktop functional testing, software performance testing, load testing and mobile application testing developed by Telerik.

What is Functional testing?
Functional testing is a quality assurance (QA) process and a type of black-box testing that bases its test cases on the specifications of the software component under test. Functions are tested by feeding them input and examining the output, and internal program structure is rarely considered (unlike white-box testing). Functional testing usually describes what the system does.

What do you understand by coded step in Telerik Test Studio?
Test Studio supports coded steps. This allows you to write code and have it executed as a test step. Use a coded step for a scenario that requires more complexity than what can be composed with the Verification Builder or by actions from the Elements Menu.

How do you create a code-behind file in Telerik Test Studio?
There are two methods of creating a code behind file for your test.
1.  Add a coded step from the Step Builder.
2.  Right click on a step and select Edit in Code from the Test Step Context Menu.


Tell me the steps of add an Assembly Reference (Standalone version)?
  1. Open or create a test project in the Standalone version. Click the Project tab, then the Show button in the Settings ribbon.
2. The Project Settings menu loads.
3. Click Script.
4. This lists the Project References.
5. Click Add Reference to browse for an assembly in DLL form.
  6. Locate the assembly and click Open. The new DLL should appear in your Project References list. Execute your test; Test Studio will build the coded step(s) and alert you to any compilation errors.

Can you automate testing of a PDF file using Test Studio?
Unfortunately, Test Studio is unable to connect to and parse browser windows that open a PDF file. PDF files do not contain a Document Object Model; they do not contain HTML. Test Studio can only connect to HTML, Silverlight and WPF types of windows.

How will you run your tests in different browser versions?
Running your tests in multiple versions of the same browser requires you to set up multiple machines. Test Studio can only use the version of the browser that is currently installed on a given machine. If you want to run your tests in IE 8, IE 9 and IE 10, you need to set up 3 different machines (which can be VMs), each with a different version of IE installed on it.

How can you create a performance test with N concurrent users?
A Test Studio performance test only simulates one user at a time; it is not possible to create this type of test with more than one user. To stress your web server with multiple users, you must create and execute a Test Studio load test.

What are the .Net Framework based languages supported by Telerik Test Studio?
Test Studio supports coding in C# and VB.NET. By using them one can easily leverage the capabilities of the .NET framework and Telerik Testing Framework which is in the base of Test Studio.

Can we detect JavaScript errors using Test Studio?
Test Studio cannot interact with the browser console and it cannot detect and report on JavaScript errors. Test Studio can only detect JavaScript popups.

What are the advantage and disadvantage of using Test Studio?

What's the difference between the Exact and Same compare types?
The difference between Exact and Same is that Exact is case sensitive and Same is not.

How you will select checkboxes randomly if you have checkbox ID?
You can search for an HTML input checkbox and cast it properly like this:
HtmlInputCheckBox chkBox = ActiveBrowser.Find.ById<HtmlInputCheckBox>("my random id here");

What is xUnit.net in Telerik Test Studio?
xUnit.net is an open source unit testing tool for the .NET framework, written by the original author of NUnit. Telerik Testing Framework comes with built-in support for xUnit.net 1.8 and higher. 

What is the latest version of Telerik Test Studio?
Test Studio R3 2016

What are the new features of Test Studio R3 2016?

What are the Key Steps to Prepare and Execute the Testing of a Project? 

Name all options available under Record dropdown in Test Studio?

What are the Key Steps to Prepare and Execute the Testing of a Project? Testing Interview Question

Below are the Key Steps to Prepare and Execute the Testing of a Project

  1. Get to know the domain expert and user community. Fundamentally understand the business goals of the application.
  2. Break down user stories into prioritized testing needs and track those needs until completion. Use automated systems to capture user stories, distill requirements and trace requirements through to implementation, and back to user stories.
  3. Translate testing needs into test cases as early as possible. Work with users and business analysts to ensure the test cases reflect real business needs. Work with developers to devise technology-facing tests, such as integration and unit tests. Use automation tools such as Telerik’s TeamPulse and Telerik Test Studio suite to enhance communication between developers, testers and users.
  4. Automate test cases and test execution using automated testing tool so tests can be rerun automatically, ideally as part of the build process.
  5. Track test case execution to ensure the fitness of the application. Be ready to report on test execution at any time, so decisions can be made on the deployment side.
  6. Trace requirements and user stories from inception to delivery to ensure business needs have been adequately addressed.

Telerik Test Studio R3 2016 new features? Interview Question

The third major Test Studio release of 2016, R3, is live now. The new features of R3 2016 include support for Angular, iOS 10, our very own NativeScript, as well as Android hybrid apps.
Below are the major features Telerik Test Studio R3 comes with.


  • Support for Angular:
    If you are using the Angular framework (Angular 1.x as well as Angular 2) to build web apps now you can leverage Test Studio to easily test your applications.
  • Support for hybrid mobile apps:
    Now you can connect your Android hybrid apps to Test Studio and record/execute actions and verifications against them. All the features that we have for native app testing will be available for hybrid as well - elements, DOM explorer, test lists, results, etc.
  • API Testing Adds Support for Fiddler:
    As promised with the initial launch of API Testing back in June, Beta 2 adds support for Fiddler. Test Studio Ultimate and API Testing users can export Fiddler recorded traffic into a .saz file and easily upload it to Test Studio. You can now create a Test Studio API test from these Fiddler traffic logs or plug them into an existing API test.
  • Support for NativeScript:
    Test Studio Mobile users can now easily instrument their NativeScript apps to make them testable with Test Studio. Leverage a specifically built for the purpose plugin to extend your app with a few quick commands. See more on NativeScript support.
  • Recorder Gets a Better Startup Page:
    We are replacing the recorder startup page with launch & navigate dialog inside Test Studio. This will enable users to take advantage of a couple of new features: auto-complete and history of previous recording sessions.
  • Enhanced Mobile Recording:
    The Test Studio DOM explorer now reveals the app element attributes during mobile web test recording.


Wednesday, 26 October 2016

Telerik Test Studio Advantages and Disadvantages / limitations

Here are the advantages/benefits of Telerik Test Studio. It is becoming a very popular testing tool in the testing world, so if you are preparing for a testing interview you should go through this article.

Advantages of Telerik Test Studio:

  • Telerik Test Studio is very user friendly and easy to learn.
  • Good language support, Test Studio doesn’t require you to write code in a lot of scenarios. However, if you do need to it supports C# and VB.NET.
  • Team Collaboration, Testers can design and maintain tests and pass them to developers through source control to assist with more complex, edge-case scenarios.
  • Test Studio comes with rich support for data-driven testing. All recorded test steps have data-related properties that allow you to bind them to a data source. Test Studio supports various data sources: Excel, CSV, XML, and Database. In addition, it has a built-in data grid that allows you to quickly create your own data source right inside your test without having to revert to external sources.
  • Extensive HTML and Silverlight control Suite, Besides native support for Telerik controls, Test Studio software testing solution also includes an extensive suite of HTML and Silverlight control translators which abstract out the control specifics. Thanks to these translators, testers can build automated tests for complex control-based applications quickly and easily.
  • Custom controls support, Developers sometimes extend the components they are using to develop their applications. Test Studio automatically detects the base class that the control inherits and automatically suggests verifications for that base control – quick tasks, action handling, mouse actions, and more. 
  • Native Support for Telerik RadControls, As you know Telerik RadControls are very famous so If your applications are built with Telerik AJAX, Silverlight or WPF controls, Test Studio will automatically detect them and provide tailored verification which make it possible to test even complex controls like hierarchical grid, scheduler, etc.
  • JavaScript and JSON support, Test Studio supports JavaScript function invocation and validation directly from your code. The testing tool also understands JSON objects, can handle strongly typed objects returned from JavaScript, as well as access to JQUERY API’s.
  • You can run automated tests on real devices as well as emulators without writing a single line of code.

Disadvantages of Telerik Test Studio:
  • Test Studio is standalone and if you need to use VS plugin you need an extra VS professional or higher license.
  • You can't use elements of one project to another, so you have to create only 1 project and with due course of time it goes heavy. But this depends upon your application size. You can copy paste the content from one project to other as a work around.
  • You can convert all your steps to code, but can't revert them back.
  • Issue with the usability of the "If-else" statement, as for using the If-else condition, your element in "If" condition must be present if not, whole test case fails.
  • It doesn't support Android app testing and Desktop application testing (in desktop only WPF is supported).
  • For customized reports, if required you need to write code.
  • If the DOM of your application is heavy then Test Studio will create lots of performance issues while recording, like Test Studio and Application gets hang. For this you need to use trial version first.
  • Test cases where you are using a test case as a child of another, there you will find that you are not getting the desired behavior.
  • It's not a free tool and costly too.
  • Need powerful computer to run all capabilities
  • Quite a lot of customization options available but time consuming to set up.

Tuesday, 25 October 2016

What is dirty read in SQL Server? Accenture SQL Interview

In simple words, a dirty read is reading a value written by another transaction that has not yet been committed.
Example: suppose there are two transactions, T1 and T2.
T1 writes a value to the database but has not yet committed.
If, at the same time, T2 reads the value before T1 commits, that is called a dirty read, because there is a chance that T1 may roll back, yet T2 has used the value T1 wrote before the rollback.
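A small sketch of the scenario above (dbo.Emp, EmpId and Salary are hypothetical names); T2 only sees T1's uncommitted value because it lowers its isolation level, which is also what the NOLOCK hint covered later on this page effectively does:

-- Session 1 (T1): change a value but do not commit yet
BEGIN TRANSACTION;
UPDATE dbo.Emp SET Salary = 5000 WHERE EmpId = 1;
-- ... no COMMIT yet; T1 may still ROLLBACK

-- Session 2 (T2): reading the uncommitted value is a dirty read
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Salary FROM dbo.Emp WHERE EmpId = 1;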

Note: This question was asked during an Accenture SQL interview.

Monday, 24 October 2016

What is collation? and What are different types of Collation Sensitivity in SQL Server?

What is collation?
Collation refers to a set of rules that determine how data is sorted and compared. Character data is sorted using rules that define the correct character sequence, with options for specifying case sensitivity, accent marks, kana character types and character width. (A small T-SQL sketch follows the list below.)
  • Case sensitivity: If A and a, C and c, etc. are treated in the same way then it is case-insensitive. A computer treats A and a differently because it uses ASCII code to differentiate the input. The ASCII value of A is 65, while a is 97. The ASCII value of C is 67 and c is 99.
  • Accent sensitivity: If e and é, o and ó are treated in the same way, then it is accent-insensitive. A computer treats a and á differently because it uses ASCII code for differentiating the input. The ASCII value of e is 101 and é is 130. The ASCII value of o is 111 and ó is 243.
  • Kana Sensitivity: When Japanese kana characters Hiragana and Katakana are treated differently, it is called Kana sensitive.
  • Width sensitivity: When a single-byte character (half-width) and the same character when represented as a double-byte character (full-width) are treated differently then it is width sensitive.
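A small sketch showing how the collation used for a comparison changes the result; SQL_Latin1_General_CP1_CI_AS and SQL_Latin1_General_CP1_CS_AS are standard SQL Server collations (CI = case-insensitive, CS = case-sensitive):

-- Case-insensitive collation: the strings compare as equal, so this returns 1
SELECT CASE WHEN 'ABC' = 'abc' COLLATE SQL_Latin1_General_CP1_CI_AS
            THEN 1 ELSE 0 END AS CaseInsensitiveMatch;

-- Case-sensitive collation: the strings differ, so this returns 0
SELECT CASE WHEN 'ABC' = 'abc' COLLATE SQL_Latin1_General_CP1_CS_AS
            THEN 1 ELSE 0 END AS CaseSensitiveMatch;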

What is the use of NOLOCK in SQL Server? Accretive Health Interview Questions

NOLOCK helps your SELECT statements return quickly because they don't have to wait for existing locks or transactions to complete before returning the results of your query. The downside is that you can end up pulling back "dirty" data - values that might be rolled back after your SELECT statement was run but before they were ever committed.

Ex:- SELECT * FROM Emp WITH(NOLOCK)

Note: This question was asked during an Accretive Health interview.