A quality employment assessment must include multiple safeguards designed to ensure data authenticity and system functionality.

One winter morning I fulfilled one of my least favorite household duties by stepping outside in the early morning chill to set out the trash. The wind greeted my Saturday morning stubble with a familiar slap in the face. With plumes of white smoke billowing from my lungs with every breath, I quickly remembered why I had delegated this chore to my oldest son. One word came to me—“BRRRR!” It took all of three steps to realize that I lacked adequate protection from the elements. Without the proper layers of insulation, I was at the mercy of whatever this cold Saturday morning decided to throw at me.

Some employee selection systems can leave you feeling exposed. Just as your body requires more coverage in the winter, it is imperative that your assessment process is properly outfitted to meet the elements of the 21st century job market and offer the highest level of protection.

How can an assessment system protect your interests? Organizations need protection from the following elements:

  • Misrepresentations made by new job candidates
  • The hiring of high-risk candidates
  • Concerns over the legality of the overall hiring process

When an assessment system offers all of the features mentioned on these pages, the organization can be more confident in its hiring decisions and in the legal defensibility of the process.

This article describes assessment design elements—represented symbolically by articles of clothing—that human resource leaders should look for to ensure they are getting maximum protection from a pre-employment assessment system. Grab your mug of hot chocolate and a warm blanket, toss a log on the fire, and spend some time enhancing your wardrobe to include specific layers of technology that will shelter you from the elements as you leverage your selection process to hire top talent.

Boots: Profiles Provide Traction to the Selection Process

In an assessment system, the definition of a profile can be simply stated as any guideline that candidates are matched against to determine their suitability for the job. There are three approaches, or types, of profiles:

  • The-Higher-the-Better — This approach, though not technically a profile, assumes that more of a behavioral characteristic is always better. There are many drawbacks to this approach, but we will focus on the issue of obtaining accurate information from candidates. Simply put, if candidates know you are looking for more, they will tend to select responses that reflect more for each question. This approach does not provide you with the assurance of high-quality responses from candidates.
  • Best Practice — The best practice profile relies on normative data (averages across many companies) to create an optimum range for the dimension being measured. The downside is that it is a one-size-fits-all approach that does not capture the unique requirements of the position or the culture of your organization (see below).
  • Custom Ideal Profile — This type of profile reflects the behavioral makeup of the ideal candidate for your organization by first determining the optimum range for each dimension being measured through analysis of your incumbent employees (those already working in the target position), and then assigning a “weight,” or level of importance, to every behavioral dimension being measured.

Of these three profile types, the custom ideal profile is recommended to provide the most protection.

Think of a custom ideal profile as a pair of warm boots. Custom ideal profiles should be fundamental components in your overall selection system just as warm boots are a vital part of your wardrobe on a cold day. The protective qualities of custom ideal profiles stem from their use of actual data from incumbents in a specific position, company, and industry, as well as the weighted values for each dimension.
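To make the mechanics concrete, here is a minimal sketch of how a weighted, range-based match might be computed. The dimension names, ranges, weights, and scoring function below are hypothetical illustrations of the general idea, not PeopleAnswers’ actual algorithm:

```python
# Hypothetical sketch of weighted, range-based profile scoring.
# All dimensions, ranges, and weights are illustrative only.

def dimension_fit(score, low, high):
    """Return 1.0 inside the optimum range, decaying toward 0.0 outside it."""
    if low <= score <= high:
        return 1.0
    distance = (low - score) if score < low else (score - high)
    return max(0.0, 1.0 - distance / 100.0)  # assumes 0-100 dimension scales

def profile_match(candidate, profile):
    """Weighted average of per-dimension fit against the custom ideal profile."""
    total_weight = sum(w for (_, _, w) in profile.values())
    weighted = sum(
        dimension_fit(candidate[dim], low, high) * w
        for dim, (low, high, w) in profile.items()
    )
    return weighted / total_weight

# A profile (hypothetically) derived from incumbent data:
# dimension -> (optimum low, optimum high, weight)
profile = {
    "assertiveness":       (60, 85, 3),
    "sociability":         (40, 70, 1),
    "attention_to_detail": (55, 90, 2),
}
candidate = {"assertiveness": 72, "sociability": 90, "attention_to_detail": 50}
print(round(profile_match(candidate, profile), 3))  # prints 0.95
```

Because the optimum ranges and the weights both come from incumbent data, two companies’ profiles for the same job title would score the same candidate differently, which is exactly the point of the snowmobile-dealership example above.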

Like snowflakes, no two custom ideal profiles are exact duplicates. In fact, similar job titles in two different companies are most often very different behaviorally across a large variety of dimensions.

Allow me to illustrate this point using another winter activity. In the snowmobile sales industry, dealerships employ salespeople to guide prospective clients through the shopping and buying process. One dealership may place a high value on “number of units sold.” All of their focus, training, bonus structures, and incentive programs are geared toward selling a high volume of snowmobiles. Success in this type of sales position requires behavioral traits that drive rapid sales cycles from first contact to closing. Conversely, a dealership across town may place more emphasis on profit margin. Higher profits may be derived from selling models that are more expensive and adding multiple upgrades like a larger engine, more chrome, added accessories, special paint options, etc. This specific sales role requires a slower, more consultative sales approach. Successful salespeople would possess behavioral characteristics that encourage relationships, up-selling, and “quality over quantity.” Both are sales roles, and both are in the same industry, but the two positions call for very different types of people, and therefore very different custom ideal profiles.

Like a good pair of boots, you need a custom ideal profile to keep you on firm footing and to steer your selection process well clear of some common misconceptions.

Misconception #1: “I can find an answer key that will tell me the correct answers to this assessment.”

  • A valid assessment tool provides multiple-choice responses for several questions related to any one of dozens of dimensions. This results in a large number of ways to arrive at a value for a single dimension. Therefore, there are no right or wrong answers.
  • In an assessment system that uses custom ideal profiles, every candidate is matched against a unique profile. Custom ideal profiles are built on assessed incumbents and performance data in the position, so there is no way for a candidate to know how much or how little of one characteristic is important for the role.
  • When using custom ideal profiles, candidates have no knowledge of the on-the-job importance assigned to any given dimension. The importance of each dimension is crucial because to ensure a desirable score, you would have to align your responses perfectly to the dimensions with the highest priority. Interestingly, the actual importance of a dimension is often counterintuitive to a candidate’s logical assumption.
  • In a well-designed system, candidates can only speculate which questions relate to which behavioral dimension, and what the best answer might be. This difficulty increases exponentially when a custom ideal profile is built on a large number of dimensions.

Misconception #2: “I will have a friend with more experience take the assessment for me who will score ‘better.’”

  • A friend’s responses are no more “correct” or “incorrect” than the true candidate’s responses. Keep in mind that there should be no “right” or “wrong” answer. Everyone is a “fit” somewhere, and the custom ideal profile process is designed to match candidates that are the best fit for specific jobs. Having someone else take the assessment does not increase a candidate’s chances because the friend may not be a good fit to the role.
  • By having a friend take the assessment, the candidate is at a serious disadvantage during the interview process. A good assessment generates interview questions and discussion topics directly from the candidate’s responses in relation to the custom ideal profile. To see this charade through to the end, the candidate would have to have the friend impersonate them in the face-to-face interview as well!

Pants: Technical Documentation and Predictive Studies Supply Strong Legs to Stand On

Documenting that your assessment system actually does what it is supposed to do is as important to your protection level as pants are to someone exposed to the cold. Established assessment vendors should provide support materials that prove the system is legally valid, that it guards against various forms of candidate misrepresentation, and that it has historically measured what it was tasked to measure. This technical documentation—historical and client-specific—makes up the two “legs” in our hypothetical pair of pants.

Our first “leg” is the assessment system’s technical manual. It should provide the following types of technical documentation:

  • Instrument Validation, or a detailed historical overview of how the assessment system was developed
  • Criterion Validity in the form of concurrent validation studies (incumbents are assessed and performance data is collected at the same time, or concurrently)
  • Post-deployment performance and turnover studies, which should document a reduction in turnover, an increase in good performance, or both
  • A thorough discussion of Adverse Impact, which supports the instrument’s fairness and objectivity across the entire audience of candidates, regardless of race, sex, or age

These technical studies document how the assessment system has accurately linked on-the-job performance with the primary characteristics of incumbents in the role. You should expect to see data compiled over many years and millions of candidates.

You might be asking this question: “The technical manual offers proof that the assessment has worked historically for other companies, but how do I collect that type of proof for my company?” That question brings us to the second “leg” of technical documentation: predictive studies within your organization.

The best way to ensure that an assessment system protects you functionally and legally is to allow the assessment vendor to use your data, your incumbents, and your new candidates to produce two types of predictive validation studies.

  • Concurrent validation studies occur after the creation of a custom ideal profile. The objective is to establish a relationship between the performance data of your incumbents and the assessment system results.
  • Post-deployment studies are conducted once an assessment has been in use for a sufficient time (generally one year, to allow for an optimum rollout strategy and to collect quality data). The objective is to study the historical, predictive qualities of the assessment system in identifying higher performers and/or reducing turnover. Upon completion of the post-deployment study, adjustments can be made for future improvements, and the system’s practical utility can be documented as well.

By analyzing the assessment system’s ability to identify strong candidates and high-risk candidates, you will gain documented proof of its effectiveness. In fact, in time you will collect a mountain of scientific evidence that proves you are protected by the system’s technical design and your custom ideal profiles. This data will equip you with the documentation to address any misconceptions related to the effectiveness of your assessment system.

Misconception #3: “High scores on assessments do not correlate with top performers.”

  • Before choosing an assessment system, review the technical manual to be sure there is plenty of data proving the assessment’s ability to protect the client, identify a better quality of hire, and maintain legality.
  • Require vendors to conduct concurrent validation studies as a standard part of your validation process. This means that each new custom ideal profile has “paperwork” that establishes the relationship between performance and the related assessment scores.
  • When time and hiring volume permit, conduct a post-deployment study to better understand the effectiveness of the assessment system in terms of employee selection, improved performance, and/or reduction in turnover.

Gloves: Fairness in One Hand, Objectivity in the Other

As I watched speed skating during the Winter Olympics, I noticed all of the different styles of body suits. Each was designed to keep the participant warm while simultaneously reducing wind drag and giving the athlete a slight advantage over the competition. In a similar way, some job seekers look to gain an advantage over others. There is one big difference: the Olympic committee keeps athletes in check, but no committee is in place to ensure that job seekers are being honest and providing the most accurate information.

To make sure accurate information is gathered, companies go to great lengths to verify employment history, check references, and even review or authenticate certifications. Some executives may be concerned about maintaining fairness and reducing the chances for a candidate to provide inaccurate information to obtain employment (a practice referred to as “gaming” the system). We should examine two important questions regarding the issue of gaming: Does gaming actually help the candidate? How does the assessment system address the issue?

First and foremost, creating a custom ideal profile greatly enhances the security of an online assessment process. As discussed above, each custom ideal profile is based on the actual performance data of those who are currently in the role, and there are no two alike. This makes it virtually impossible for anyone to manipulate the system to get a preferred score. Additionally, custom ideal profiles standardize the process so everyone is compared evenly and fairly. All candidates are evaluated against a unique, position-specific profile using unbiased, scientific, and objective criteria.

Research helps us better understand the reality of the candidate gaming issue. Some people assume that those who try to inflate their scores will actually receive higher scores; the facts indicate the opposite. Research by Arthur et al. (2009) on the use of profiles includes a study of more than 300 job candidates who took a cognitive assessment when applying for a job.(1) The original assessment represented a high-stakes opportunity for this group of job hunters. More than a year later, the same group took the assessment again, this time knowing the scores had no bearing on their employment.

The results showed that the overall average score was higher on the post-employment tests than on the original pre-employment tests (d = .39). Practically, the research showed that the proportion of candidates possibly gaming the system ranged from 0% to, at most, 7.7%. The research went on to note that even this small number is most likely inflated, because those labeled as gamers may simply have been unmotivated the second time around, causing their scores to decrease naturally.

Statistically, attempts at gaming will most likely occur in every organization. To keep that practice from becoming widespread and successfully inflating scores in your organization’s hiring process, your assessment should be working 24/7 to minimize the gamers’ impact.

Understanding the facts around your selection process will help protect your candidates from another misconception.

Misconception #4: “Because people cheat, the assessment is not fair.”

  • A sound assessment system is a fair, objective, and legally defensible tool in the hiring process. Although attempts at cheating are always possible, a quality assessment is designed to protect the hiring organization, maintain fairness and objectivity, and minimize the impact of those who choose to try their luck gaming the system.

Now that you know that some gaming may be going on among those in the candidate pool, you should also know what a good assessment system does to minimize the gamers’ impact on your hiring decisions.

A Good Coat: Keeping Authenticity In and Distortion Out

Winter fashion trends include many brand names of coats. We often find that quality is related to the brand name of the coat purchased. I once had a proud shopping moment when I purchased a brand-name coat at a deep discount. I excitedly took it home to show it off and quickly found my shopping prowess was greatly overrated. As I looked closer at the coat, I found the name of the well-known brand to be misspelled. To my surprise, I had not received a deep discount on a quality coat, but only a fair price on an inferior “knock-off” product that was not as warm.

When searching for the best job candidates, be sure that you are receiving the most accurate assessment data and not settling for inferior results. A high-quality assessment system should have protective safeguards in place to help identify anyone who is not providing the most accurate data.

A primary safeguard against gaming is actually built into the custom ideal profiles. Candidates attempting to game the system exaggerate their actual preferences or present themselves as someone they are not, a strategy that tends to work against them. Custom ideal profiles are inherently unique and specifically designed to represent an intricate pattern of behaviors. Anyone who does not provide authentic responses actually decreases their likelihood of obtaining a preferred score. To study this concept, PeopleAnswers analyzed 100,000 job applicants in 39 companies across 102 different profiles. The data showed that those who misrepresented themselves were 4.5 times more likely to receive a “Not Recommended” rating.(2)

Having confidence in the information you collect from candidates gives you the peace of mind that you are seeing the real behaviors of that person. At this level of protection, you can deflate misconceptions related to candidates attempting to game their way through the assessment.

Misconception #5: “I can fake my way through the assessment by guessing at the answers the employer wants to see.”

  • Research shows that job hunters are more likely to find suitable positions by indicating, through honest responses to the assessment, who they are, not who they think the employer wants them to be.

A Warm Hat: Establishing a Controlled Assessment Experience

Your uncovered head is a primary culprit for losing body heat on a cold day. Likewise, the “head” of an assessment, or the initial entry point, is where you can set the stage to ensure the assessment collects the best data. Keep three initial points of entry in mind to maximize the protective features of a good assessment system:

  • The first page of instructions
  • A controlled-access format
  • A multiple-form design

The instructions on the opening page of any assessment should introduce the candidate to an efficient, no-nonsense employment tool that should be taken seriously to obtain the best results.

Instructions should also convey the seriousness of the assessment portion of the selection process. Assessments should indicate that falsification of any documentation requested by the employer most often leads to the candidate’s immediate removal from the selection process. Therefore, it is good practice to make sure the candidate understands the important nature of the assessment, especially when using an assessment tool that is designed to identify attempts to manipulate the data.

To better understand the importance of assessment instructions, consider a PeopleAnswers study of 55,303 candidates under six different experimental conditions where the instructions were altered in various ways. The objective was to study different methods for presenting warnings in hopes of reducing candidate misrepresentations (referred to as “distortion”). Findings showed a significant reduction in distortion after candidates were told that the assessment system had the ability to detect anomalies in the data and that such information would be provided to the company for which they were applying.(3) This illustrates how the initial assessment instructions play a major role in collecting high-quality candidate data.

Another control factor available with some assessments is the “one shot” approach. In other words, the candidate has one shot at the assessment, with no access to subsequent testing sessions. Would you allow a candidate to completely change their application information on a second attempt at obtaining employment? Think about the confusion it would cause if a candidate stated on their application that they had one year of sales experience, then returned six weeks later claiming ten years. Which answer is the truth?

Controlling entry into the assessment system can greatly improve the likelihood of a high-quality data collection session while providing another layer of protection. Candidates should be allowed one assessment session, and only under specific and documented instances should a candidate be allowed to retest. Logging in and out is acceptable for non-timed assessments, but once a candidate has completed the assessment, the results should remain unchanged. Trial-and-error behavior only encourages candidates to guess, then re-guess, then guess again, hoping to eventually find the right combination of answers. The best bet for job candidates is to offer the most accurate version of their behavioral preferences the first time—especially since some assessments do not allow candidates to retest and try to generate different results.
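One simple way to picture this controlled-entry logic is the sketch below. The class and method names are hypothetical, and this makes no claim about any vendor’s implementation; it only shows the "one completed session, results locked" rule in miniature:

```python
# Illustrative sketch of "one shot" assessment access control.

class AssessmentGate:
    def __init__(self):
        # candidate_id -> {"completed": bool, "results": dict or None}
        self._sessions = {}

    def request_access(self, candidate_id):
        session = self._sessions.setdefault(
            candidate_id, {"completed": False, "results": None}
        )
        if session["completed"]:
            # Retesting is not allowed; reuse the original, unchanged results.
            return ("reuse_results", session["results"])
        # Logging back in to finish a non-timed assessment is acceptable.
        return ("resume", None)

    def submit(self, candidate_id, results):
        session = self._sessions[candidate_id]
        if session["completed"]:
            raise PermissionError("Results are final; retesting requires documented approval.")
        session.update(completed=True, results=results)

gate = AssessmentGate()
gate.request_access("c-101")           # returns ("resume", None) on the first session
gate.submit("c-101", {"score": 0.82})
print(gate.request_access("c-101")[0])  # prints reuse_results
```

The design choice is the point: the system, not the candidate, decides whether a second session is possible, which removes the trial-and-error incentive entirely.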

Another method some assessments use to promote security and control the testing process is to utilize multiple forms. Some systems achieve this by creating many versions of the same assessment that produce statistically identical results. The advantage of this approach is that the various assessment forms are equivalent but not exactly alike. If one person took two forms of an assessment, the final scores would be statistically equivalent, with the results extracted from answers to different questions along the way.

By attempting to leverage as many of these techniques as possible, you are able to respond with confidence to the following misconception.

Misconception #6: “I can game the system by repeating the test several times.”

  • A well-designed system does not allow you to re-enter and take the assessment repeatedly. Once the system identifies that you have taken the assessment on a prior occasion, it applies those results to the new job profile. This eliminates the opportunity for candidates to enter and re-enter the assessment environment.
  • Multiple forms randomly presented to each candidate inhibit an individual’s ability to obtain or create an answer key. Candidates receive one of many variations of the testing content.
  • A timed, cognitive assessment limits a person’s ability to cheat or gain an advantage.

Dress Your Assessment for Success

Whether your organization’s job postings attract 30 or 30,000 candidates, it pays to have a pre-employment assessment that extracts accurate data while protecting you from the elements. Do not be caught in a blizzard of job applications without a tool that legally and fairly protects you from distortion, cheating, or other methods of gaming the system.


The following section summarizes all the features to consider when determining if an assessment system is equipped to protect you from the elements.

Types of Profiles

  • “The-Higher-the-Better” Profiles
  • Best Practice Profiles
  • Custom Ideal Profiles

Historical Documentation: Technical Manual

  • Instrument Validation
  • Criterion Validity
  • Post-Deployment Studies
  • Adverse Impact Data

Client-Specific Documentation: Predictive Studies

  • Concurrent Validity
  • Post-Deployment Studies

Can Candidates Successfully “Game” the System?

Distortion Safeguards

  • Profile Type in Use
  • Documented Distortion Testing

Controlled Entry Points

  • Assessment Instructions
  • Unlimited or Restricted Candidate Access
  • Multiple Test Forms

Author's Bio: 

Jason Taylor is passionate about using sound science and scalable technology to design and create innovative and sophisticated tools that bring a fresh perspective to the selection and talent management field. Annually, the technology tools under Taylor’s direction match several million employees to employers while providing quantified results to board rooms across industries.

As chief science officer at PeopleAnswers, Taylor ensures that his talent assessment software stays ahead of the marketplace with cutting-edge capabilities.

A pioneer in human capital systems development, Taylor has been recognized by editors of several scientific research publications for his research on the web-based selection systems he has developed. His historical perspective, expertise, and track record of delivering bottom-line results to companies of all sizes, from early-stage start-ups to Fortune 500 companies, have established Taylor as a thought leader in behavioral-based technology tools.

Taylor often speaks on talent management and selection technology at conferences across many industries, including human resources, retail, hotel and restaurant, real estate, and industrial-organizational psychology.

Taylor is an active member of the American Psychological Association (APA) and the Society for Industrial and Organizational Psychology (SIOP). He earned his Ph.D. in leadership education and development from Texas A&M University.