Cognisess

Cognisess aims to democratise human capital, empowering every individual and every organisation to understand and enhance their talent and potential.

To achieve that goal, we believe that ethical AI can be a powerful tool for creating objectivity in the hiring process.

To identify what makes an individual suitable for a role, we first need to build models. These models are mathematical constructs which, in some cases, can self-adjust based on the data they process. It is therefore essential to review how each model is built and to have a solid understanding of what constitutes suitability. Many factors could be considered, ranging from skills and abilities to the way a person is likely to integrate into and complement a team.

Data Collection and Transparency

The journey to ethical AI begins with good data collection. We collect voluntarily provided data from users and give them access to change or delete their personal information as they choose. In many cases, users can provide data while remaining anonymous, submitting no identifiable information at all. When users provide personal details, we inform them of how we will use their data. Once a user has completed assessments, we offer them the opportunity to view their results so they can better understand their abilities, potential, and skill sets.

It is important to note that the personal data provided to us is collected either from assessments and interactions with the platform or, in the case of performance-relevant data, from employers. We do not search the Internet or derive information from any other service. We do not believe that such information should be used, and we question the quality of such data in building predictive models. Further information on our data handling practices can be found in our privacy policy.

Protected Characteristics

Cognisess assessments will never take protected characteristics into account when evaluating job candidates and employees. These include age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation. We may still ask to collect information about these sensitive characteristics, but doing so is entirely optional and serves only to help us identify and correct any bias in our models. Those with disabilities (such as dyslexia, colour blindness, impaired hearing or vision, and affected motor skills) can choose to disclose their status in order to receive accommodations that ensure equal opportunity during assessment. We will never disclose this information to employers without user consent.

Addressing Bias in Training Data

When building AI products, we recognise that biased or unrepresentative training data can produce biased models. If we find bias in the input data for one of our models, we therefore do our best to address and correct it before allowing the model to evaluate users. Before a model is released, we apply it in a test environment, where we evaluate the training and testing data to see how the model would perform and check for bias against any characteristic recorded in the model.
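As a minimal sketch of the kind of pre-release check described above, the following Python snippet applies the "four-fifths rule", a widely used adverse-impact heuristic: if any group's pass rate falls below 80% of the highest group's rate, the result is flagged for review. The function name and data here are illustrative assumptions, not part of the Cognisess platform.

```python
from collections import defaultdict

def impact_ratios(outcomes):
    """outcomes: a list of (group_label, passed) pairs from test data.

    Returns each group's pass rate divided by the highest group's
    pass rate. Ratios below 0.8 are commonly flagged for review.
    """
    passed = defaultdict(int)
    total = defaultdict(int)
    for group, ok in outcomes:
        total[group] += 1
        passed[group] += int(ok)
    rates = {g: passed[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Illustrative test outcomes: group B passes half as often as group A.
results = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
for group, ratio in impact_ratios(results).items():
    if ratio < 0.8:
        print(f"possible adverse impact against group {group}: ratio {ratio:.2f}")
```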

Even small groups of employees or candidates can use the Cognisess platform to evaluate their skills. In this case, Cognisess compares their data against aggregate user data, without the need for a predictive model.
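A minimal sketch of what such a comparison might look like: a candidate's score is simply positioned within the distribution of aggregate scores, so no model is trained or applied. The function and figures below are illustrative assumptions, not Cognisess's actual norms.

```python
from bisect import bisect_left

def percentile_rank(score, norm_scores):
    """Percentile of `score` within a sorted aggregate norm group."""
    norm = sorted(norm_scores)
    return 100.0 * bisect_left(norm, score) / len(norm)

# Position one candidate's assessment score against aggregate user data.
aggregate = [52, 60, 61, 67, 70, 74, 75, 80, 83, 91]  # illustrative scores
print(f"{percentile_rank(72, aggregate):.0f}th percentile")  # -> 50th
```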

Building Ethical Models

When we build models to find the best candidates for a job or to build cohesive teams, we never use factors like age, race, gender, or disability to determine fitness for a role. In fact, the models we build have no way of accessing this information. Because our assessments have been carefully designed by psychologists and tested for adverse impact, the results we use in our models do not infer these protected characteristics either. In some applications there are well-documented performance differences between certain sub-groups, and we address these both in assessment design and scoring and, where possible, in model building. We further inspect the results of our benchmark models to ensure that all candidates get a fair chance at the next stage of hiring.

Continuing to Monitor Models for Bias

After an ethical model is built, it can still develop bias during its lifespan. We are committed to routinely checking our most frequently used models to be sure that they continue to produce fair results. Most of our clients also regularly review the models we build for them, ensuring that the models remain fair and reflect the needs of the business given the fast-changing nature of many jobs and organisations. This ensures that only current, job-relevant characteristics are being measured. Each new hiring campaign brings its own unique requirements, and we welcome these challenges as opportunities to ensure our platform works for all types of people, all around the globe.
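As a minimal sketch of what such routine monitoring could involve (the names, rates, and tolerance below are illustrative assumptions, not Cognisess's production tooling), one simple check compares each group's recent selection rate with the rate recorded when the model was released and flags any large drift:

```python
def selection_rate(decisions):
    """decisions: list of booleans (True = advanced to the next stage)."""
    return sum(decisions) / len(decisions)

def drift_report(baseline, recent, tolerance=0.10):
    """Flag groups whose recent selection rate has moved more than
    `tolerance` (in absolute terms) from the rate at release."""
    report = {}
    for group, base_rate in baseline.items():
        rate = selection_rate(recent[group])
        if abs(rate - base_rate) > tolerance:
            report[group] = (base_rate, rate)
    return report

baseline_rates = {"A": 0.42, "B": 0.40}               # recorded at release
recent_decisions = {"A": [True, False, True, True],   # rate 0.75
                    "B": [False, False, True, False]} # rate 0.25
for group, (before, now) in drift_report(baseline_rates, recent_decisions).items():
    print(f"group {group}: selection rate drifted from {before:.2f} to {now:.2f}")
```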

Legislation and Standards Followed

Cognisess is compliant with the General Data Protection Regulation (GDPR) in how we collect and handle data. For more information on our data handling and safety procedures, please read our privacy policy. We also follow the guidelines of many different employment protection laws, including the Americans with Disabilities Act (US, 1990), the Equality Act (UK, 2010), and the Employment Equality Framework Directive (EU, 2000). Wherever your business is located, we will do our best to ensure that all applicable data handling and employee protection regulations are followed.
 
Since the use of AI is still relatively unregulated, Cognisess strives to set high ethical standards in our field. We therefore work with several of our clients to build their own internal Ethical AI frameworks, ensuring that all connected data and systems follow the same strict guidelines, in line with the responsibilities of Cognisess as a service provider and of the client as an employer. As more legislation and regulation is produced, we re-evaluate our processes to ensure that we remain compliant and exceed the standards set.