Cognisess aims to democratise human capital, empowering every individual and every organisation to understand and enhance their talent and potential.
To achieve that goal, we believe that ethical AI can be a powerful tool for creating objectivity in the hiring process.
Data Collection and Transparency
The journey to ethical AI begins with good data collection. We collect voluntarily provided data from users and give them access to change or delete their personal information as they choose. In many cases, users can provide data while remaining anonymous, supplying no identifiable information at all. When users provide personal details, we inform them of how we will be using their data. Once a user has completed assessments, we offer them the opportunity to view their results so they can better understand their abilities, potential, and skillsets.
Cognisess assessments will never take protected characteristics into account when evaluating job candidates and employees. These include age, disability, gender, marriage & civil partnerships, pregnancy & maternity, race, religion or belief, sex and sexual orientation. We may still ask users to provide information about these sensitive characteristics, but only to help identify and correct any bias in our models, and doing so is completely optional. Those with disabilities (such as dyslexia, colourblindness, impaired hearing or vision and affected motor skills) can choose to disclose their status in order to receive accommodations that ensure equal opportunity during assessment. We will never disclose this information to employers without user consent.
Addressing Bias in Training Data
When building AI products, biased or unrepresentative training data can produce biased models. If we find bias in the input data for one of our models, we address and correct it before allowing the model to evaluate users. Before we release a model, we apply it in a test environment, where we evaluate the training and testing data to see how the model would perform and check for bias against any characteristic recorded in the model.
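One simple check of this kind compares the demographic make-up of a training set against a reference population. The sketch below is illustrative only, not Cognisess's actual tooling; the helper name, the 5% tolerance, and the toy data are all assumptions.

```python
from collections import Counter

def representation_gaps(train_groups, reference_shares, tolerance=0.05):
    """Flag groups whose share of the training data deviates from a
    reference population by more than `tolerance`.

    train_groups     -- list of group labels, one per training record
    reference_shares -- dict mapping group label -> expected share (0..1)
    """
    counts = Counter(train_groups)
    total = len(train_groups)
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Toy data: group B is underrepresented relative to a 50/50 reference.
sample = ["A"] * 80 + ["B"] * 20
print(representation_gaps(sample, {"A": 0.5, "B": 0.5}))
# -> {'A': 0.3, 'B': -0.3}
```

A gap flagged here would prompt rebalancing or reweighting the data before the model is trained further.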
Even small groups of employees or candidates can use the Cognisess platform to evaluate their skills. In such cases, Cognisess compares their data to aggregate user data, without the need for a predictive model.
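Comparing an individual score against aggregate data typically amounts to a percentile rank against the pooled norms. A minimal sketch, assuming a hypothetical helper and illustrative score values:

```python
from bisect import bisect_left

def percentile_rank(user_score, aggregate_scores):
    """Return the percentage of aggregate scores strictly below user_score."""
    ordered = sorted(aggregate_scores)
    below = bisect_left(ordered, user_score)
    return 100.0 * below / len(ordered)

# Illustrative aggregate scores from the wider user base.
norms = [55, 60, 62, 70, 71, 75, 80, 82, 90, 95]
print(percentile_rank(76, norms))  # -> 60.0
```

Because the comparison is purely descriptive, it works for arbitrarily small cohorts where a predictive model could not be trained reliably.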
Building Ethical Models
When we build models to find the best candidates for a job or to build cohesive teams, we never use factors like age, race, gender, or disability to determine fitness for a role. In fact, the models we build have no way of accessing this information. Since our assessments have been carefully designed by psychologists and tested for adverse impact, the results we use in our models do not infer these protected characteristics either. In some applications, there are well-documented performance differences between certain sub-groups, and we address these both in assessment design and scoring, and in model building where possible. We further inspect the results of our benchmark models to ensure all candidates are getting a fair chance at the next stage of hiring.
Continuing to Monitor Models for Bias
After an ethical model is built, it can still develop bias during its lifespan. We are committed to routinely checking our most frequently used models to be sure that they continue to produce fair results. Most of our clients also regularly review the models we build for them to ensure that they remain fair and represent the needs of the business, given the fast-changing nature of many jobs and organisations. This ensures that only current, job-relevant characteristics are being measured. Each new hiring campaign brings its own unique requirements, and we welcome these challenges to ensure our platform works for all types of people, all around the globe.
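Routine monitoring of a deployed model can be as simple as comparing group-level score distributions between a baseline period and a recent period and flagging large shifts. This is a minimal sketch under assumed names and an arbitrary 5-point threshold, not a description of Cognisess's monitoring pipeline:

```python
from statistics import mean

def score_drift_by_group(baseline, recent, threshold=5.0):
    """Flag groups whose mean score shifted by more than `threshold`
    between a baseline period and a recent period.

    baseline, recent -- dicts mapping group -> list of scores
    """
    drift = {}
    for group in baseline:
        if group in recent:
            delta = mean(recent[group]) - mean(baseline[group])
            if abs(delta) > threshold:
                drift[group] = round(delta, 1)
    return drift

# Toy data: group B's scores drop sharply in the recent window.
baseline = {"A": [70, 72, 74], "B": [68, 70, 72]}
recent   = {"A": [71, 73, 75], "B": [58, 60, 62]}
print(score_drift_by_group(baseline, recent))
# -> {'B': -10.0}
```

A flagged group would prompt a deeper review of whether the model, the assessment, or the candidate pool has changed.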