tldr; Re-training people to remove their biases is hard/impossible. Just don’t give people a chance to use their biases and you can sidestep the problem completely.
NOTE: I use quotes around the word “problem” because the primary problem with diversity in tech is almost certainly different depending on who you ask. This is going to be a relatively short non-technical post, but hopefully a meaningful one. I’ve been throwing these ideas around in my head for a while and finally want to put them down on paper and get some feedback.
Diversity in tech is a hot-button issue. I think it has a relatively easy fix. Here’s what we could do:
It’s very hard to discriminate based on your biases when you don’t know anything about the person on the other side of the interview. Creating an air-tight anonymized hiring process is somewhat difficult, but we have the technology; it’s within the realm of possibility. For me, this means:
Outside of perhaps the initial reach-out by a recruiter (who likely knows the potential candidate’s personal details already), every call should have voices modulated and disguised.
Remove names, addresses, anything else that might identify a candidate from resumes. May be useful to assign candidates code names like “lucky the lion” or “zupa the zebra” just so they’re easier to discuss for reviewers of the interview.
Strictly prohibit employees from discussing candidates with those outside their interview group (the group of employees doing the interviews for that potential candidate).
Use anonymized in-person interviews. This is where things get a little ridiculous/difficult, but I think it’s possible and worth a try. Here is one way you could achieve an anonymized in-person interview.
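The resume-scrubbing and code-name steps above could be sketched roughly as follows. This is a minimal illustration, not a real system; the field names, the PII list, and the code-name scheme are all assumptions for the sake of the example.

```python
import hashlib
import itertools

# Hypothetical code-name pool ("lucky the lion", "zupa the zebra", ...)
ADJECTIVES = ["lucky", "zupa", "mellow", "brave"]
ANIMALS = ["lion", "zebra", "otter", "heron"]
CODE_NAMES = [f"{a} the {b}" for a, b in itertools.product(ADJECTIVES, ANIMALS)]

# Fields assumed to identify a candidate; a real pipeline would need a
# much more careful (and likely human-reviewed) PII definition.
PII_FIELDS = {"name", "address", "email", "phone", "photo_url"}

def anonymize(resume: dict) -> dict:
    """Return a copy of the resume with PII stripped and a code name attached."""
    scrubbed = {k: v for k, v in resume.items() if k not in PII_FIELDS}
    # Deterministic assignment so the same candidate always maps to the
    # same alias; collisions are possible with a pool this small.
    digest = hashlib.sha256(resume["email"].encode()).hexdigest()
    scrubbed["code_name"] = CODE_NAMES[int(digest, 16) % len(CODE_NAMES)]
    return scrubbed

candidate = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "address": "123 Main St",
    "skills": ["distributed systems", "go"],
}
print(anonymize(candidate))  # only skills and a code name survive
```

The point of the deterministic hash is just that reviewers across interview rounds can refer to the same alias without ever seeing the underlying identity.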
It’s a little unclear what the best course of action is if the hiring process is de-anonymized at some point, but it’s probably reasonable to just start the process over with a completely different set of interviewers. Maybe it’s more reasonable to treat it like chain of custody in police investigations? Who knows.
Maybe the steps above are not the best way to do this (for whatever “best” means to you), but the goal is the same – it’s very hard to use your biases if you don’t have any data (in this case what the person looks or sounds like, or where they come from) to go on. Sidestep entirely the hard problem of training people to not be biased.
Making informed decisions in this area requires research that is notoriously hard to get right, and there just isn’t enough of it. One of the most helpful things companies can do is help fund this research, then observe and internalize its outcomes (even when they disagree with preconceived notions). This is the only way to build rigor around conversations about diversity.
Programs like Affirmative Action and diversity-driven hiring are controversial. At the same time, there are lots of places where you can (sometimes overwhelmingly) feel the good ol' boys club (for whatever “boys” means – maybe it’s women in nursing, or men in finance, fill that in yourself) is in full swing, and you’re being intentionally or unintentionally discriminated against for “culture fit”. Rather than trying to navigate that minefield with your company’s hiring practices, fight diversity problems at the source.
Should demographics of a certain industry/job mirror the population? I don’t know, but I bet doing some research would help. Assuming you’ve done step #2, at this point you should be able to identify where large swaths of the population are becoming disinterested/discouraged from pursuing/entering your industry. This could be a ton of things, with some examples:
The actual issues will differ by company, state, country, and lots of other factors, but it’s up to the company to come up with a plan to fix them at the source. This is much more morally and legally conscionable than just aiming for X number of minorities in some given field. I’ve always found that goal somewhat disingenuous; it’s as if you’re collecting diverse people simply because they’re diverse, and once you have enough you can stop collecting them.
A lot of care must be taken in performing this step to avoid additional discrimination, how to do that is a long nuanced conversation that I’d rather not get into.
At just about every big or mid-size tech company, any stint of employment starts with (mis)conduct and “culture” training. While I find these mostly to be a tedious chore, they’re necessary – just imagine the (however unlikely) case of an employee who simply has never thought critically about how to conduct themselves in a professional setting. Conduct and culture training gives your company a chance to set the record straight on how employees are expected to behave. I pretty firmly believe companies do this mostly to reduce their own legal liabilities (and HR exists to protect the company from humans), but I think setting the baseline is important and beneficial regardless – not everyone’s expectations are the same.
The point here is that a course on “how to interact with coworkers who are different from you” should become part of the mandatory conduct training, if it isn’t already. It’s possible to create a guide to interacting with coworkers that does the minimal amount of moralizing and the maximum amount of giving easy-to-practice tips for examining internal biases and sussing out potential misconduct, in ourselves and others. Basically, what I’m proposing is a simple “don’t be any of these” baseline.
A simple dont-be-any-of-these
Everything after the base level of dont-be-any-of-these
So would just doing these four things instantly resolve diversity in tech (or any other field)? Probably not. I do, however, think they would get us a lot further than the incessant hand-wringing and debates are getting us today.
Re-training people to reduce their biases is hard, and probably impossible/fraught with failure in the end. Keep trying if you want, but it might be easier to just not give people a chance to use their biases, with a combination of anonymization and properly funded research.
Are these ideas outlandish? Ridiculous? Doomed to fail? Email me or join the discussion on Hacker News.