Artificially Intelligent—At the Intersection of Bots, Equity, and Innovation

This article was written in collaboration with my wife Elizabeth; the ideas were generated during the great discussions we had on our evening 5k runs.

We all remember Prime Minister Trudeau’s famous response when asked about his gender-equity promise for filling roles in the cabinet: “because it’s 2015.” This call to action comes quite late in the historical span of modernity, but we’re glad someone at the highest levels of government in a developed nation has strongly proclaimed it. Most of us in Canada, and likely around the world, were pleased to see Trudeau staff his cabinet with a significant number of women leaders in important decision-making roles. And now it’s 2017, a year that has been pivotal to say the least. Last spring, Canada’s Minister of Science, Dr. Kirsty Duncan, announced that universities in Canada are now required to improve their processes for hiring Canada Research Chairs and to ensure those practices and review plans are equitable, diverse, and inclusive. The Government of Canada’s announcement is a call to action to include more women and other underrepresented groups at these levels, and it has essentially come down to an ultimatum: research universities will simply not receive federal funding allocations for these programs unless they take equity, diversity, and inclusion seriously in their recruitment and review processes.

When placed under the spotlight, the situation is a national embarrassment. Currently there is one woman among the Canada Excellence Research Chairs in this country, and for women entrepreneurs the statistics are not much better. Women innovators in the industrial or entrepreneurial sphere are often left hanging without a financial net, largely as a result of a lack of overall support in business environments and major gaps in policy and funding. The good news is that change is happening now, and it’s affecting policies and practices at basic funding and policy levels. Federal and provincial research granting agencies in Canada are actively responding to the call for more equitable and inclusive review practices within the majority of their programs. The message from the current Canadian government is clear: get on board with your EDI policies and practices, or your boat won’t leave the harbour. But there’s always more work to be done.

The Robot Revolution

At this pivotal political moment, with the ongoing necessity of a level playing field for underrepresented groups, we also stand at a crossroads in the theory and praxis of human-machine interaction. The current intersection of human and machine has critical implications for the academy, innovation, and our workplaces. It exposes the gaps and shows what is possible, and we know the tools are here and must be harnessed for change. Even though we live through mini “revolutions” each day as new technologies, platforms, and code stream before our eyes, humanity has been standing at this major intersection for a couple of centuries or more; at the very least, since the advent of non-human technologies that help humans process information and communicate ideas (cave paintings, the book, the typewriter, Herb Simon’s General Problem Solver). The human-AI link we need to critically assess now, however, is how this convergence of human and machine can work for women and underrepresented groups in the academy and entrepreneurial sectors in powerful ways. When it comes to creating more equitable spaces and providing women with the pay they deserve, we need to move beyond gloomy statements like “the robots are taking our jobs.” We must seek to understand how underrepresented and underpaid people can benefit from robots rather than running from them. And we must seek to understand why women in the academy, industry, and other sectors haven’t been using AI tools in dynamic ways all along. (Some are, of course. In one reported case, two women business owners harnessed the power of technology to grow their client and customer base by sending emails from a fictional business partner named “Keith.” Client response to “Keith” seemed to do the trick in getting their customers and backers to take them seriously.)

Implicit Bias

In the psychology of decision making, a bias is usually defined as a tendency to make decisions in a particular way. In many cases, a bias can be helpful and adaptive: we all have a bias to avoid painful situations. In other cases, a bias can lead us to ignore information that would result in a better decision. An implicit bias is a bias that we are unaware of, or the unconscious application of a bias that we are aware of. The construct has been investigated in how people apply stereotypes. For example, if you instinctively cross the street to avoid walking past a person of a different race or ethnic group, you are letting an implicit bias direct your behaviour. If you instinctively tend to doubt that a woman who takes a sick day is really sick, but tend to believe a man in the same situation, you are letting an implicit bias direct your behaviour. Implicit bias has also been shown to affect hiring decisions and teaching evaluations. Grants submitted by women scientists often receive lower scores, and implicit bias is the most likely culprit. Implicit bias is difficult to avoid precisely because it is implicit: the effect occurs without our being aware of it. We can overcome these biases if we become more aware that they are happening. But AI also offers a possible way to overcome them.

An Engine for Equity at Work

AI and fast-evolving technologies can and should be used by women right now. We need to understand how they can be harnessed to create balanced workplaces, generate opportunity in business, and improve how we make decisions that directly affect women’s advancement and recognition in the academy. What promise do AI tools hold for the development of balanced and inclusive forms of governance, review panel practices, opportunities for career advancement and recognition, and funding for start-ups? How can we use the power of these potent and disruptive technologies to improve processes and structures in the academy and elsewhere, to make them more equitable and inclusive of all voices? There’s no denying that the tech space is changing things rapidly, but what is most useful to us now for correcting imbalances and fixing inequitable, crumbling patriarchal structures? We need a map to navigate the intersection of rapid tech development and human-machine interaction, and to use AI effectively to reduce cognitive and unconscious biases in our decision-making; to improve the way we conduct and promote academic research, innovation, and governance for women and underrepresented groups.


Some forward-thinking companies are using this approach now. For example, several startups are using AI to prescreen candidates for possible interviews. In one case, the software (Talent Sonar) structures interviews, extracts candidate qualifications, and removes candidates’ names and gender information from the report. These algorithms are designed to help remove implicit bias in hiring by focusing on a candidate’s attributes and workplace competencies without any reference to gender. Companies relying on these kinds of AI algorithms report a notable increase in hiring women. Artificial intelligence, far from replacing workers, is actually helping to diversify and improve the modern workforce.
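The details of commercial tools like Talent Sonar are proprietary, but the core redaction step can be sketched in a few lines. Everything below is an illustrative assumption, not the actual product’s implementation: the record fields, the tiny pronoun lexicon, and the `redact_candidate` helper are invented for the example.

```python
import re

# Illustrative lexicon of gendered pronouns; a real screening tool
# would use a far more complete list plus named-entity detection.
GENDERED_TERMS = {"he", "she", "him", "her", "his", "hers"}

PATTERN = re.compile(
    r"\b(" + "|".join(GENDERED_TERMS) + r")\b", re.IGNORECASE
)

def redact_candidate(record):
    """Return a copy of a candidate record with identifying fields
    dropped and gendered pronouns masked in the free-text summary."""
    redacted = {k: v for k, v in record.items()
                if k not in ("name", "gender")}
    if "summary" in redacted:
        redacted["summary"] = PATTERN.sub("[REDACTED]", redacted["summary"])
    return redacted

candidate = {
    "name": "Jane Doe",
    "gender": "F",
    "years_experience": 7,
    "summary": "She led a team of five engineers and shipped two products.",
}
print(redact_candidate(candidate))
```

The reviewer then sees only years of experience and a de-gendered summary, which is the point: the attributes survive, the cues that trigger implicit bias do not.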

Academics have seen this change coming. Donna Haraway, in her Cyborg Manifesto, re-conceptualizes modern feminist theory through a radical critique of the relationship between biology, gender, and cybernetics. For Haraway, a focus on the cybernetic, or the artificially intelligent, removes the reliance on gender in changing the way we think about power and how we make decisions about what a person has achieved or is capable of doing. Can we, for example, start to aggressively incorporate AI methods for removing implicit or explicit bias from grant review panels, or, more radically, remove humans from the process entirely? When governing boards cast their votes for who will sit on the next Board of Trustees, or when university review committees adjudicate a female colleague’s tenure file, could this not be done via AI mechanisms, or with an application that eliminates gender and uses keyword recognition to assess the criteria? When we use AI to improve our decision making, we also have the ability to make it more equitable, diverse, and inclusive. We can remove implicit or explicit cognitive biases based on gender or orientation, for example, when we are deciding who will be included in the next prestigious cohort of Canada Research Chairs.
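To make the keyword-recognition idea concrete, here is a deliberately toy sketch: count criterion keywords in an anonymized dossier. The criteria names and keyword lists are invented for illustration; real adjudication would require far more sophisticated text analysis than simple counting.

```python
# Hypothetical review criteria and keywords, invented for this sketch.
CRITERIA = {
    "research": ["publication", "grant", "citation"],
    "teaching": ["course", "mentorship", "supervision"],
}

def score_dossier(text):
    """Return a per-criterion keyword count for a dossier's text."""
    text = text.lower()
    return {criterion: sum(text.count(word) for word in words)
            for criterion, words in CRITERIA.items()}

dossier = ("Secured a major grant, ten publications, and developed "
           "a new graduate course with thesis supervision.")
print(score_dossier(dossier))
```

The scores are computed without the file ever carrying a name or a gender, which is the sense in which such a mechanism could sidestep the biases described above.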

AI can, and will, continue to change the way human work is recognized in progressive ways: recognition of alternative work during parental leaves, improved governance and funding models, construction of equitable budgets and policy, and enhanced support for women entrepreneurs and innovators. AI is genderless. It is non-hierarchical. It has the power to be tossed like a stick of dynamite to disrupt ancient academic structures that inherently favour patriarchal models for advancing up the tenure track. Equalization via AI gives women and underrepresented groups the power to be fully recognized and supported, from the seeds of their innovation (the academy) to the mobilization of those ideas in entrepreneurial spaces. The robots are in fact still working for us; at least, for now.


The Fine Print in the Syllabus

The end of July brings the realization that I’ll be teaching graduate and undergraduate courses again in the fall, and that I need to prepare readings, lectures, and an official course outline for each course. In addition to being distributed to students on the first day of class, these outlines are archived and publicly available on the web. For example, here is the outline for the summer distance course that I am teaching this year. Here is the outline from the last time I taught the Introduction to Cognition course. My graduate courses use a similar format, and here is the outline from last fall’s graduate seminar on cognition. As you can see, there is a lot of information about the course, but also a lot of slightly silly stuff directing students to websites about other policies.

Fine Print

Every year, when I send these course outlines to the department’s undergraduate coordinator, I am informed that I have used the wrong template or have forgotten something.

For example, last year I forgot this:

“Computer-marked multiple-choice tests and/or exams may be subject to submission for similarity review by software that will check for unusual coincidences in answer patterns that may indicate cheating.”

Do the students need to know this up front? Is it not enough that we tell them not to cheat? Can they file an appeal if they were caught cheating and did not know that I was going to check?

Not So Fine Print

Every year, the list of non-academic information that is required gets longer and longer. For example, this year I forgot to include a mental health statement. According to the university, I need to include the following statement in all course outlines:

“If you or someone you know is experiencing emotional /mental distress, there are several resources here at Western to assist you.  Please visit: for more information on these resources and on mental health.”

I think this is a very strange thing to have in a course outline. It has nothing to do with my class. Surely students already know about non-academic services, like mental health services? And why stop there: maybe I should also include a referral to student health services if they or someone they know is experiencing a pain in their foot? Or to the gym if they are experiencing weakness in the upper torso? Or to a cooking class if they are malnourished? We have not yet been asked to issue “trigger warnings,” but I know that’s probably coming…

What is the intent here? I’m not suggesting that students not be informed of all the options available to them in terms of university life. I just wonder how relevant it is to the course outline. I do not think this kind of information belongs there.

Is it about control?

I think much of this is about the university exerting top-down control. Requiring a series of statements for each course outline is a subtle power play. Academics sometimes like to feel immune to the “TPS report” mentality, but we get it, and it gets worse each year.

In 2003, when I began teaching at Western, I created a syllabus, handed it out, taught the class, and turned in the grades. Now, 11 years later, I take information from an official template for the syllabus, I send it for “approval” by the undergraduate office (it might be sent back), I send it to IT to be posted, I teach the course, I approve alternative exam dates at the request of the academic counsellor, and when I turn in the grades, they are checked to make sure they are not too high or too low. Ten years ago we had a chair; now we have a chair plus 2 1/2 associate chairs (I was one of them for 4 years). Ten years ago, departments ran nearly every aspect of their own graduate programs; now a central authority has control over how exams are run, the thesis, and even the specific offer of admission. The letter we write to students offering admission to our graduate program comes from a template, and any changes must be approved. This letter gets longer and more confusing each year. It’s our TPS report. One of many TPS reports.

University, Inc.

The university is a business. I know it, everyone knows it.

Every year, we are informed that we need to meet enrolment targets, to put “bums in seats.” We are required to continually seek external funding and grants, to teach courses that will appeal to students registered in different programs, and to attract more graduate students. We’ve been asked to “sex up” the title of a course to see if more people will sign up. We now need to report on “internationalization” activities. That’s a buzzword, folks. We’re doing buzzword reports.

I’m not naive, I know the pressures. I’m just disappointed. And worried that it’s getting worse each year.


When is it OK to steal?

Cheap Fares

Several recent news items caught my attention over this holiday weekend. In the first, it was reported that Delta Airlines sold many tickets at a ridiculously low fare, and will honour those ticket sales regardless (they may have no choice, because of a federal law requiring truth in advertising for airlines). As word of the cheap fares spread on Twitter and Facebook, people flocked to Delta’s website to buy these cheap tickets, even though they were aware that the pricing was an error. And they were very cheap: “A roundtrip flight between Cincinnati and Minneapolis for February was being sold for just $25.05 and a roundtrip between Cincinnati and Salt Lake City for $48.41. The correct price for both of those fares is more than $400.” In essence, buyers took advantage of an error and got something for much less than market value.

People did not seem to mind, and viewed this as ethically defensible. Consider this sampling of comments after an NPR article about the event: “The published price is the published price. It’s not like the passenger hacked into the system. Come on folks!” or “How did this airline’s mistake turn into us being unethical for buying their tickets? Their normal prices are the only unethical part of this situation.” or “They change their prices minute-to-minute based on their hidden math. If that math turns out to be wrong, I’ll sleep just fine.”

So most people, it seems, would buy the ticket, would not feel bad, and would not feel that they had made an unethical decision. I will come clean as well. I would absolutely buy a pricing error ticket, even if I knew it was an error.

Wells Fargo Kills a Homeowner

The second item was posted on Facebook by a friend of mine. It seems that years ago, a simple typographical error by Wells Fargo bank set off a series of events that ended with Wells Fargo erroneously foreclosing on a home, even after they realized their mistake. They billed the owner for unpaid property tax (it was actually his neighbour who was behind). In order to collect the back taxes, they doubled his mortgage payments; as a result, he fell behind on his mortgage, so they foreclosed and sold his home. Incredibly, Wells Fargo admitted their own error, but rather than correct it, they sold his home anyway! Eventually, the owner literally died in court trying to fight this. Naturally, we are all incredulous. Wells Fargo knew it made a mistake, but foreclosed anyway. They are the epitome of a heartless, cruel corporation, and this is all reminiscent of a bureaucratic dystopia like Brazil.

But at the core, are these events really that different? In each case, a technical error caused the sale to happen. In each case, the items should not have been sold: there was never a real $25 fare to Salt Lake City, and back taxes were never really owed. In both cases, the buyer benefits and the seller suffers. Why are we almost all OK with Delta being screwed out of airline fares but deeply offended at Wells Fargo? It seems like these events should be equivalent: Wells Fargo should not have sold the home, and Delta should not be required to honour those fares.

Clearing the Shelves at Wal-Mart

One might argue that the Wells Fargo case is different because the error was discovered before the sale, and so the events are not equivalent. A third news story might be a better example. A few months ago, a technical error allowed people on food assistance (food stamps) to use their benefit cards without a limit. The error was made by Xerox, which runs the EBT program in many states. In Louisiana, customers at a Wal-Mart realized that the cards had no limit, word spread, and they cleared the shelves. Police were actually called, though Wal-Mart (much like Delta) said they would honour the sales, as no laws were broken. Card users knew this was the result of an error. But unlike the Delta case, in which people are nearly universally accepting of the cheap fares, the benefits card incident was widely condemned. Comments like “No matter how you cut it, it was theft and those who took advantage should be prosecuted!” or “This is absolutely disgusting and everyone who used one of those cards and ‘stole’ stuff should go to jail and forever have their EBT benefits cancelled for life” were common in the news reports.

I’m left with the same questions. Why is it alright to steal a fare from Delta (paying $25 for a $400 fare is stealing $375 worth of air time from them), but absolutely not alright to steal from Wal-Mart? Both are big corporations. Neither will be hurt by this minor glitch, and any tiny bit of lost revenue would be passed on to other consumers anyway. Does this dichotomy exist because the Delta shoppers were online whereas the Wal-Mart shoppers were loading up in person? Does it exist because the Delta shoppers were from all walks of life, but the Wal-Mart shoppers were “welfare types”? Is it more OK for a savvy traveler to steal from Delta than for a welfare recipient to steal from Wal-Mart? Would we condemn the Delta ticket buyers if it were revealed that all of them were on public assistance?

I do not have a quick answer, though I feel like a properly controlled experiment might be in order.

Thanks for reading, and thoughtful comments are welcome, of course.