By Isabel Woodford, The Guardian
Artificial intelligence programmers are developing bots that can identify digital bullying and sexual harassment.
Known as “#MeTooBots”, after the high-profile movement that grew out of allegations against the Hollywood producer Harvey Weinstein, the bots can monitor and flag communications between colleagues and are being introduced by companies around the world.
Bot-makers say it is not easy to teach computers what harassment looks like, with its linguistic subtleties and grey lines.
Jay Leib, the chief executive of the Chicago-based AI firm NexLP, said: “I wasn’t aware of all the forms of harassment. I thought it was just talking dirty. It comes in so many different ways. It might be 15 messages … it could be racy photos.”
NexLP’s AI platform is used by more than 50 corporate clients, including law firms in London.
The industry is a potentially fertile ground for the bots to examine: a third of female lawyers in Britain report having experienced sexual harassment.
The bot uses an algorithm trained to identify potential bullying, including sexual harassment, in company documents, emails and chat. Data is analysed for various indicators that determine how likely it is to be a problem, with anything the AI reads as being potentially problematic then sent to a lawyer or HR manager to investigate.
Exactly what indicators are deemed red flags remains a company secret, but Leib said the bot looked for anomalies in the language, frequency or timing of communication patterns across weeks, while constantly learning how to spot harassment.
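NexLP has not published its methodology, but the process described above follows a familiar shape: score each message against a set of content, timing and frequency signals, then route anything above a threshold to a human reviewer. The Python sketch below illustrates only that general shape; the trigger phrases, weights and threshold are invented placeholders, not the firm’s actual indicators.

```python
# Hypothetical sketch of the flag-and-escalate flow described above.
# NexLP's real indicators and model are not public: the trigger phrases,
# weights and threshold here are invented placeholders for illustration.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Message:
    sender: str
    recipient: str
    sent_at: datetime
    text: str

SUSPECT_PHRASES = {"keep this between us", "send me a photo"}  # placeholders

def indicator_score(msg: Message, history: list[Message]) -> float:
    """Combine simple content, timing and frequency signals into one score."""
    score = 0.0
    lowered = msg.text.lower()
    # Content signal: the message contains a trigger phrase.
    score += sum(0.4 for phrase in SUSPECT_PHRASES if phrase in lowered)
    # Timing signal: late-night messages are weighted up slightly.
    if msg.sent_at.hour >= 22 or msg.sent_at.hour < 6:
        score += 0.2
    # Frequency signal: an unusually high volume to one recipient recently.
    to_same_person = [m for m in history
                      if m.sender == msg.sender and m.recipient == msg.recipient]
    if len(to_same_person) > 15:
        score += 0.3
    return score

def review_queue(messages: list[Message], threshold: float = 0.5) -> list[Message]:
    """Route anything scoring above the threshold to a human reviewer."""
    flagged, history = [], []
    for msg in messages:
        if indicator_score(msg, history) >= threshold:
            flagged.append(msg)  # in practice: escalate to a lawyer or HR manager
        history.append(msg)
    return flagged
```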
Leib believes other industries could also benefit. “There’s a lot of interest from clients across sectors such as financial services, pharmaceuticals,” he said.
Prof Brian Subirana, a lecturer in AI at Harvard and MIT, said the idea of using AI to root out harassment was promising though the bots’ capabilities were limited.
“There’s a type of harassment that is very subtle and very hard to pick up. We have these training courses [about harassment] at Harvard, and it requires the type of understanding that AI is not yet capable of,” he said.
The underlying issue is that AI can only reliably conduct basic story analysis, meaning it is taught to look for specific triggers. It cannot go beyond those parameters and cannot pick up on broader cultural or unique interpersonal dynamics. This means the bots risk leaving gaps or proving oversensitive.
“We don’t know when AI will break the ‘story understanding’ frontier,” Subirana said.
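To make that gap concrete, consider a purely trigger-based filter, which is roughly what “looking for specific triggers” amounts to. The snippet below uses invented example phrases to show how such a filter can both miss subtle coercion and flag a benign message, because it has no understanding of context or intent.

```python
# A deliberately naive, trigger-only filter (illustrative phrases only).
TRIGGERS = {"send me photos", "you looked hot"}

def trigger_match(text: str) -> bool:
    """Flag a message if it contains any trigger phrase."""
    lowered = text.lower()
    return any(trigger in lowered for trigger in TRIGGERS)

# A subtle, coercive message contains no trigger phrase, so it slips through.
print(trigger_match("Remember, your promotion review is next week. Dinner at mine?"))  # False

# A message quoting the harassment policy is flagged, even though it is benign.
print(trigger_match("HR says phrases like 'you looked hot' must be reported."))  # True
```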
He added that flagging employees’ correspondence can create a climate of distrust, and offenders may learn how to trick the software. Alternatively, they could switch to channels of communication that the bots do not monitor.
A further concern is protecting the confidentiality of the data that is collected. Subirana said if the software made a mistake and the data was leaked, internal communications between employees could be seen by rival companies.
“There are still hurdles to jump before AI can do what we think it can,” Subirana said. “I haven’t heard of any HR who has said it is useful for this yet.”
He is not the only sceptic. Sam Smethers, the chief executive of the Fawcett Society, a women’s rights NGO, pointed out there could be unforeseen consequences from policing staff’s communications.
“We would want to look carefully at how the technology is being developed, who is behind it, and whether the approach taken is informed by a workplace culture that is seeking to prevent harassment and promote equality, or whether it is in fact just another way to control their employees,” she said.
“It has implications for the privacy of staff and could be abused”, she added. She suggested educating staff about appropriate attitudes and behaviours would be more effective in protecting potential victims.
Despite the concerns, Subirana believes #MeTooBots could offer indirect benefits. “The use case that I could imagine is a training one. It could provide a database [of problematic messages],” he said.
He added that believing communications were being monitored could make people less prone to harass colleagues, a phenomenon known as the Hawthorne effect. “There is a preventive element here,” he said.
Similar technology is being used to retrospectively scour large volumes of digital communications to fight harassment claims.
One law firm using the technology is Morgan Lewis, which specialises in US labour and employment law. But instead of monitoring employees, the AI is used to analyse clients’ past communications.
“We’ve used this in dozens of cases,” said Tess Blair, a partner at the firm. She said the tech usually helped build a case rather than providing the smoking gun.
Another AI startup, Spot, has created a chatbot that allows employees to anonymously report sexual harassment allegations. It is trained to give advice and asks sensitive questions to further an investigation into the alleged harassment, which may have played out digitally or physically. Spot aims to account for gaps in HR teams’ abilities to deal with such issues sensitively, while also preserving anonymity.
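Spot has not published its question script, but the reporting flow it describes – a bot that walks an employee through structured questions and records the answers without identifying the reporter – could look something like the hypothetical sketch below. The questions and the report format are assumptions made for illustration, not Spot’s actual product.

```python
# Hypothetical sketch of an anonymous harassment-reporting chatbot flow.
# The questions and the report format are illustrative assumptions only.
import json
import uuid
from datetime import datetime, timezone

QUESTIONS = [
    "What happened, in your own words?",
    "When and where did it take place (online or in person)?",
    "Has it happened before, and if so how often?",
    "Is there anything you would like to happen next?",
]

def take_report(ask=input) -> dict:
    """Walk through the questions and return an anonymous report."""
    answers = [ask(question + " ") for question in QUESTIONS]
    return {
        # A random ID lets HR follow up on the case without knowing who filed it.
        "report_id": str(uuid.uuid4()),
        "submitted_at": datetime.now(timezone.utc).isoformat(),
        "answers": dict(zip(QUESTIONS, answers)),
    }

if __name__ == "__main__":
    report = take_report()
    print(json.dumps(report, indent=2))
```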
These variants of AI could work together to detect workplace harassment more effectively, Blair said. Tools such as Spot can be deployed before lawyers or HR staff are involved. If a full investigation proceeds, technology like Morgan Lewis’s can analyse their clients’ digital communications to build a case.
But however good the #MeTooBots become, Blair sees their role as assistants to humans rather than an all-seeing judge, jury and executioner.
“Computers are not value-judging, they are saying ‘this doesn’t fit the pattern’,” she said. “It is then up to us to interpret.”